RiskWiki
bibi1909_mw1
http://www.bishopphillips.com/riskwiki/Main_Page
MediaWiki 1.30.1
first-letter
Media
Special
Talk
User
User talk
Itrontest
Itrontest talk
File
File talk
MediaWiki
MediaWiki talk
Template
Template talk
Help
Help talk
Category
Category talk
How do I get a copy of BPC RiskManager V6.2.5?
0
299
407
2010-08-04T14:54:17Z
Bishopj
1
wikitext
text/x-wiki
The BPC RiskManager V6.2.5 (Enrima Edition) Enterprise and Single User software is available in downloadable form from the Bishop Phillips Consulting web site. The software comes with a 60 day evaluation license (which means you can use it as if you own it for 60 days) prior to purchase. Online and phone support is provided to evaluation clients as if they were paying clients.
You must register a software enquiry with BPC using this form:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php http://www.bishopphillips.com/australia/BPCServiceEnquiry.php]
Note: The enquiry form can get a bit emotional when the moon is out, so if it tells you there are errors when there aren't any (i.e. you have completed the "required" fields), just put something in the general comments box and resubmit. That seems to make it happy! The form was written for PHP 5, but the server it is on currently hosts PHP 4 (pending an upgrade), so although it was scaled back for the lower grade environment it still wants to be a PHP 5 program, and intermittently rebels.
Within 24 hours of receipt you will be contacted by email by Bishop Phillips Consulting. If you would like an evaluation copy, they will arrange it for you, and provide you with pricing and the contact details for the Bishop Phillips Consulting office closest to your location.
[[Category:RiskManager FAQ]]
<noinclude>{{BackLinks}}
</noinclude>
0a7dc86a8d2c10b03e1e92f3dc71c11803fc2dc5
Would it be possible to get a copy of the BPC RiskManager V6 installation guide?
0
313
439
2010-08-04T14:56:10Z
Bishopj
1
wikitext
text/x-wiki
Yes. Obviously, you get a copy with the install set for the BPC RiskManager, but you can also get a copy before installing.
The best approach to installation is to let the auto installer do it for you, and then a manual is not really required.
There is an installation instructions manual in PDF and another, structured, version on this riskwiki. The manual covers installation of all components and includes discussion of the architectural considerations. The documents are extremely detailed and assume very little knowledge of the Windows environment, so they even cover installing some 'not always installed' Windows components such as MS SQL Server (2000, 2005, 2008 - with notes for Express), the MS IIS server, and the MS SMTP server - which are Microsoft components, rather than BPC components. So essentially you can install from a raw MS operating system installation and just follow the installation guide. It covers installation on W2000, W2003, W2008, W2008-64, XP, Vista-SP1, and Windows 7.
The best manual to use for installation is the riskwiki - as we update that first.
[[RM625ENT Installation Instructions]]
[[Category:RiskManager FAQ]]
<noinclude>{{BackLinks}}
</noinclude>
925da5dac920f31e89b1210c010dc79414be8e5c
Is there a feature listing for the BPC RiskManager windows client and the browser client?
0
314
441
2010-08-04T14:57:10Z
Bishopj
1
wikitext
text/x-wiki
=Background=
You are looking at the possibility of using a mixed client environment based on user-specific needs and locations.
=Answer=
The BPC RiskManager browser client and the non-browser client are BOTH thick clients, while the dynamically generated BPC SurveyManager pages are pure HTML "clients". The browser based BPC RiskManager client is an MS Internet Explorer browser plug-in - like a Flash (tm) media player.
With respect to the two BPC RiskManager clients, both are EXACTLY the same application, just with a different wrapper. One is like a Flash plug-in for a browser, the other is a standard MS windows style executable - but below the wrapper they are the same program, they look the same and they behave the same.
To get different behaviours for different staff, you configure the rights of the staff, or the database to which the application talks. Data entry or enquiry-only staff simply do not have access to all capabilities (they can't see them) or, on certain screens, are in 'read-only' mode.
Many of your corporate staff known to the system are not going to be users of the BPC RiskManager primary client at all. These, typically, will be completing survey screens, compliance checklists, responding to or actioning emails sent by the system, etc. In these cases the BPC SurveyManager screens will be their primary interface - and those are pure web based HTML and JavaScript. These screens are generated dynamically through decisions you make in the RiskManager client concerning what a survey (e.g. a compliance checklist) contains, and who gets which survey, with what contents and when. There is no fixed layout to these, as everything is dynamically constructed by the SurveyManager on a just-in-time basis - right before a page is displayed. Various wizards allow you to cause the survey framework to be generated from within the RiskManager client and determine the look of the web pages yourself.
The full feature list is huge, but a short list of the features available in the browser and non browser clients is available [[BPC RiskManager V6 Enterprise (Enrima Edition)|here]]
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
01f4714de60f1c98e710b76dedb4f539c92d600a
When are multiple BPC RIskManager server licenses required?
0
300
409
2010-08-04T14:58:16Z
Bishopj
1
wikitext
text/x-wiki
=Background=
We will be acquiring an Enterprise license. We are looking to have RM implemented across a group of companies. They will all be using the same instance with the same fields and definitions, as the subject matter is the same, but they will each be on different servers, with different IT teams managing them. Can we use a single server license or will we require multiple server licenses?
=Answer=
Yes, and no. Firstly, the Enterprise license is not the best license for this; a Group license is (assuming all the companies are related entities and you expect them all to adopt the BPC RiskManager system). Enterprise license counting is on production servers and legal entities (so you can have as many test and training servers as you like). Each system can have as many databases as you like (we don't license by the database).
The principal difference between the Group and Enterprise licensing is that the total license fee is capped, on the condition that the entities are all related parties (i.e. subsidiaries, or a shared service client group).
At the Enterprise and Group license level, we do not license by client - so you can have as many clients (users) as you like, unless you have negotiated a special restricted enterprise license (which sometimes happens with Government clients). This discussion therefore focuses on production servers. Let's consider a couple of scenarios assuming an Enterprise licensing model:
* One application server, one legal entity, one database = 1 license.
* One application server, one legal entity, many databases = 1 license.
* One application server, multiple legal entities, one database = 1 license.
* One application server, multiple legal entities, multiple databases, but purpose based rather than entity based = 1 license.
* One application server, multiple legal entities, multiple databases, but entity based rather than purpose based = 1 license primary + multiple add-on licenses
* Multiple application servers, one legal entity, one database = 1 license primary + multiple add-on licenses
In all scenarios:
* Multiple web servers (e.g. a web farm) hosting the BPC SurveyManager component = 1 license (restricted to internal corporate use)
* Servicing the web generally with surveys unrelated to your BPC RiskManager installation = contact us for an agreed licensing arrangement.
Essentially, the Enterprise License is not a single server/single database license, but a server and company based license with additional servers/companies after the first one being heavily discounted via the addon licenses.
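The server-and-entity counting described in the scenarios above can be sketched as a toy function. This is purely illustrative - the function name and the exact counting rules are assumptions drawn from the bullet list, not part of any license terms, and actual terms are always agreed with BPC:

```python
def enterprise_licenses(app_servers: int, legal_entities: int,
                        entity_based_databases: bool) -> tuple:
    """Toy sketch of Enterprise license counting as described above.

    Returns (primary_licenses, addon_licenses). Test/training servers,
    database counts and SurveyManager web farms do not affect the count.
    Illustrative only.
    """
    primary = 1                                # one primary license always applies
    addons = 0
    # Extra production application servers each need a discounted add-on license.
    addons += max(app_servers - 1, 0)
    # Entity-based (rather than purpose-based) databases for multiple legal
    # entities also attract add-on licenses, one per additional entity.
    if entity_based_databases:
        addons += max(legal_entities - 1, 0)
    return primary, addons

# One server, one entity, purpose-based databases -> (1, 0)
# One server, three entities, entity-based databases -> (1, 2)
```

Note that database count never appears in the calculation, matching the statement above that licensing is never by the database.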
The Group License is a multi-server / multi-company license offered at a fixed fee for the group. Group licenses are offered on a per parent organisation basis once we understand your expected use scenarios and expected organisation structure. They are always cheaper than the equivalent Enterprise licensing scenario. In either case, there is a per installation charge for maintenance, levied annually in advance, which includes bug fixing, help desk and access to regular upgrades. Optionally included in this is a reference database vault - under which we retain copies of each of your multiple databases, fully configured (but possibly without data), so that location specific upgrade scripts can be generated and data recovery is easier.
The Group License fee allows any company within a group to access and use the software in whatever configuration is seen fit by the group (and there is a large range of possible structures, as you will see in the installation guide). With any large group, one database configuration does not fit all and, while there are some common threads, there will always be configuration differences between databases (company structures, people, categories, reference links, etc.).
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
1bd2b683a960e75c3b67224807d79aaff695b024
Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?
0
315
443
2010-08-04T14:59:11Z
Bishopj
1
wikitext
text/x-wiki
=Licensing Philosophy=
There is an element of 'fair use' in the licensing models, and a little variation across countries to satisfy the particular market expectations of each country. Fee structures are based on your location.
Essentially we have to take into account the purpose of the installation - as this relates to maintenance and support. Your local office will discuss and agree the terms of the license arrangement for these more unusual configurations. We do not generally count training or testing installs in the license. We rely on your honesty, integrity and sense of fair play - recognising that we all have to be able to stay in business. In the Enterprise and Group licenses, we also allow additional desktop (single user) copies to be installed as long as the use is for the purposes of the licensing client's business.
=Licensing models=
Subject to local variations, the basic models are:
# Single User. Licensed by user - but we usually allow a few people to connect to the desktop without breaching the license - except that in this case you then have to set the web edition flag, and some of the simplicity of pure single user mode access is then lost.
# Small work group - By seat - usually restricted to 10 to 15 users.
# Enterprise - Unlimited users. - Licensed by production server and legal entities (includes test and training server licenses without extra charge), with fair use qualifications. Due to the large number of ways the system can be set up, there has to be an element of fair use here. For example, we allow for the survey engine to be on a separate server / server farm without charging any extra licenses, but if the application server is on multiple servers that would require additional server licenses (at a heavy discount). (Also: Read answer to this question: [[When are multiple BPC RIskManager server licenses required?]])
# Group - Unlimited users. - Licensed by group of entities. Unlimited production servers. Fee set per client. (Also: Read answer to this question: [[When are multiple BPC RIskManager server licenses required?]])
Some example scenarios might help clarify the licensing expectations for BPC RiskManager V6 in some more complicated hosting scenarios:
# There is one physical application server (i.e. essentially one motherboard - any number of CPUs) but many databases: this is a single server license.
# Multiple application servers (i.e. multiple blades or distinct servers) (and one or more shared databases). One server license per computer - but discounted depending on the nature of the use:
## If the application server is, in fact, on multiple servers and the entity is a group with distinct but connected companies, and each application server is set up as if it were a separate installation dedicated to separate entities in separately configured databases, then fair use (in our view) would say that these are separate licenses - although we would again significantly discount such an arrangement.
## If the hosting centre was hosting for multiple disconnected businesses (e.g. Government Departments, unrelated corporations, etc.) - again fair use would dictate that these were separate installs with separate licenses, irrespective of the number of application servers involved - and a separate license would be required, but again discounted, IRRESPECTIVE of whether there was only one massive physical server with many separate databases or many physical servers with only one or a few databases associated with each server.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
edafc045d7141761132875d0778fe0ae9aabe25b
Does your license include the cost of MS SQL Server ?
0
301
411
2010-08-04T14:59:49Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
No. You will need a valid MS SQL Server license (or use SQL Express 2005 / 2008 - although this is not recommended for larger multi-user installs). We place so little demand on the database server that it is common for the physical database server to be shared, although often dedicated instances are created for RiskManager.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
47c534bfcee75c0c99ed6ccce8c4efb580405d7d
I just purchased BPC RiskManager. Will you be sending the install disks, and when?
0
316
445
2010-08-04T15:00:16Z
Bishopj
1
wikitext
text/x-wiki
We will be sending you a download link and then we will connect by phone to talk you through the install. It isn’t complicated.
Depending on your location and preference we can also send out a consultant to do the installation with you or for you, but there is normally an additional charge for this.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
5181561374ffc226d992ebe89874cfdfcb4f0c1c
What will need to be arranged prior to installing BPC RiskManager?
0
302
413
2010-08-04T15:01:00Z
Bishopj
1
wikitext
text/x-wiki
Things we will need you to have arranged prior to the install:
# Fully patched Windows 2000, 2003 or 2008 server with IIS 6+ for the application server hosting (or if undertaking a desktop install - Windows 2000, Windows XP, Windows Vista SP2, Windows 7 PC).
# If using SSL/HTTPS as the communication protocol, an SSL certificate for the Windows IIS 6/7 server for the domain you will be using for the RiskManager application server. We do not perform certificate validation – so an internal certificate should be fine, but note that we use real certificates issued by Verisign and Thawte on our sites, so we have only tested in an environment where the certificates are “real”. I cannot think of any reason why an internal certificate should be a problem, however. Certificates are ONLY required for HTTPS, not for HTTP or raw TCP/IP communications protocols (which is the normal way the application is used).
# Fully patched SQL Server 2000, 2005 or 2008 database engine (installed in MIXED MODE – not just Windows Authentication Mode) with Enterprise Manager (SQL 2000) or Database Studio (2005/2008) available (unless you want to fluff around in SQL command line calls).
# Administration access to the Windows servers and the SQL Server (you need to be able to create and restore a database, create an SQL user account, and assign roles to that account), or if installing a desktop version, you will need administration access to the desktop PC.
# A test client computer with admin rights to that computer – preferably Win XP (latest patches – of course) that has network connectivity to the Windows application server.
# Simple TCP/IP network connectivity between all these components – let's get it working in the simple scenario before we complicate it all with proxy servers. If you plan on using the HTTP or HTTPS communication protocols between the client and the application server, the Windows IIS server rights settings are a little fiddly and the error messages are less than helpful when they are wrongly set, so it would be better to know we have these right before we introduce another layer of network communication problems like proxy servers.
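Before the install call, the basic TCP/IP connectivity above can be checked with a few lines of Python. The host names and ports below are placeholders, not part of the product - substitute your own application server address and, for SQL Server, its configured port (1433 is merely the SQL Server default):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical examples - replace with your own servers:
#   can_connect("appserver.example.local", 80)    # HTTP to the application server
#   can_connect("appserver.example.local", 443)   # HTTPS, if using SSL
#   can_connect("dbserver.example.local", 1433)   # default SQL Server port
```

A `False` result here means the problem is plain network reachability (firewall, DNS, routing), and can be fixed before any IIS rights or proxy configuration enters the picture.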
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
bc43f33f67b4c5bd8a9bb32e914aa42135c4cdfd
Does the RiskManager client application work with FireFox browsers?
0
303
415
2010-08-04T15:14:03Z
Bishopj
1
wikitext
text/x-wiki
=Background=
Getting an error message when attempting to log in on a laptop (with Mozilla Firefox):
"Access Violation @ address 09BE4F70 in Module "Riskma~3.ocx". Read of address 000000"
=Answer=
The BPC RiskManager Desktop client will coexist with all versions of Firefox browsers; however, the embedded web pages held on some panels will not display unless IE is also present on the desktop. Clickable web page links (which are also available on every panel with an embedded web page display window) will correctly launch whichever browser is your default browser.
With respect to the BPC RiskManager ActiveX plugin client, Firefox versions after 2.5 do not work well with RiskManager. The message displayed in the background section relates to this issue. With the release of Firefox 3 the responsible committee deleted support for ActiveX plug-ins, so RiskManager will not load in any version of Firefox above 2.5. Not only did they remove the libraries used, but they deliberately restructured the interfacing architecture so that it was virtually impossible to write a support library through which an ActiveX plug-in could be supported.
The plugin support model adopted in place of the ActiveX model is, quite simply, a bug-ridden mess (at least with respect to Firefox 3), and given the tiny percentage of Firefox browsers among our corporate client base, it is impossible for us to justify a separate client code base for 1% (based on our web site hits) of the potential client base. When the Firefox team wish to make the product relevant to business users again, we will gladly support it once more. (Ok - you get it: I am annoyed with the Firefox team about this! I feel better now.)
Clients who cannot use Internet Explorer or one of the other ActiveX compatible browsers must use the Windows executable client instead to access the application server - and you will still be able to use your favourite non-IE browser to see referenced web content from the client. This is explained in the installation and help documentation.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
83e3347656c97206132eb04099499480661a466e
In what programming language is BPC RiskManager written?
0
304
417
2010-08-04T15:16:15Z
Bishopj
1
wikitext
text/x-wiki
=In What Language Is BPC RiskManager Programmed=
BPC RiskManager consists of more than 100,000 lines of code written in Delphi 7 (Object Pascal) from CodeGear (formerly Borland, now Embarcadero), compiled into Win32 binary executables, plus TSQL/SQLPlus. Some smaller components are written in, or supported by libraries in, JavaScript, PaxScript, and ReportBuilder script.
The Delphi environment was originally developed by Borland, starting with Turbo Pascal in the 1980s. It has been one of the leading development environments and languages for almost 20 years and has one of the largest and most skilled development communities in the world. Delphi 7 was released in 2002 and has proven to be perhaps the most resilient and bullet proof development environment of the last decade.
=Why Object Pascal?=
From our perspective the most apparent reason is that by default Pascal imposes rigid data typing and size checking. In Pascal you have to turn these off if you want to misbehave, while in C the reverse is the case. Buffer overflow errors such as those that have plagued Microsoft operating systems (written in C and Basic) and been the cause of many security holes are not possible in Pascal - because while it still operates at the hardware level of the computer, it dynamically checks pointer references and array boundaries (raising exceptions when indexes flow past them) and maintains reference counts of allocated objects so that they can be released when no other objects point at them.
This safety net means that it is slightly slower in array and memory release operations than C, but identical in procedure call, pointer, floating point and stack operation speeds. So it delivers a higher level of reliability than C while compromising only slightly on speed, yet remains much faster than .Net languages or Java - which are "interpreted" (although both claim to be compiled, the reality is that they are compiled as a set of runtime library calls) and operate inside a virtual machine that provides a managed pseudo machine in which the applications work.
A further advantage is that, because no run time engine is needed, any Pascal library will work on all the target machines, with any other machine library, regardless of the compiler version. You do not have to worry about framework or pseudo machine engine versions - the idea is simply irrelevant.
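The run-time range checking described above is easy to demonstrate by analogy in another bounds-checked language (Python here, purely for illustration - RiskManager itself is written in Delphi, where the corresponding failure raises an ERangeError when range checking is enabled):

```python
# A bounds-checked language raises an exception on an out-of-range read,
# rather than silently reading adjacent memory as unchecked C would.
buf = [10, 20, 30]

print(buf[1])          # in-range access works as expected: prints 20
try:
    _ = buf[99]        # out-of-range: raises instead of corrupting memory
except IndexError as exc:
    print("caught:", exc)
```

The cost of the check is a comparison per access, which is the "slightly slower in array operations" trade-off mentioned above; the benefit is that a whole class of buffer-overflow security holes cannot occur.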
=Does It Matter?=
Not really.
You won't notice the language in which we develop, any more than it is apparent what language Microsoft Word is written in. Think of RiskManager as just another MS Office application and you will be about right - it looks, feels and behaves the same way.
=I am used to applications in the .Net and Java Languages. How is this different?=
It's an awful lot simpler. No run time environment, no library version problems, no interactions with other applications sharing the run time environment. Just take our application out of the box and put it on your computer. The Windows client does not even need to be installed! You can literally copy it onto a desktop computer and just run it.
Actually, you are more used to applications written in Win32 languages, like Delphi, C, Visual Basic and C++. Think MS Office, Outlook, or any Windows operating system (XP, Vista, Windows 2000, Windows 2003, etc.)! All of these are Win32 applications, written in native compiled languages - not run-time languages.
Although many people refer to .Net as a language, strictly speaking, .Net is not a language as much as a runtime environment.
The .Net languages include C#, VisualBasic for .Net, Eiffel (for .Net) and Delphi. I.e. Delphi is also available in a .Net form that is not essentially different from its Win32 cousin, except that some things we can do in Delphi 7 (for Win32) we cannot yet do in any .Net language.
Your .Net and Java runtime environments, in turn, run on Win32 platforms using Win32 libraries to talk to the hardware. Delphi cuts out the unnecessary, resource hogging middle man.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
4adc0c28463916b421e2034ffb9c9a88460d3b72
Does the RiskManager plug-in itself have a certificate like a java applet does?
0
317
447
2010-08-04T15:17:14Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
Yes. The plug-in is signed with a VeriSign code certificate – so you only have to allow installation of signed ActiveX controls.
Remember, the browser plugin is only one of a number of clients available for RiskManager. You do not have to use the browser plugin if you prefer another connection method.
Instructions for configuring IE for browser plugins are available here: [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7]]
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
f08336967dfd90cb7adfcc9e47a7717900ed0b40
For support, what type of support is available (i.e.: email, phone, onsite, etc...)?
0
318
449
2010-08-04T15:18:05Z
Bishopj
1
wikitext
text/x-wiki
All of the above. Generally email to us is the fastest - because it will be addressed somewhere in the world very quickly, and the issues usually involve some kind of exchange of information; where appropriate (or where you request it) we will call you. Sometimes things just have to be done face-to-face, so that is done in those cases. Most things can be done remotely, however.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
275db26f386a66ce995268b8ae8b4b60355fbcc1
What is the best way to get support?
0
305
419
2010-08-04T15:18:38Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
For IT technical issues and software problems, support is provided 24 hours a day, 7 days per week.
Email is the preferred method of communication as that ensures the correct person addresses your issue in the first instance, and you won't have to wait to speak to anyone. We will generally call you in response to an email (if requested, or where deemed appropriate), often within minutes of receipt of the email. There are local Canada, US and Australian numbers, plus international Skype numbers.
International IT and technical support is handled from Australia, directly by the software programming team. The US and Australian support numbers route through to Australia, while the Canada support number routes through to Canada.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
c646bf3902aa0b0a3d457f50e5ad3b667de28bfd
How do I arrange installation support and what is the timeline?
0
306
421
2010-08-04T15:30:52Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
Immediate. If you have a Windows 2003 (or higher) server set up with IIS 6 (or higher) and MS SQL Server 2000/2005/2008, and the administration passwords necessary for install on those environments (local machine administrator password and the SQL Server SA password), you can download the software from our website, install, and be live inside two hours (we can do it in 15-30 minutes).
If you are installing on a single user, or network server with either a local database server or a remote database server, the installation and upgrade is fully automated and will take about 15 minutes. Separate client components have their own managed installers and can be run separately on the target machines (even from a central network share). Client installation to application launch takes around 3 minutes.
Nevertheless, we like to talk you through what is happening and any decision points where you could enable non standard set-ups that might be more suited to your needs, and introduce you to the significant number of hidden tools and features that are provided against the time they are needed. One of the most important of these is the security settings - which are defaulted off in a fresh install so that the initial user can be created automatically during the first client connection.
To get this support, just send us an email the day before to confirm an install time and we will call you and talk you through the installation over the phone.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
561bf7fa1cf7b73f29165e302d54d73728c49d09
What support packages are available and at what cost?
0
307
423
2010-08-04T16:07:06Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
The BPC RiskManager Enterprise License fee includes 24 hours of telephone technical support and unlimited general email support during the first 6 months after first installation. After the first 6 months, technical support is available under the maintenance subscription agreement. The BPC RiskManager Single User License fee includes unlimited email support and 2 hours of telephone technical support during the first 3 months.
The BPC RiskManager maintenance subscription includes support covering software upgrades and technical assistance. The majority of the subscription is dedicated to developing the software upgrades in the Beta and Production release cycles. Clients with current subscriptions receive technical support of up to 36 hours of contact per year on production versions, plus an additional 2 hours per beta release installed (and unlimited additional support during a feature development phase if you are part of the Beta testing stream for a specific feature). Depending on the issues encountered and the context of the direct support, above that level we will probably approach you for some additional fees.
Training, configuration, report writing, survey writing, customisation, database conversion (where required), risk advisory, and similar activities are separately negotiated and quoted outside of the maintenance fee. Your quote should cover rates for the additional items (Canada Office). The maintenance fee is for the purposes of installing upgrades and funding the continued development of the software.
All support packages include priority scheduling of requested enhancements. Only current subscribers may download, install or use RiskManager software upgrades. Where you have specific customisations (not configurations), you can either register the request with us for inclusion in a future release - under which arrangement its inclusion and timing of release is at our discretion, but significant priority is given to requests from current subscribers - or you can specifically contract for the modification, in which case you can have certainty over timing and inclusion. In either case the universal condition is that ALL customisations are (at our sole discretion) included in the main code base that all clients receive as part of the upgrade cycle. This is to ensure that we only have one code base to manage and that nobody gets 'orphaned' because of their modifications.
"On demand" development of requested end-user reports is NOT part of the maintenance subscription, unless released as part of the Beta or Production software release cycles. Development of custom reports as a separate individual client release is a separately contracted service. In addition to those reports shipped as part of the main product, we also support the user-group open source efforts in report development and system customisation scripting, and we occasionally release additional report templates for public use through the forum.
Charges for maintenance support subscriptions vary depending on the installation, license and components used but are available at their current settings from the product order page.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
0ea7b8cb780cd3c229be984203c4d6098dbd1bb9
How do I get custom features added, or request new features for BPC RiskManager?
0
319
451
2010-08-04T16:24:05Z
Bishopj
1
wikitext
text/x-wiki
=Can I Request New Features & What Does It Cost?=
Yes - you can, and are encouraged to, request enhancements. We don't guarantee to embrace every suggestion, but we will certainly consider it. If you are happy to leave it to us to schedule them and slot them into the existing enhancements schedule, they will be included as part of your annual maintenance subscription. I.e. - no additional charge.
=What if I need the feature quickly, or I don't agree with your decision?=
If you need enhancements faster, or want to be sure they are included you can contract the development directly.
The only condition attached to contracted enhancements is that we reserve the exclusive right to decide to include the enhancement in the general code base that all clients enjoy. To date, 100% of contracted code enhancements have been included in the common code base. This is to your advantage as it ensures your application is not orphaned from the development stream.
See also: [[What support packages are available and at what cost?]]
=How do I request Features?=
The best method is to add the request directly to our team web site. There is a list of all known issues and enhancements planned that is maintained on our [http://team.bishopphillips.com/ http://team.bishopphillips.com/] website to which we and clients add items and track progress on development and release.
You can also simply email the request to any BPC staff member, though preferably to your allocated Bishop Phillips account manager. Phoning the request in is the least reliable strategy, as there is every chance it will disappear into the ether.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
daf429f8614b4dcc43ce5361f50a760ed733b98e
Is there a User Group Forum?
0
308
425
2010-08-04T16:25:13Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
Yes.
Forum: http://bpc.bishopphillips.com/forum/
Most users favour other forms of communication: emailing or phoning us is just so easy that users tend to do that rather than remember the forum details, and the software does not tend to have a lot of bugs. Where bugs do occur we fix them quickly - often within 24 hours - and then re-release. Initial installation is always handled by us, with remote support as you work through the install, so there are no install questions to resolve on a forum. Upgrades are usually just a matter of running the auto-installer or (if you prefer manual methods) copying a couple of files and running a script - or, for many clients, simply backing up their database and giving it to us to convert.
Also:
# Wiki: Some months ago we launched a public wiki (riskwiki.bishopphillips.com) to which we are progressively transferring our large internal library of management consulting and governance "technologies".
# TeamServices: Issues, enhancements and bugs are tracked through the team.bishopphillips.com site. Registered clients have access to this site, but to date most users simply send us an email and we record the issue on the team site.
# Blog: http://bpc.bishopphillips.com/riskthink/
# We are open to suggestions. BPC also runs a world-wide user-group coordinated by our Canadian office.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
055eec03f666403ae89c13bad6c1bcdcf63bcab0
What type of documentation, technical and user is available for BPC RiskManager?
0
320
453
2010-08-04T16:25:55Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
<ol>
<li> A very detailed installation and configuration manual that assumes you know nothing about Windows, SQL Server or RiskManager, covering XP, Vista, W2000, W2003, SQL Server 2000, SQL Server 2005 and SQL Express 2005 setups (approx. 80 pages).
<li> Structured installation manual on the riskwiki.
<li> A growing Bishop Phillips Consulting and client/user maintained riskwiki.
<li> Extensive programmer level documentation for:
<ul>
<li> Report Builder - the end user report building tool (approx. 135 pages) plus a reference manual.
<li> PAXScript and ScripterStudio - the internal scripting languages.
<li> WorkFlow Studio - the internal workflow tool (currently being rewritten and updated). Note that WorkFlow Studio is a beta release at the moment, so the documentation does not yet have a fixed target to document. It should be production grade by January. "Beta" refers to the fact that we have not yet sewn it through all the internal screens (because we are still deciding how best to use it, beyond merely documenting process flows), although all the database hooks are in place and the designer, engine and task manager are fully functional and in a production-ready state.
</ul>
<li> User help library (being upgraded for the new release).
<li> Example databases with extensive internal documentation (e.g. Standard & Poors - with the risk categories explained). All documentation is shipped with the system and will progressively appear on the riskwiki over the next few months. We currently deliver it as a mixture of Windows help, HTML and pdf documents.
</ol>
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
59ffc6a4f5c588e0375a732eb2b4b2943b592672
How does one decide the optimum BPC RiskManager configuration?
0
309
427
2010-08-04T16:26:25Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
BPC RiskManager is shipped with a pre-configured database, set up with the most common options so you can use it right "out of the box". However, this application can really "sing" and you will probably want to do a lot more than just the standard configuration.
Generally (although it is not mandatory), we will conduct a short consultancy to ascertain the most suitable initial configuration, and build a pre-configured database for you to use.
BPC RiskManager is designed to cover a very wide range of risk models, so the configuration settings are not always obvious from the start. Almost everything can be changed from the client, and a few settings are set on the application server management interface - so the initial decisions can be changed later. The install set includes a partially configured database with default settings, so it can be used "out of the box" - but we would generally advise a small amount of configuration and training support in addition to the license.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
514e487f1542ad1ee5f36f92460278bd51f6b74b
Is BPC RiskManager a Client-Server application?
0
310
429
2010-08-04T16:27:24Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
No. BPC RiskManager is an n-tier application server system. Even in the single user configuration it is an application server solution (just with all layers on the one computer).
Refer to [[BPC RiskManager V6.2 Network Architecture]] for more information.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
e50081c44a8e651a24e643fe9950bf7c9e7af52f
What is the difference between the browser plugin and the windows executable RiskManager client?
0
321
455
2010-08-04T16:28:16Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
In a nutshell, very little.
Bishop Phillips Consulting supplies a browser based and non-browser based client for BPC RiskManager. Both solutions are application server solutions (also called 3-tier application server) – not client-server.
There is no difference in functionality between the browser and non-browser version. The solutions differ in how the client component is served to the client computer desktop.
The browser based client is delivered as an IE 5/6/7/8 browser plug-in (like Adobe Reader or Flash Player), while the windows (non-browser) client resides on the user's desktop (like Word or Excel). The main argument for using one over the other is that the browser based client is distributed simply by publishing it to a web server page, while the windows client is distributed by copying it to the client computer. The interface is otherwise the same in both solutions. While the browser client is slightly simpler to distribute and update (just point your browser at the web site, versus copying a single executable to your computer), it disconnects from the server when you close the web page on which it is hosted (just like Adobe Reader), while the non-browser solution stays connected until you close it yourself (or the server-side socket server times out the associated COM object through inactivity).
The user interface is identical across both the browser and non-browser versions. We generally release updates to the non-browser client first, and in coming releases the non-browser client will probably behave more like Outlook (existing as an icon in the system tray) when not actively in use. We tend to use the non-browser version ourselves rather than the browser version, but that really doesn't prove anything either way.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
a3821cee6e9be7faf2b87fae8c7429ef58896ffc
Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?
0
322
457
2010-08-04T16:29:34Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
BPC RiskManager V6.x is currently available only as an MS SQL 2000/2005/2008 server and MSDE 2000 / MS 2005 Express / MS 2008 Express solution.
BPC RiskManager Express is available as both an Oracle and SQL Server solution covering Oracle 8 through 10g and MS SQL 2000/2005/2008 server and MSDE 2000 / MS 2005 Express / MS 2008 Express. RiskManager Express has less functionality than RiskManager V6.x. The application in either case is developed on a database independent platform using an SQL Server test environment, and then ported to Oracle where Oracle versions are available. With respect to RiskManager Express, there is no difference in the stability of the application attributable to the database engine.
You are encouraged to adopt the MS SQL Server database for RiskManager V6 (the version that otherwise suits your requirements). In the event that an Oracle RiskManager V6 release is essential for you, the database independence layer utilized in BPC RiskManager Express was carried through into V6.
In fact, internally, V6 still goes through the database check steps on start-up that are used in BPC RiskManager Express to determine the database on which it is running, and applies the changes to the SQL queries that would otherwise be required to run on Oracle. Therefore, we could produce an Oracle 10g+ release with approximately 1 month's notice. The original intention when V6 was built was to release both Oracle and SQL Server versions – which is why the database independence layer was preserved in V6 – but every V6 client to date has chosen to adopt the MS SQL Server version, so we have not been able to justify the development effort required for an Oracle solution. RiskManager Express predates RiskManager V6 and does have a predominantly Oracle and Interbase user base.
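As a rough illustration of what such a syntax-adjustment step can look like, here is a hypothetical Python sketch (the function name and the single-join case it handles are invented for the example; this is not the actual RiskManager preprocessing layer). It mechanically rewrites an ANSI inner join into the comma-join form accepted by older Oracle releases:

```python
import re

def rewrite_inner_join(sql: str) -> str:
    """Rewrite a single 'FROM a INNER JOIN b ON cond' into comma-join form.

    Illustrative only: handles one inner join with optional table aliases;
    a real standardisation layer would parse the statement properly.
    """
    pattern = re.compile(
        r"FROM\s+(\w+(?:\s+\w+)?)\s+INNER\s+JOIN\s+(\w+(?:\s+\w+)?)\s+ON\s+(.+)$",
        re.IGNORECASE,
    )
    # Fold the ON condition into a WHERE clause, which every engine accepts.
    return pattern.sub(
        lambda m: f"FROM {m.group(1)}, {m.group(2)} WHERE {m.group(3)}", sql
    )

print(rewrite_inner_join(
    "SELECT r.id FROM risks r INNER JOIN causes c ON r.id = c.risk_id"
))
# → SELECT r.id FROM risks r, causes c WHERE r.id = c.risk_id
```

Because statements that bypass such a layer fail at the syntax level rather than silently misbehave, missed call sites are easy to find during a port.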
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
9b768242e396b46d1cb7fcbad8eea12934dea426
Database support: Which database choice will give us the best level of support?
0
323
459
2010-08-04T17:16:42Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
===Does the Choice of Database impact the level of support?===
No. All BPC RiskManager V6 systems use SQL Server 2000, 2005 or 2008, and it is reasonable to expect that all future releases of MS SQL Server will also be supported during the product version release life of V6. Most remaining RiskManager Express clients (those that have not upgraded to V6) use Oracle, but Express also supports all current versions of MS SQL Server. BPC RiskManager V6 has considerably more data elements than Express and is not backwards compatible with BPC RiskManager Express (RiskMan) databases.
All V6 customers should choose the latest version of the SQL Server database engine that has been released for at least 6 months. V5 customers may also choose Oracle 10 and 11 series databases, but are strongly encouraged to choose the MS SQL Server equivalents if available in their organisation, as this is our primary development database platform.
Selection between the SQL Standard/Enterprise and SQL Express alternatives is entirely at your discretion and will be determined by your data volume and user connection needs. The selection of RiskManager V6 or RiskManager Express V5 does not impact the version of SQL Server you install.
We maintain concurrent development tracks for both V6 and Express V5 systems.
===Does concurrent development of V6 and V5 (with its Oracle user base) impact support with respect to database version?===
This is a good question. At this point no – because RM 6 clients are all SQL Server and Express Clients are virtually all Oracle (and include some of our oldest, most loyal clients).
Updates for BPC RiskManager Express V5 are released on Oracle and SQL Server concurrently.
Going forward (assuming you request, or we decide, to release an Oracle version for V6), the honest answer is yes and no. I expect we will always develop on SQL Server for V6 and future versions (although this depends on which system has the larger client base), and release beta versions on SQL Server. We will then release the Oracle port of the same solution (this may be only a week apart – but the order will most likely be SQL Server first).
Once a BPC RiskManager V6 Oracle version is in production there will be no difference in support, appearance or capabilities of RiskManager V6 on Oracle versus SQL Server. The current release of the application server can talk simultaneously with databases from multiple database servers all running different models and versions of database engines as long as it has an appropriate available ADO Driver library.
===Does (or would) the choice of database impact the system capabilities?===
No - aside from the obvious fact that Oracle is not available as a current choice for V6 (but is for V5 Express). In the event that additional brands of database engines were adopted for V6, the user and administration experience would be identical across all databases.
The client and business logic are separated from the database layer using a three stage database virtualization layer in both V6 and Express:
# The lowest is MS ADO, which provides a common database interface layer in terms of database connectivity.
# Classic areas of incompatibility across databases lie in the use of identity (auto-incrementing) fields, which are supported in SQL Server but not in Oracle – we do not use them, but instead reproduce that functionality with database-independent triggers and our own auto-increment field table – and in the syntax of table joins, which we handle through a preprocessing layer that automatically adjusts join syntax depending on the database. Even though V6 currently only deals with SQL Server, it still applies this step to join syntax.
# Lastly, data manipulation and multi-user data integrity reconciliation are handled in the application layer, and records are reconciled at the field level (rather than at the row level in the database), so if two users update the same record but different fields, the reconciliation layer is generally smart enough to work out the correct combined update.
These methods were all developed for RM version 2, which had a mixed Oracle and SQL Server client base. Hence the brand of database has very little impact on the operation of the system, or on the skills required of the support team. Database-specific issues are almost never the cause of support problems.
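The field-level reconciliation described in point 3 can be sketched as follows. This is an illustrative Python model only; the real RiskManager layer is internal and its interfaces are not published, so the function name and record shape here are invented:

```python
def reconcile(original: dict, update_a: dict, update_b: dict) -> dict:
    """Merge two concurrent edits of the same record against the original.

    A field changed by only one user takes that user's value; a field
    changed by both users to different values is a genuine conflict and
    is flagged for the caller rather than silently overwritten.
    """
    merged, conflicts = dict(original), []
    for field in original:
        a_changed = update_a[field] != original[field]
        b_changed = update_b[field] != original[field]
        if a_changed and b_changed and update_a[field] != update_b[field]:
            conflicts.append(field)        # both edited the same field
        elif a_changed:
            merged[field] = update_a[field]
        elif b_changed:
            merged[field] = update_b[field]
    return {"record": merged, "conflicts": conflicts}

row = {"title": "Fire risk", "likelihood": 3, "owner": "ops"}
a = dict(row, likelihood=4)    # user A raises the likelihood
b = dict(row, owner="safety")  # user B reassigns the owner
result = reconcile(row, a, b)
print(result)  # both edits survive; no conflicts, since different fields changed
```

Row-level locking would have forced one of these two users to lose their edit; field-level merging lets both succeed.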
===If cross database support is so easy now why do you anticipate 2 months to release an Oracle version?===
Essentially for several reasons:
* The use of blobs is considerably greater in RM6 than in Express, including a few places where multiple blob fields are present in one table. Blobs are traditionally handled differently across database brands and require specific attention to ensure correct operation.
* The use of dynamically created SQL statements is greater in RM6, and the probability that some of those statements are not passed through the syntax standardisation layer is correspondingly higher. As this layer adjusts between join types, statements that should have been passed through but were not will simply fail at the syntax level rather than work incorrectly; in any case, all database interaction is held in only a few code modules, so it is a reasonably mechanical process to check and fix.
* There are potentially some SQL constructs used dynamically that must be expressed differently in Oracle.
* There are many more stored procedures, and some complex structures like NSTree generators and recursive association tree walkers may have to be re-conceived, or there are built-in capabilities in current Oracle systems that should be used instead.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
01314612ba40e01ec405714e0807c89ffbe611e0
Security: What is the most secure architecture for BPC RiskManager?
0
311
431
2010-08-04T17:17:32Z
Bishopj
1
wikitext
text/x-wiki
=Access Rights=
With respect to access security we support trusted login, AD, LDAP, NT Groups, and internally managed methods for user access rights at the application server layer.
=Database Access=
Only the application server accesses the database. There is no direct user-to-database connection ever established (even in report generation); consequently, only one login account is required between the database layer and the application server, and there is no need (or desirability) to establish access rights for a user at the database level.
=Browser Plugin & Network Communications=
==RiskManager Browser and Non-Browser Clients==
The browser based and non-browser versions of the RiskManager client face the same issues and use the same models for network communications. The browser merely hosts a plug-in component (think Flash Player or Adobe PDF Reader) and is essentially used for distribution of that component. Once the plug-in starts, it establishes a direct connection to the application server on a different port from that used for normal web communications. The web server that delivers the base page can therefore use any security model you prefer, and the application server(s) can be on any physical server you desire – not necessarily the same machine as the web server.
The data stream is not a linear ASCII data stream like a web page, but a stream of binary delta (change) packets, which are essentially unusable out of the context of their stream and of the non-delta packets that are not re-transmitted in any case. On a private network this would generally be sufficient in all but the most extreme scenarios.
The stream itself can also be separately encrypted. Our preferred model where additional security is required is to encrypt the entire channel through a Virtual Private Network (VPN) tunnel (which can be defined to operate on a single port if desired), because VPN tunnels are generally more secure and faster than data-level encryption, as they can be imposed at the hardware level. The RiskManager access model is STATEFUL, so security models that allow for preservation of state across accesses are appropriate (hence VPN tunnels are a really good idea).
With respect to VPN solutions, either a fully fledged VPN (ideally hardware implemented for speed) should be used and the entire traffic between client and server tunnelled through it, or HTTPS (SSL) can be used directly from the client to a dedicated (supplied) listener on the server; in the latter case, however, you will have to install an SSL certificate on the IIS server running on the application server and use the HTTPSrvr dll instead of (or in addition to) the SocketServer.
Built into the RiskManager client / application server architecture are three models for communications:
* Proprietary port using raw TCP/IP (This is the default method)
* HTTP
* HTTPS (SSL)
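The three models above differ only in transport. As a hedged illustration (the host name, port numbers and use of Python's standard `socket`/`ssl` modules here are assumptions made for the sketch, not actual RiskManager defaults), the client-side difference between the raw-TCP and SSL models looks roughly like this:

```python
import socket
import ssl

APP_SERVER = "riskmanager.example.com"   # placeholder host, not a real default

def connect_raw_tcp(port: int = 4000) -> socket.socket:
    """Default model: a plain TCP connection on a proprietary port.

    The binary delta packets travel unencrypted, which is normally
    acceptable on a private network (or inside a VPN tunnel).
    """
    return socket.create_connection((APP_SERVER, port), timeout=10)

def connect_ssl(port: int = 443) -> ssl.SSLSocket:
    """HTTPS (SSL) model: the same stream wrapped in TLS.

    The server-side listener must present a certificate (on RiskManager,
    installed on the IIS server alongside the HTTPSrvr dll).
    """
    ctx = ssl.create_default_context()   # verifies the server certificate
    raw = socket.create_connection((APP_SERVER, port), timeout=10)
    return ctx.wrap_socket(raw, server_hostname=APP_SERVER)
```

The VPN model needs no client-side code change at all, which is part of why it is the preferred option: the tunnel encrypts the default raw-TCP stream transparently.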
==SurveyManager==
Where the SurveyManager module is used as part of the risk management process, it uses conventional pure HTML web pages and will happily utilize secure socket layer or VPN tunnels as desired and as appropriate to the location of survey page recipients.
The module is hosted on an IIS web server (any version), and any security model appropriate to web site technology is appropriate for use in this context. The server side of the SurveyManager system is 100% STATELESS – each page transaction is independent of any preceding or following transaction, so the security model adopted does not even have to preserve session context across succeeding page submissions. That said, common sense dictates that you would at least want the browser to be able to negotiate a login session with the web server across succeeding pages, if only to spare the user logging in with each page submitted.
The survey manager is used to deliver and collect compliance information and a variety of other data (such as risk or cause property information). Certificates, secure socket layer, windows authentication, LDAP and other access security models, as well as no security with anonymous http access, are all appropriate and acceptable. In terms of access (rather than data confidentiality), in addition to any access model adopted to log in to the web server, the survey engine uses a random key associated with the user ID to confirm the right of access to the specific web page being served. The user does not need to know this key and it is delivered with the page invitation – so merely knowing the user ID does not grant access to a survey. Login based security for respondents (to the extent this is desired) is expected to be handled by the web server / operating system, however there are also dedicated question types that will deliver a login page as part of a survey if that is preferred. In addition, there are a variety of special field encryption mechanisms that can be turned on at the user, page, survey, survey instance, organisation, or database level.
Survey manager web pages are never stored as pages, but are dynamically generated on the fly based on the user, the organization, the survey, a variety of context- and user-specific filters and keys, responses to previous questions or other surveys, internally stored rules, and a variety of other factors. All of this is stored in the SM/RM database, and only graphical and page layout elements are actually stored on the web server itself (and even these can be stored in the database) – so a SurveyManager website can consist of just the SurveyManager dll and a single JavaScript library if necessary. The database accessed by the SurveyManager library is determined by the library name (used as a key in the server registry), so again the user never establishes a direct page-level connection to the underlying database.
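The per-user random key check described above can be illustrated with a short sketch. This is hypothetical Python, not the published SurveyManager scheme, and the function names are invented: the invitation link carries the key, the server stores it against the user ID, and access requires both to match.

```python
import hmac
import secrets

def issue_invitation_key() -> str:
    """Generate the random key stored against a user ID for one survey.

    The key travels only inside the page invitation, so knowing a
    user ID alone is not enough to open that user's survey page.
    """
    return secrets.token_urlsafe(16)

def access_allowed(stored_key: str, presented_key: str) -> bool:
    """Compare the presented key against the stored one in constant time,
    so the check does not leak information through timing differences."""
    return hmac.compare_digest(stored_key, presented_key)

key = issue_invitation_key()
assert access_allowed(key, key)            # the invited user gets in
assert not access_allowed(key, "guessed")  # a guessed key does not
```

Any login-based respondent security would sit on top of this check, handled by the web server or operating system as the text notes.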
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
63aa5a1275f25d3d19275dbcfca6848d6ef4912b
What is the best client version - the browser or non browser Risk Manager client?
0
324
461
2010-08-04T17:18:31Z
Bishopj
1
wikitext
text/x-wiki
==Answer==
This is a tricky issue. Generally we would recommend the windows non-browser version, unless you have a large number of users and want to be able to distribute the client component in an ‘on-demand’ manner.
Why? Simply because there is one less 'moving part', and resizing of the application can be applied to the base (main) window. An alternative argument is that where there are multiple databases, the browser client offers an advantage, because you can easily list the various connections on the hosting web page and specifically secure access to that page as part of your access model. Further, the web page offers a simple "single point of publication" distribution system which instantly delivers the latest version to all users.
Like we said, there are really good arguments for both versions. Most larger clients use both, with the majority of users on the web browser version. Smaller clients tend to use the windows version.
On the face of it, the browser plug-in would be best in situations involving a large, diverse or geographically spread user base, as it looks after its own distribution and updating on client computers. Note, however, the comments following:
The plug-in component is a self-registering, Verisign-signed ActiveX control which does not write back to the web page hosting it, nor does it respond to scripted instructions sent externally to the plug-in (i.e. it is a self-contained black box). The default version contains a separate MIDAS library which it installs and writes to the client computer as part of the registration process, but we can replace this with an internal version of the library if needed. The non-browser version uses an internal version of the library, and is therefore installed merely by copying it to any place on the client computer to which the user has write access. Both the browser and non-browser clients write user preference information to the registry and work under Vista with UAC enabled. Some lockdown configurations of client computers can (obviously) prevent the plug-in from registering, and in those cases the non-browser client is the better choice as it does not need to register itself.
ActiveX plug-ins are not supported in the latest release of Firefox (although earlier releases are fine). Clients using the latest release of Firefox are advised to use the non-browser client.
In RiskManager version 2.5 and above, both the browser and non-browser clients use an internal COM wrapper for IE (any version 5+ available on the client computer) to allow embedded documents, procedure manuals and other links such as team web sites to be displayed integrated with the risk information on certain tabs. The capability is not critical to the operation of the application, but is significant for some forms of user experience. In the event that IE has been stripped from the client computer, this capability will not work. If this proves to be a critical issue, we can produce versions that either use Firefox in that role or use our own HTML display engine, though in the latter case JavaScript support for the displayed pages will be lost.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
b437550a2aeb1215294c4e4a670833dc4889c19c
BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7
0
312
433
2010-08-04T17:19:13Z
Bishopj
1
/* BackLinks */
wikitext
text/x-wiki
=Browser Setup For ActiveX Plugins using IE 7=
<ol>
<li> From a client computer (or from the application server computer if no client computer is easily available) open Internet Explorer.
<li> Choose “Tools” from the menu bar and “Internet Options” from the menu that appears.
<li> Select the “Security” tab.
<br>
<br>
[[Image:RMC_IESetup2.png]]
<br>
<br>
<li> Select the zone in which your risk manager application server resides relative to your client computer on the “Select a zone to view or change settings” tool bar. The diagram shows "Intranet Zone", which is the normal situation, but depending on your intended server destination you might need to choose a different zone, such as "Internet Zone".
<li> Select “Custom Level”
<li> On the “Security Settings” window, scroll through the settings list until you find the “Download signed ActiveX Controls” setting. Enable the “Prompt” option (which is Microsoft’s recommended setting). Our ActiveX controls are signed with current Verisign certificates. Administrators can achieve a higher level of security by also flagging controls from Bishop Phillips Consulting, or from the RiskManager application server web site, as trusted – but the recommended setting should be enough.
<br>
<br>
[[Image:RMC_IESetup1.png]]
<br>
<br>
<li> We also set automatic prompting for ActiveX controls to “Enable”, but this may not be required in all scenarios.
<li> Scroll a little further down the list and enable the running of ActiveX plugins as follows:
<br>
<br>
[[Image:RMC_IESetup3.png]]
<br>
<br>
<li> Now select OK and close the security settings window, and select OK again and close the Internet Options window. You should now be back at your browser window.
</ol>
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
63750efadfd453fe53eee189342a5327b47f20cc
BPC RiskManager Server - After installing in production or adding an application server
0
325
463
2010-08-04T17:19:57Z
Bishopj
1
/* BackLinks */
wikitext
text/x-wiki
=Background=
You have an existing installation and configured database and you have just either added an extra application server to an existing database or ported from test/dev into production. You ran the auto-installer successfully and tried to connect to the new application server with a client and Risk Manager has rejected your login user ID - but you know your user name and password is correct.
You have seen a message that looks like this:
[[Image:RMLoginFail.jpg]]
=Answer=
That is essentially what it looks like. The application is OK: you are talking successfully to the server, and the server has successfully tested the login user ID and password and rejected it as wrong. So there is nothing wrong with the install, per se.
Now the questions are: how did you move it to production? What is your chosen authentication method? Have you set the authentication method on the server?
Assuming you are using the auto-installer, and the most common security option - where the application manages the security itself:
On running the auto-installer on a new server, the installer will install the server in single user mode (General tab on the application server), with the trusted sign-on shared login role set to Administrator (RM Security tab on the application server). This is to facilitate creation of a new account on first-time installation. In this case you already have the accounts in the database, so you need to switch the system into managing its own security.
So you need to:
<ol>
<li> Login as administrator to the application server computer.
<li> Start the application server from the start menu on the server (“BPC RiskManager DataServer V6”)
<li> Double click on the green disk in the system tray. The Riskmanager DataServer management console will open.
<li> Click on the General tab.
<li> Change the “Risk Manager Edition” to “Web Edition”.
<br>[[Image:RMDS GP2.png]]
<li> Save settings.
<li> Click on the “RM Security” tab
<br>[[Image:RMDS GP10.png]]
<li> Switch the login role to “Assign access in application (Login Not trusted)”
<li> Switch the “Option to Assign Secure Identification” to “Use client user name only”
<li> Save settings.
<li> Click on “End Process” (bottom of the window)
<li> Attempt to login again.
</ol>
Obviously, in steps 7 - 10 you set the security model to whatever model you are actually using. For instructions on the various models and settings go to:
* [[Security Configuration - Update Installation and Reset]]
Lastly, did you follow the recommended steps for migrating from test to production?
* [[Steps For Migrating RiskManager V6.x from Test To Production]]
If you still have problems, send an email to the support email address, providing a phone number we can call you on and when you would like to be called. (Oh, and make sure your maintenance/subscription fee is current :) )
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
88696fb078d5bdca852feb61666fd22e8bfdf6b0
Steps For Migrating RiskManager V6.x from Test To Production
0
271
281
2010-08-04T17:21:11Z
Bishopj
1
wikitext
text/x-wiki
==Introduction==
There is a very detailed installation process on [[RM625ENT Installation Instructions]]
However, that page assumes an essentially manual installation process, starting from a raw-iron server, and includes installation of the required OS components. If you use the automatic installer (recommended), the process is much simpler. Production generally differs from test or dev environments, however, as the components may be more widely distributed and you are generally starting with an at least partially configured server (unless you are dedicating a production application server instance to RiskManager).
Different sites do different things for production: some reinstall completely, others duplicate test into production; some do everything manually for production while using the automated system for dev, and so on.
We recommend a reinstallation, partly because it is the least error-prone, and possibly faster.
==If You Have An Existing BPC RiskManager Production Installation==
If you have an existing RM installation in production, you can simply copy the changed files onto the server (replacing the existing files of the same name), start the RiskManager DataServer once, then close it down, and you are done - the auto-installer is not actually necessary in this case. Alternatively, you can run the uninstaller in production to remove the previous installation, and then use the new installer to reinstall. You will NOT lose any of your configuration settings, so it is completely safe to do this. That will essentially make your existing system a raw machine EXCEPT that the connection settings will already be in place.
If this is your situation, the steps below are still correct BUT you should NOT let the installer create the database(s) for you - as you already have the connections present. Just say no to this question when it comes up during installation.
==Performing the Migration To Production==
Read the preceding section if your production server has a pre-existing RiskManager V6 installation. If you are migrating from BPC RiskManager Express or RiskMan, you DO NOT NEED TO UNINSTALL; BPC RiskManager V6.x will ignore the Express settings and installation.
Assuming we are starting with a W2003+ server that does not have a pre-existing RM installation, and that your SQL Server is on a separate computer:
===MAKE DECISIONS BEFORE INSTALLING:===
<ol>
<li> If using BPC support during installation, email us to arrange a time for our call to assist you with the install.
<li> Decide whether you are going to enable SurveyManager as part of the installation, or later. (Ask the business)
<li> Decide how many databases will be set up in production (Can be increased later if desired, but easiest if known prior to installation as the installer does all the work for you).
<li> If you want to make available in production an existing database that was set up in dev/test, and you will NOT be using the same physical database as that set up in dev/test, decide whether you will be using the RM installer to restore a backup of the established database into production, or whether you will restore the backup separately (after installation completes). You should consider:
* If you have already restored the database into production, you probably do not want the installer to attempt to create it
* If the target database is either the "DEFAULT" connection (so named) or a uniquely named database connection of the application server, the database does not already exist in production, and the database server is essentially a single-server solution with data and log files on the same server, then either decision is appropriate. It is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database server is a complex configuration with log files and data files separated across multiple NAS devices/servers etc., the installer will probably not be able to determine the configuration correctly, as the information is not always available to it in the remote registry (although it will attempt to do it correctly), so restoring from a backup on a remote machine may not succeed. You are probably best to do this manually prior to installing the server; or, if the database does not yet exist on the target server, the simplest approach is to let the installer create an empty database of the same name and then restore your backup over the top of it after installation completes. If you choose the former approach, you will need to do some extra steps (instructions provided during installation below) so the client test will validate your install. If you do not create any databases during installation (or have a pre-existing database to which to connect) you will not be able to validate the connection during installation. We strongly recommend that you at least let the installer create its default database and test the connection to that. You can always discard it later.
<li> Decide whether you will be using network compression comms or the default raw comms (see the instructions below for the implications of this decision – raw is simplest) and if both, which will be the default. (Can be enabled later if desired)
<li> Decide whether you will be enabling HTTP/HTTPS comms access as well. (Can be enabled later if desired)
<li> Decide whether you will be using the desktop client and/or the browser plugin. (We recommend the desktop client – both have the same functionality, but the plugin behaviour varies a little across different Windows OSs and IE versions due to MS security changes, so if you have mixed desktop OSs, not every desktop will behave exactly the same. If you want to know the implications or need this explained further, ask us or look on the riskwiki.)
<li> Verify the installation site (eg the remote desktop on which the installer will be working) has phone access (preferably hands free), and that you know the telephone number for the phone, and, ideally, outbound internet (IE/Firefox/etc) access so you can look at the riskwiki if needed.
<li> You should do steps 1 – 9 below prior to the BPC support call.
</ol>
===PREPARE THE SITE BEFORE INSTALLING:===
<ol>
<li> Verify the server has the following infrastructure on it:
* Functioning network connection to the rest of the network, with port 211 (and ideally port 212 as well) and the SQL Server TCP ports (eg 1433) available
* Functioning installation of IIS 6+
<li> Verify the server either has on it or available to it:
* Functioning SQL Server (any version) configured in Mixed mode authentication or SQL Authentication mode
* Functioning SMTP server that will accept relays from this machine (this can always be configured later)
<li> Verify that the person performing the installation knows:
* Server local system administrator user ID / PWD
* SQL Server SA user ID / PWD (if SA is not available you will need to speak to us again)
* The name of the SQL Server and the instance (if not using the default instance)
* The Administrator account user ID (usually Administrator) and PWD for the RiskManagement system. This is database specific, and more important when restoring than installing. Not knowing it does not stop you installing, but may prevent you from connecting via a client when the test is run at the end of the installation. Otherwise, any RM Administrator account is fine to use. It is auto-created on first connection, so it can often be the user name of the person who does the installation. Ideally, settle on a common user name, use it across all databases, and remember the password. Access by the root administrator account can be blocked by the RM system administrator after installation of a fresh database, so for restored databases it may be that this account’s access is blocked anyway.
* The HTTP addressable name of the application server as it would be typed into the browser address bar by a remote LAN client (eg a human operating from her office)
* The fully qualified domain name of the application server as it would be entered in the Windows network browser of a remote user if they were able to browse to a folder on the application server (eg the human again)
(NOTE: Part of the installation process is to create special-purpose limited-rights SQL accounts; the installer either creates these for you or expects you to know the passwords. We are assuming they do not yet exist on the target SQL server. You will need to provide a password during the installation for the “riskmanuser” SQL Server account. The installer will create this account if it does not already exist, so you need to have decided what the password will be. We recommend using the same password as that used for dev. This is a limited-rights ID. The other accounts will be set to use the same password. They can be changed manually later if desired.)
<li> If transferring the dev database into production:
* Prepare a backup of the dev database.
* Ensure the version of SQL Server in production is the same as, or higher than, that in dev from where the backup comes (eg you can NOT restore a SQL 2008 backup into a SQL 2005 server, but you can do the reverse)
<li> Confirm with the RM administrator how many databases they want in production. We recommend a minimum of two databases: the default auto-named database, and another spare/empty database for future use. The auto-named database will be called RiskManDB625 and its connection will be named “DEFAULT”; the other database can have whatever connection name you choose. The connection name (and in fact the database name) can be changed later. The connection name is the name the user sees as the database name. The connection DEFAULT does not need to be entered at all by the user – so this is ideally the main database in use.
<li> Copy the RM Installer to a directory on the application server that will be accessible to the person performing the installation.
<li> Copy the backup file to a directory on the SQL server that the SQL Server will be able to access (read from) during a restore. We recommend that this directory be the default backup directory for the targeted instance of the SQL Server, as that is where it will read from naturally (and if you use the installer to do the restore, the SQL Server must be able to read the file – so it needs to be readable by the SQL Server under the SA account).
<li> Verify that the place from which you will be connecting to the application server (ie the remote client) has a telephone, preferably able to work in hands-free mode (so we can talk you through the process by phone).
<li> Locate your BPC RiskManager registration code so you can enter it when asked. You will not need this until the client connects at the end of the installation process. If this is a new server and new database you will have up to 60 days to enter it.
<li> If you opted during the decision stage above to back up an existing RM database from Test and restore it into Production, you should do that now (or schedule it now to be done immediately before the installation commences). Make sure you know the database name on the server.
<li> Send BPC an email or phone BPC to arrange a time for support to contact you – preferably as long BEFORE you commence installing as possible. We will confirm the booking and contact you at that time. If you just wish us to be available should you need it during the installation, we will make sure we are able to take your call at that time, and email you a direct number to use should you need it.
</ol>
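The network prerequisites above (ports 211 and 212 for RM comms, plus the SQL Server TCP port) can be smoke-tested before the support call. Below is a minimal, illustrative preflight sketch; the host names are placeholders, not real RM settings, so substitute your own application and database server names.

```python
# Pre-installation connectivity check (illustrative sketch, not a BPC tool).
# The host names below are ASSUMED placeholders - replace them with yours.
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts and DNS failures
        return False

checks = {
    "RM raw comms (211)":        ("appserver.example.local", 211),
    "RM compressed comms (212)": ("appserver.example.local", 212),
    "SQL Server (1433)":         ("sqlserver.example.local", 1433),
}

for label, (host, port) in checks.items():
    status = "OK" if port_open(host, port) else "BLOCKED/CLOSED"
    print(f"{label}: {status}")
```

Run this from the machine a typical client would use; a "BLOCKED/CLOSED" result usually means a firewall rule or the service itself still needs attention.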
===INSTALLING:===
<ol>
<li> If using a remote client to connect to the application server and run the installation process (eg mstsc), verify that the remote client is set to operate at 96 DPI, not 120 DPI (there is a bug in the installer display routine that hides some buttons at the 120 DPI resolution). If connecting via mstsc, enter mstsc /console as the connection command in Start/Run from the remote computer so that you are operating in console mode. This is important so that you can see the system tray icons.
<li> (If using BPC support, await the call first). Run the installer in “Complete Mode”, read the onscreen instructions and answer all the questions.
* Always create default database, during initial installation
* If restoring a backed-up dev database, the installer can do this AFTER it creates the databases, or you can do it manually after the entire process. For some complex SQL setups the manual approach may be required: while the installer attempts to locate the correct places for database restoration from the SQL Server registry, this is not 100% reliable due to the various ways this information is stored in the registry across different versions and instances of SQL Server. Let the installer create the blank database for you, so that all the connections are made, and then simply restore your backed-up database over the default database after the installation. If the SQL Server is on the application server itself, there is a much higher probability of complete success with installer-based restoration.
<li> The installer will auto-register the components and start the BPC RiskManager DataServer console. If you are NOT connecting to an existing database (ie you let the installer create new databases), you can go on to the next step – just select "End Process" on the console window. Otherwise, check the dot points below:
* If you want to connect to an existing database that was NOT created or restored during installation (ie a database that exists but is not yet known by the application server on THIS computer) AND you already have the database(s) set up on the production database server, you will need to configure the connections when the application server console window appears (ie NOW): [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<li> Next, the installer will start the client locally for a test connection at the end to verify access to the default (or other) database. If you can connect and see the main screen after login you have successfully installed.
NOTE: The installer will set the server up in single-user edition and auto-administration access mode. This does not prevent remote access but will (usually) need to be changed to your correct access settings for production enterprise deployment. See the section "After Installation" below.
<li> Switch the server into web edition - click [[BPC RiskManager - General Configuration|on this link for instructions]].
<li> Set up client access. Choose one (or all) of the following:
* Copy the desktop client installer (there are two to choose from, depending on whether you prefer single-exe or MSI installers) from the /program files/bishopphillips/RiskManagerVxxx directory to a network share that will be accessible to users
* Copy the already installed client from the /program files/bishopphillips/RiskManagerVxxx/win32client directory to a separate computer/folder and make the folder sharable, if you want people to simply run the client across the network from a remote folder. The client does not actually need to be installed on a desktop to work, but installing it provides shortcuts/menus and enables the use of the network compression/encryption library in V6.2.5.x.
* Install the client into a citrix (or other remote desktop) image.
* Distribute the browser plugin ActiveX client to the Risk Manager web site.
<li> Go to a typical remote LAN computer, attempt to install/use the client set up in the previous step to access the server using the same account used previously, and verify remote connectivity to the application server.
<li> If intending to use streaming network compression/encryption, follow the instructions in the riskwiki for enabling this. Remember you will need to advise all users that the access settings differ from the client defaults (a box has to be ticked and possibly a port changed in the login window). If using streaming network comms, we recommend two ports be enabled – one for raw comms and one for compressed comms (hence the suggestion at the start that you clear 211 and 212 for RM comms). In reality, RM does not care what port is used. By default it expects communications on port 211, but you can set it to use any combination of ports you like. We advise sticking with the recommended ports (obviously). If using streaming compression, for simplicity you should probably enable that on port 211 – so clients only need to tick a box to enable it – and set the raw channel to 212, as the raw channel is only for troubleshooting and backup connection.
Note: enabling compression/encryption will EXCLUDE the option of copying clients as a means of installation, as the compression library is currently a separate lib in V625.x – that will change in a future release.
</ol>
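If you opted to restore the Test backup manually rather than through the installer, the restore boils down to one T-SQL statement run against the production instance. The sketch below just composes that statement; RiskManDB625 is the default database name from the steps above, but every file path and logical file name here is an assumption – confirm yours with RESTORE FILELISTONLY first.

```python
# Sketch of the manual restore step: build the T-SQL that overwrites the
# installer-created default database (RiskManDB625, per the steps above)
# with your Test backup. The paths and logical file names ('<db>_Data',
# '<db>_Log') are ASSUMPTIONS - verify them with RESTORE FILELISTONLY.
def build_restore_sql(db_name, backup_file, data_path, log_path):
    """Compose a RESTORE DATABASE statement that replaces an existing database."""
    return (
        f"RESTORE DATABASE [{db_name}]\n"
        f"FROM DISK = N'{backup_file}'\n"
        f"WITH REPLACE,\n"
        f"  MOVE N'{db_name}_Data' TO N'{data_path}\\{db_name}.mdf',\n"
        f"  MOVE N'{db_name}_Log'  TO N'{log_path}\\{db_name}_log.ldf';"
    )

sql = build_restore_sql(
    "RiskManDB625",
    r"C:\Program Files\Microsoft SQL Server\MSSQL\Backup\RiskManDB625_test.bak",
    r"D:\SQLData",
    r"E:\SQLLogs",
)
print(sql)
```

Run the generated statement in a SQL query tool under the SA account; WITH REPLACE is what lets it restore over the empty database the installer created.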
=After Installation=
Most of these actions require you to use the RiskManager application server configuration console. So first, on the application server computer, locate "BPC RiskManager DataServer" in the Start menu and start it. When started, the application server appears as an icon in the Windows system tray, typically located in the lower right-hand corner of your screen. Double-click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with the program; the configuration console will open. Then:
<ol>
<li> Now proceed to the instructions for completing the security/access set up:
<br>
<br>
* [[Security Configuration - Update Installation and Reset]]
<br>
<br>
<li> If you have additional databases to connect to RiskManager that you did not connect during installation, you should do that now: [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<br>
<br>
<li> Depending on which other components you are using (network streaming compression/encryption, email messaging, SurveyManager, browser plugin client, etc.) there may be a few manual steps to complete the installation using the RM Configuration wizard after the installation finishes and tests have been completed. In any case, you should generally access the IIS server after installation and enable “Unknown ISAPI extensions” for SurveyManager operation, even if SurveyManager is not being used yet, as it will save you time later when RM decides to create a survey. The explanation of how to do this is in the riskwiki instructions below. Now do each of these steps in order (note all are optional – the system will work without any of these configurations, but some things like email will not be available without them):
<br>
<br>
# [[BPC RiskManager - Send Mail Options Configuration]]
# [[BPC RiskManager - Mail Server Connection Properties]]
# [[BPC RiskManager - Logging Configuration (OPTIONAL)]]
# [[BPC RiskManager - Create the Root Administrator]]
# [[BPC RiskManager - Distribution of Client Components]] (Browser plugin ActiveX)
# [[BPC RiskManager - Configure Risk Mail Manager]]
<br>
<br>
<li> If you are using the survey engine, the installer will have set it up on the application server, but there are a couple of things you will need to do. In particular, you will have to manually tell IIS to allow unknown "ISAPI extensions", and if you have connected to a pre-existing database (rather than one created during the installation process) you will need to configure it. Also, if your SurveyManager web server will be different from your application server computer (eg a web farm), you will need to do the config step for each database in the RiskManager environment. (There is a special tab to help with the multi-database situation efficiently.)
<br>
<br>
* [[BPC RiskManager - Install The SurveyManager]]
</ol>
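Once the mail options and mail server connection steps above are done, it is worth smoke-testing that the SMTP server really does accept relays from the application server. A minimal sketch follows; the host name and addresses are assumptions for illustration, not actual RM configuration values.

```python
# Smoke test for the SMTP relay configured in the mail steps above.
# Illustrative only - the relay host and addresses are ASSUMED placeholders.
import smtplib
from email.message import EmailMessage

def build_test_message(sender, recipient):
    """Construct a minimal test email for verifying relay acceptance."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "RiskManager SMTP relay test"
    msg.set_content("If you received this, the relay accepts mail from the app server.")
    return msg

msg = build_test_message("riskmanager@example.local", "admin@example.local")

# Uncomment and point at your relay to actually send:
# with smtplib.SMTP("mailserver.example.local", 25, timeout=10) as s:
#     s.send_message(msg)
print(msg["Subject"])
```

If the send fails with a relay-denied error, the SMTP server needs to be configured to accept relays from the application server's IP, as noted in the site preparation checklist.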
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
ba321d524a897f9f6bc8c831a9035e1da24cabf6
285
281
2010-08-04T17:21:11Z
Bishopj
1
wikitext
text/x-wiki
==Introduction==
There is a very detailed installation process on [[RM625ENT Installation Instructions]]
However, this assumes an essentially manual installation process, essentially starting from a raw iron server, and includes installation of the OS components required. If you use the automatic installer (recommended) the process is much simpler. Production, generally differs from Test or Dev environments, however as the components maybe more widely distributed and you are generally starting with an at least partially configured server (unless you are dedicating a production application server instance to RiskManager).
Different sites do differing things for production, some reinstall completely others duplicate test into production, some do everything manually for production, while using the automated system for Dev, etc.
We recommend a reinstallation - partly because it is the least error prone, and possibly faster.
==If You Have An Existing BPC RiskManager Production Installation==
If you have an existing RM installation in production, you can actually just copy the changed files onto the server (replacing the existing files of the same name) and start the RiskManagerData server once, then close it down, and you are done, so the auto-installer is not actually necessary in this case. Alternativley, you can run the uninstaller in production to remove the previous installation, and then use the new installer to reinstall. You will NOT loose any of your configuration settings - so it is completely safe to do this. That will essentially make you existing system a raw machine EXCEPT that the connection settings will be in place already.
If this is your situation, the steps below are still correct BUT you should NOT let the installer create the database(s) for you - as you already have the connections present. Just say no to this question when it comes up during installation.
==Performing the Migration To Production==
Read the preceding section if your production server has a pre-existing RiskManager V6 installation. If you are migrating from BPC RiskManager Express or RiskMan, you DO NOT NEED TO UNINSTALL, BPC RiskManager V6.x will ignore the Express settings and installation.
Assuming we are starting with a W2003+ server that does not have a pre-existing RM installation, and that your SQL Server is on a separate computer:
===MAKE DECISIONS BEFORE INSTALLING:===
<ol>
<li> If using BPC support during installation email us to arrange a time for our call to assist you install.
<li> Decide whether you are going to enable SurveyManager as part of the installation, or later. (Ask the business)
<li> Decide how many databases will be set up in production (Can be increased later if desired, but easiest if known prior to installation as the installer does all the work for you).
<li> If you want to make an existing database available in production that has been set up in dev/test and you will NOT be using the same physical database as that set up in dev/test, decide whether you will be will be using the RM installer to restore a backup of the established database into production, or whether you will restore the backup separately (after installation completes). You should consider:
* If you have already restored the database into production you probably do not the installer to attempt to create it
* If the target database is the "DEFAULT" connection (so named) of the application server and the database does not already exist in production and the database server is essentially a single server solution with data and log files on the same server then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database is a uniquely named database connection of the application server and the database does not already exist in production and the database server is essentially a single server solution with data and log files on the same server then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database server is a complex configuration with log files and data files separated across multiple NAS/Servers etc, the installer will probably not be able to determine the configuration correctly as the information is not always available to it in the remote registry (although it will attempt to do it correctly). So restoring from backup on a remote machine may not succeed. You are probably best to do this manually prior to installing the server, or if the database does not yet exist on the target server, the simplest approach is to let the installer create an empty database of the same name and then restore your backup over the top of the newly created database after installation completes. If you choose the former approach, you will need to do some extra steps (instructions provided during installation below) so the client test will validate your install. If you do not create any databases during installation (or have a pre-existing database to which to connect) you will not be able to validate connection during installation. We strongly recommend that you at least let let the installer create its default database and test connection to that. You can always discard it later.
<li> Decide whether you will be using network compression comms or the default raw comms (see the instructions below for the implications of this decision – raw is simplest) and if both, which will be the default. (Can be enabled later if desired)
<li> Decide whether you will be enabling HTTP/HTTPS comms access as well. (Can be enabled later if desired)
<li> Decide whether you will be using the desktop client and or the browser plugin (we recommend the desktop client – both have the same functionality, but the plugin behaviour varies a little across different Win OS’s and IE versions due to MS security changes, so if you have mixed desktop OS’s not every desktop will behave exactly the same. If you want to know the implications or need this explained further, ask us or look on the riskwiki.
<li> Verify the installation site (eg the remote desktop on which the installer will be working) has phone access (preferably hands free), and that you know the telephone number for the phone, and, ideally, outbound internet (IE/Firefox/etc) access so you can look at the riskwiki if needed.
<li> You should do steps 1 – 9 below prior to the BPC support call.
</ol>
===PREPARE THE SITE BEFORE INSTALLING:===
<ol>
<li> Verify server has the following infrastructure on it:
* Functioning network connection to the rest of the network with port 211 (and ideally port 212 as well) and SQL Server TCP ports available – eg 1433.
* Functioning installation of IIS 6+
<li> Verify the server either has on it or available to it:
* Functioning SQL Server (any version) configured in Mixed mode authentication or SQL Authentication mode
* Functioning SMTP server that will accept relays from this machine (this can always be configured later)
<li> Verify that you the person installing knows:
* Server local system administrator user ID / PWD
* SQL Server user id SA / PWD (If SA is not available you will need to speak to me again)
* The name of the SQL Server and the instance (if not using the default instance)
* The Administrator account user ID (usually Administrator) and PWD for the RiskManagement system. This is database specific, and more important when restoring than installing. Not knowing does not stop you installing, but may prevent you from connecting via a client when the test is run at the end of the installation. Otherwise, any RM Administrator account is fine to use. It is auto-created on first connection, so it can often be the user name of the person who does the installation. Ideally you settle on a common user name, and always use that across all databases and remember the password. Access by the root administrator account can be blocked by the RM system administrator after installation of a fresh database, so for restored databases, it may be that this account’s access is blocked anyway.
* The http addressable name of the application server as it would be typed into browser address bar by a remote LAN client (eg: a human operating from her office)
* The fully qualified domain name of the application server as it would be entered in the windows network browser of a remote user if they were able to browse to a folder on the application server (eg. the human again)
(NOTE: Part of the installation process is to create special purpose limited rights SQL accounts, the installer either creates these for you, or expets you to know the passwords. I am assuming they do not exist yet on the target SQL server. You will need to provide a password during the installation for the “riskmanuser” sql server account. The installer will make this account if it is not available already, so you need to have decided what the password will be. I recommend using the same password as that used for dev. This is a limited rights ID. The other accounts will be set to use the same password. They can be changed manually later if desired.).
<li> If transfering the dev database into production:
* Prepare a backup of the dev database.
* Ensure the verison of SQL Server in production is the same as, or higher than, that in dev from where the backup comes (eg. You can NOT restore an SQL 2008 backup into an SQL 2005 server, but you can do the reverse)
<li> Confirm with the RM administrator how many databases they want in production. We recommend a minimum of two databases, the default auto names database, and another spare / empty database for future use. The auto named database will have the connection name “DEFAULT”, the other database can have whatever connection name you choose. The autonamed database will be called RiskManDB625 and the connection will be called “DEFAULT”. The connection name (and in fact the database name) can be changed later. The connection name is the name the user sees as the database name. The caonnection DEFAULT does not need to be entered at all by the user – so this is ideally the main database in use.
<li> Copy the RM Installer to a directory of the application server that will be accessable to the person performing the installation.
<li> Copy the backup file to a directory on the SQL server that the SQL server will be able to access (read from) during a restore. We recommend that that directory is the default backup directory for the targeted instance of the SQL server as that is where it will read from naturally (and if you use the installer to do it, the SQL server must be able to read the file – so it needs to be readable by the SQL server under the SA account).
<li> Verify that the place from which you will be connecting to the application server (ie the remote client) has a telephone preferrably able to run work in hands free mode (so we can talk you through the process by phone).
<li> Locate your BPC RiskManager registration code so you can enter it when asked. You will not need this until the client connects at the end of the installation process. If this is a new server and new database you will have up to 60 days to enter it.
<li> If you opted during the decision stage above to backup an existing RM database from Test and restore it into Production, you should do that now. (Or schedule it now to be done immediatley before the installation commences). Make sure you know the database name on the server.
<li> Send BPC an email or phone BPC to arrange a time for support to contact you – preferrably as long BEFORE you commence installing as possible. We will confirm the booking and contact you at that time. If you just wish us to be available should you need it during support, we will make sure we are able to take your call at that time, and email you a direct number to use should you need it.
</ol>
==Introduction==
A very detailed installation process is documented at [[RM625ENT Installation Instructions]].
However, that page assumes an essentially manual installation, starting from a raw-iron server, and includes installation of the required OS components. If you use the automatic installer (recommended) the process is much simpler. Production generally differs from Test or Dev environments, however, as the components may be more widely distributed and you are usually starting with an at least partially configured server (unless you are dedicating a production application server instance to RiskManager).
Different sites do different things for production: some reinstall completely, others duplicate Test into Production; some do everything manually for production while using the automated system for Dev, and so on.
We recommend a reinstallation – partly because it is the least error-prone approach, and possibly the faster one.
==If You Have An Existing BPC RiskManager Production Installation==
If you have an existing RM installation in production, you can simply copy the changed files onto the server (replacing the existing files of the same name), start the RiskManager DataServer once, then close it down, and you are done – so the auto-installer is not actually necessary in this case. Alternatively, you can run the uninstaller in production to remove the previous installation, and then use the new installer to reinstall. You will NOT lose any of your configuration settings – so it is completely safe to do this. That will essentially make your existing system a raw machine EXCEPT that the connection settings will already be in place.
If this is your situation, the steps below are still correct BUT you should NOT let the installer create the database(s) for you – as you already have the connections present. Just answer No to this question when it comes up during installation.
==Performing the Migration To Production==
Read the preceding section if your production server has a pre-existing RiskManager V6 installation. If you are migrating from BPC RiskManager Express or RiskMan, you DO NOT NEED TO UNINSTALL: BPC RiskManager V6.x will ignore the Express settings and installation.
Assuming we are starting with a W2003+ server that does not have a pre-existing RM installation, and that your SQL Server is on a separate computer:
===MAKE DECISIONS BEFORE INSTALLING:===
<ol>
<li> If using BPC support during installation, email us to arrange a time for our call to assist you with the install.
<li> Decide whether you are going to enable SurveyManager as part of the installation, or later. (Ask the business)
<li> Decide how many databases will be set up in production (Can be increased later if desired, but easiest if known prior to installation as the installer does all the work for you).
<li> If you want to make a database available in production that was set up in dev/test, and you will NOT be using the same physical database as dev/test, decide whether you will be using the RM installer to restore a backup of the established database into production, or whether you will restore the backup separately (after installation completes). You should consider:
* If you have already restored the database into production, you probably do not want the installer to attempt to create it.
* If the target database is the "DEFAULT" connection (so named) of the application server, the database does not already exist in production, and the database server is essentially a single-server solution with data and log files on the same server, then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database is a uniquely named database connection of the application server, the database does not already exist in production, and the database server is essentially a single-server solution with data and log files on the same server, then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database server is a complex configuration with log files and data files separated across multiple NAS devices/servers etc, the installer will probably not be able to determine the configuration correctly, as the information is not always available to it in the remote registry (although it will attempt to do so). So restoring from a backup on a remote machine may not succeed. You are probably best to do this manually prior to installing the server; or, if the database does not yet exist on the target server, the simplest approach is to let the installer create an empty database of the same name and then restore your backup over the top of it after installation completes. If you choose the former approach, you will need to do some extra steps (instructions are provided during installation below) so the client test will validate your install. If you do not create any databases during installation (or have a pre-existing database to which to connect) you will not be able to validate the connection during installation. We strongly recommend that you at least let the installer create its default database and test the connection to that. You can always discard it later.
<li> Decide whether you will be using network compression comms or the default raw comms (see the instructions below for the implications of this decision – raw is simplest) and if both, which will be the default. (Can be enabled later if desired)
<li> Decide whether you will be enabling HTTP/HTTPS comms access as well. (Can be enabled later if desired)
<li> Decide whether you will be using the desktop client and/or the browser plugin. (We recommend the desktop client – both have the same functionality, but the plugin behaviour varies a little across different Windows OSs and IE versions due to MS security changes, so if you have mixed desktop OSs not every desktop will behave exactly the same.) If you want to know the implications or need this explained further, ask us or look on the riskwiki.
<li> Verify the installation site (eg the remote desktop on which the installer will be working) has phone access (preferably hands free), and that you know the telephone number for the phone, and, ideally, outbound internet (IE/Firefox/etc) access so you can look at the riskwiki if needed.
<li> You should do steps 1 – 9 below prior to the BPC support call.
</ol>
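If you choose the "let the installer create an empty database, then restore your backup over the top of it" option discussed above, the restore step can be scripted. The sketch below is illustrative only: it assumes the auto-created database name RiskManDB625, that the standard sqlcmd utility is available, and placeholder server/credential/path values that you must substitute for your site.

```python
import subprocess

def build_restore_sql(db_name: str, backup_file: str) -> str:
    """T-SQL that restores a backup over an existing database.
    WITH REPLACE overwrites the database the installer created."""
    return (
        f"RESTORE DATABASE [{db_name}] "
        f"FROM DISK = N'{backup_file}' "
        "WITH REPLACE, RECOVERY"
    )

def restore(server: str, sa_password: str, db_name: str, backup_file: str) -> None:
    """Run the restore via sqlcmd. Server name, SA password and backup
    path are placeholders; adjust for your SQL Server instance."""
    subprocess.run(
        ["sqlcmd", "-S", server, "-U", "sa", "-P", sa_password,
         "-Q", build_restore_sql(db_name, backup_file)],
        check=True,
    )

if __name__ == "__main__":
    # Print the statement so it can be reviewed before running it.
    print(build_restore_sql("RiskManDB625", r"C:\Backups\riskman_dev.bak"))
```

The same statement can of course be run directly in a SQL query window instead of via sqlcmd; the point is that a plain RESTORE ... WITH REPLACE over the installer-created database preserves the connections the installer set up.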
===PREPARE THE SITE BEFORE INSTALLING:===
<ol>
<li> Verify server has the following infrastructure on it:
* Functioning network connection to the rest of the network with port 211 (and ideally port 212 as well) and SQL Server TCP ports available – eg 1433.
* Functioning installation of IIS 6+
<li> Verify the server either has on it or available to it:
* Functioning SQL Server (any version) configured in Mixed mode authentication or SQL Authentication mode
* Functioning SMTP server that will accept relays from this machine (this can always be configured later)
<li> Verify that the person performing the installation knows:
* Server local system administrator user ID / PWD
* SQL Server user id SA / PWD (if SA is not available you will need to speak to us first)
* The name of the SQL Server and the instance (if not using the default instance)
* The Administrator account user ID (usually Administrator) and PWD for the RiskManagement system. This is database specific, and more important when restoring than when installing. Not knowing it does not stop you installing, but may prevent you from connecting via a client when the test is run at the end of the installation. Otherwise, any RM Administrator account is fine to use. It is auto-created on first connection, so it can often be the user name of the person who does the installation. Ideally, settle on a common user name, use it across all databases, and remember the password. Access by the root administrator account can be blocked by the RM system administrator after installation of a fresh database, so for restored databases it may be that this account’s access is blocked anyway.
* The http addressable name of the application server as it would be typed into browser address bar by a remote LAN client (eg: a human operating from her office)
* The fully qualified domain name of the application server as it would be entered in the windows network browser of a remote user if they were able to browse to a folder on the application server (eg. the human again)
(NOTE: Part of the installation process is to create special-purpose limited-rights SQL accounts; the installer either creates these for you, or expects you to know the passwords. I am assuming they do not yet exist on the target SQL server. You will need to provide a password during the installation for the “riskmanuser” SQL Server account. The installer will create this account if it does not already exist, so you need to have decided what the password will be. I recommend using the same password as that used for dev. This is a limited-rights ID. The other accounts will be set to use the same password. They can be changed manually later if desired.)
<li> If transferring the dev database into production:
* Prepare a backup of the dev database.
* Ensure the version of SQL Server in production is the same as, or higher than, the one from which the backup comes (eg. you can NOT restore an SQL 2008 backup into an SQL 2005 server, but you can do the reverse).
<li> Confirm with the RM administrator how many databases they want in production. We recommend a minimum of two databases: the default auto-named database, and a spare/empty database for future use. The auto-named database will be called RiskManDB625 and its connection will be named “DEFAULT”; the other database can have whatever connection name you choose. Both the connection name and (in fact) the database name can be changed later. The connection name is the name the user sees as the database name. The DEFAULT connection does not need to be entered at all by the user – so this is ideally the main database in use.
<li> Copy the RM Installer to a directory of the application server that will be accessible to the person performing the installation.
<li> Copy the backup file to a directory on the SQL server that the SQL server will be able to read from during a restore. We recommend that this is the default backup directory for the targeted instance of the SQL server, as that is where it will read from naturally. (If you use the installer to do the restore, the SQL server must be able to read the file – so it needs to be readable under the SA account.)
<li> Verify that the place from which you will be connecting to the application server (ie the remote client) has a telephone, preferably able to work in hands-free mode (so we can talk you through the process by phone).
<li> Locate your BPC RiskManager registration code so you can enter it when asked. You will not need this until the client connects at the end of the installation process. If this is a new server and new database you will have up to 60 days to enter it.
<li> If you opted during the decision stage above to backup an existing RM database from Test and restore it into Production, you should do that now. (Or schedule it now to be done immediately before the installation commences). Make sure you know the database name on the server.
<li> Send BPC an email or phone BPC to arrange a time for support to contact you – preferably as long BEFORE you commence installing as possible. We will confirm the booking and contact you at that time. If you just wish us to be available should you need it during the installation, we will make sure we are able to take your call at that time, and email you a direct number to use should you need it.
</ol>
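The network prerequisites above (port 211, ideally 212, and the SQL Server TCP port, eg 1433) can be pre-flight checked from the installation machine before BPC calls. A minimal sketch; the 127.0.0.1 host names below are placeholders – substitute your application and SQL server host names:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts and name-resolution failures.
        return False

if __name__ == "__main__":
    # Replace 127.0.0.1 with your application / SQL server host names.
    for host, port in [("127.0.0.1", 211), ("127.0.0.1", 212), ("127.0.0.1", 1433)]:
        state = "open" if port_reachable(host, port) else "NOT reachable"
        print(f"{host}:{port} {state}")
```

A closed port here usually means a firewall rule is missing or the service (RM DataServer, SQL Server TCP listener) is not yet running – both cheaper to discover now than mid-installation.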
===INSTALLING:===
<ol>
<li> If using a remote client to connect to the application server and run the installation process (eg mstsc), verify that the remote client is set to operate at 96 DPI, not 120 DPI (there is a bug in the installer display routine that hides some buttons at the 120 DPI setting). If connecting via mstsc, enter mstsc /console as the connection command in Start/Run on the remote computer so that you are operating in console mode. This is important so that you can see the system tray icons.
<li> (If using BPC support, await the call first). Run the installer in “Complete Mode”, read the onscreen instructions and answer all the questions.
* Always create the default database during the initial installation.
* If restoring a backed-up dev database, the installer can do this AFTER it creates the databases, or you can do it manually after the entire process. For some complex SQL setups the manual approach may be required: while the installer attempts to locate the correct places for database restoration from the SQL Server registry, this is not 100% reliable due to the various ways this information is stored in the registry across different versions and instances of SQL Server. Let the installer create the blank database for you, so that all the connections are made, and then simply restore your backed-up database over the default database after the installation. If the SQL server is on the application server itself, installer-based restoration has a much higher probability of complete success.
<li> The installer will auto-register the components and start the BPC RiskManager DataServer console. If you are NOT connecting to an existing database (ie you let the installer create new databases), you can go on to the next step – just select "End Process" on the console window. Otherwise check the dot points below:
* If you want to connect to an existing database that was NOT created or restored during installation (ie. a database that exists but that is not yet known by the application server on THIS computer) AND you already have the database(s) set up on the production database server, you will need to configure the connections when the application server console window appears (ie. NOW): [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<li> Next, the installer will start the client locally for a test connection at the end to verify access to the default (or other) database. If you can connect and see the main screen after login you have successfully installed.
NOTE: The installer will set the server up in single-user edition and auto-administration access mode. This does not prevent remote access but will (usually) need to be changed to your correct access settings for production enterprise deployment. See the section "After Installation" below.
<li> Switch the server into web edition - click [[BPC RiskManager - General Configuration|on this link for instructions]].
<li> Set up client access. Either (or all):
* Copy the desktop client installer (there are two to choose from, depending on whether you prefer a single-exe or MSI installer) from /program files/bishopphillips/RiskManagerVxxx to a network share that will be accessible to users.
* Copy the already-installed client from the /program files/bishopphillips/RiskManagerVxxx/win32client directory to a separate computer/folder and make the folder shareable, if you want people to simply run the client across the network from a remote folder. The client does not actually need to be installed on a desktop to work, but installing it provides shortcuts/menus and enables the use of the network compression/encryption library in V6.2.5.x.
* Install the client into a citrix (or other remote desktop) image.
* Publish the browser plugin ActiveX client on the RiskManager web site for distribution.
<li> Go to a typical remote LAN computer and attempt to install/use the client set up in the previous step to access the server, using the same account used previously, to verify remote connectivity to the application server.
<li> If intending to use streaming network compression/encryption, follow the instructions in the riskwiki for enabling it. Remember you will need to advise all users that the access settings differ from the client defaults (a box has to be ticked, and possibly a port changed, in the login window). If using streaming network comms, we recommend 2 ports be enabled – one for raw comms and one for compressed comms. (Hence the suggestion at the start that you clear 211 and 212 for RM comms.) In reality RM does not care what port is used. By default it is set to expect communications on port 211, but you can set it to use any combination of ports you like. We advise sticking with the recommended ports (obviously). If using streaming compression, you should probably, for simplicity, enable that on port 211 – so clients only need to tick a box to enable it – and set the raw channel to be 212, as the raw channel is only for troubleshooting and backup connections.
Note: enabling compression/encryption will EXCLUDE the option of copying clients as a means of installation, as the compression library is currently a separate lib in V625.x – that will change in a future release.
</ol>
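The two-port convention recommended above (compressed comms on the default port 211, raw comms on 212 for troubleshooting and backup connections) can be captured in a small helper. This is a sketch only – ClientConnectionSettings is a hypothetical name, not part of the RM client; it simply encodes the rule that ticking the compression box keeps the default port 211, while the raw channel uses 212:

```python
from dataclasses import dataclass

# Port convention suggested in the text: compressed comms on the
# default port 211, raw comms on 212 (troubleshooting / backup only).
COMPRESSED_PORT = 211
RAW_PORT = 212

@dataclass
class ClientConnectionSettings:
    """Hypothetical model of the client login settings."""
    host: str
    use_compression: bool = True  # the "tick box" in the login window

    @property
    def port(self) -> int:
        # Compression stays on the default port so users only tick a box;
        # untick it to fall back to the raw troubleshooting channel.
        return COMPRESSED_PORT if self.use_compression else RAW_PORT

if __name__ == "__main__":
    print(ClientConnectionSettings("appserver").port)         # 211
    print(ClientConnectionSettings("appserver", False).port)  # 212
```

Keeping the compressed channel on the default port means the only non-default setting most users ever touch is the compression tick box itself.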
=After Installation=
Most of these actions require you to use the RiskManager application server configuration console. So firstly, on the application server computer locate the "BPC RiskManager DataServer" in the start menu and start it. When started, the application server appears as an icon in the Windows system tray, typically located in the lower right hand corner of your screen. Double-click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program. The configuration console will open. Then:
<ol>
<li> Now proceed to the instructions for completing the security/access set up:
<br>
<br>
* [[Security Configuration - Update Installation and Reset]]
<br>
<br>
<li> If you have additional databases to connect to RiskManager that you did not connect during installation, you should do that now: [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<br>
<br>
<li> Depending on which other components you are using (network streaming compression/encryption, email messaging, SurveyManager, browser plugin client, etc.) there may be a few manual steps to complete the installation using the RM Configuration wizard after the installation finishes and tests have been completed. You should generally, in any case, access the IIS server after installation and enable “Unknown ISAPI extensions” for SurveyManager operation, even if the SurveyManager is not being used yet, as it will save you time later when someone decides, seemingly at random, to create a survey. The explanation of how to do this is in the riskwiki instructions below. Now do each of these steps in order (note all are optional – the system will work without any of these configurations, but some things like email will not be available without them):
<br>
<br>
# [[BPC RiskManager - Send Mail Options Configuration]]
# [[BPC RiskManager - Mail Server Connection Properties]]
# [[BPC RiskManager - Logging Configuration (OPTIONAL)]]
# [[BPC RiskManager - Create the Root Administrator]]
# [[BPC RiskManager - Distribution of Client Components]] (Browser plugin ActiveX)
# [[BPC RiskManager - Configure Risk Mail Manager]]
<br>
<br>
<li> If you are using the survey engine, the installer will have set it up on the application server, but there are a couple of things you will need to do. In particular, you will have to manually tell IIS to allow unknown "ISAPI extensions", and if you have connected to a pre-existing database (rather than one created during the installation process) you will need to configure it. Also, if your SurveyManager web server will be different from your application server computer (eg a web farm), you will need to do the config step for each database in the RiskManager environment. (There is a special tab to help handle the multi-database situation efficiently.)
<br>
<br>
* [[BPC RiskManager - Install The SurveyManager]]
</ol>
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
=Real Learning in Virtual Worlds: An assessment of two approaches to content delivery with respect to learning outcomes=
Author: Dianne Bishop
A Minor Thesis
Submitted in partial fulfilment of the requirements for the
Degree of Master of Information Technology (Minor Thesis)
Faculty of IT
Monash University
December 2008
==Abstract==
This thesis comparatively explores two methods of delivering lecture-based teaching material in the virtual world Second Life, by comparing and contrasting tested outcomes of Bloom’s ‘remember’ and ‘understand’ cognitive processes and analysing qualitative feedback on participants’ experiences.
The study provides an extensive literature review covering the history of research and invention in virtual worlds, commencing from gestation in fictional writings through to realisation in the current genre of massively connected online virtual worlds; it then summarises the specific research into the application of virtual worlds in education and outlines alternative models for measuring learning outcomes.
From this basis the thesis documents an experimental framework, a virtual-world teaching laboratory and a learning management system built for the purpose of delivering lecture material in a controlled, experimental manner, and an experiment conducted to compare the outcomes of two alternative delivery systems. Using otherwise identical content, a “classic” 2D lecture and the same lecture augmented by 3D models and simulations were delivered to randomly selected participants, and their achievement scores for Bloom’s cognitive processes ‘remember’ and ‘understand’ were graded and analysed.
The research found no significant difference in either ‘remember’ or ‘understand’ cognitive-process grades between the 2D and 3D groups, although the 3D group demonstrated a non-statistically-significant advantage in remembering at the extreme lower and upper deciles.
The thesis concludes by identifying a number of opportunities for further research.
==Table of Contents==
*[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|CHAPTER 1: Overview]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.1 Background to the Study.]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.2 Research Questions.]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.3 Overview of Study.]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.4 Significance and Limitations.]]
=Real Learning in Virtual Worlds: An assessment of two approaches to content delivery with respect to learning outcomes=
Author: Dianne Bishop
A Minor Thesis
Submitted in partial fulfilment of the requirements for the
Degree of Master of Information Technology (Minor Thesis)
Faculty of IT
Monash University
December 2008
==Abstract==
This thesis comparatively explores two methods of delivering lecture-based teaching material in the virtual world Second Life by comparing and contrasting tested outcomes for Bloom’s ‘remember’ and ‘understand’ cognitive processes, and analysing qualitative feedback on participants’ experiences.
The study provides an extensive literature review covering the history of research and invention in virtual worlds, from their gestation in fictional writings to their realisation in the current genre of massively connected online virtual worlds; it then summarises the specific research into the application of virtual worlds in education and outlines alternative models for measuring learning outcomes.
From this basis the thesis documents an experimental framework, a virtual world teaching laboratory and a learning management system built to deliver lecture material in a controlled, experimental manner, together with an experiment conducted to compare the outcomes of two alternative delivery methods. Using otherwise identical content, a “classic” 2D lecture and the same lecture augmented by 3D models and simulations were delivered to randomly selected participants, and their achievement scores for Bloom’s cognitive processes ‘remember’ and ‘understand’ were graded and analysed.
The research found no significant difference between the 2D and 3D groups in scores for either the ‘remember’ or the ‘understand’ cognitive process, although the 3D group demonstrated a non-statistically-significant advantage in remembering at the extreme lower and upper deciles.
The thesis concludes by identifying a number of opportunities for further research.
==Table of Contents==
*[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|CHAPTER 1: Overview]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.1 Background to the Study.]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.2 Research Questions.]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.3 Overview of Study.]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.4 Significance and Limitations.]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.5 Structure of Thesis.]]
*[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|CHAPTER 2: Literature Review]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.1 Introduction.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.2 Virtual Worlds.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.2.1 What is a Virtual World?.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.2.2 Recognising a Virtual World by its Features.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4 A Taxonomy of Virtual Worlds.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4.1 Introduction.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4.2 A Taxon for Virtual Worlds.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4.3 Applied Taxonomies.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.6 Dimensioning Virtual Worlds.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.6.1 The Degree of Virtuality.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.6.2 The Degree of Immersion and Presence.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7 Influences on Virtual Worlds from Art and Literature.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7.1 Introduction.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7.2 Virtual Worlds of the Arts.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7.3 Virtual Worlds of Fiction and Fantasy.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8 The History of Computational Virtual Worlds.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.1 Introduction.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.2 Hardware Based User Interfaces and Virtual Reality Systems.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.3 Early Graphical Computer Games.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.4 Text Based Virtual Worlds.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.5 Graphical Virtual Worlds.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.6 Simulation and Learning Systems.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.9 Virtual Worlds for Education.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.9.1 Architecture Considerations.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.9.2 Education Applications in Virtual Worlds.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10 Learning & Instructional Design Theory.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.1 Introduction.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.2 Behaviourism and Cognitivism.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.3 Gagne’s Nine Events of Instruction.]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.4 Bloom’s Taxonomy.]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.11 Summary.]]
*[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|CHAPTER 3: Research Design]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.1 Introduction]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.2 Problem Statement and Research Hypothesis]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.3 Research Rationale]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.4 Research Method]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.4.1 Theoretical Assumptions]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.4.2 Research Study]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.5 Research Population]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.6 The Virtual Learning Environment]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.7 Learning Task Design]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.7.1 Subject Matter]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.7.2 Instruction Delivery]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8 Instrumentation]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8.1 Pre and Post Quiz]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8.2 Survey: Learning Experience]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8.3 Instrument Reliability]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9 Analysis Method]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.2 Data Processing]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.3 Software]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.4 Quantitative Analysis Methods]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.5 Qualitative Analysis Methods]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.10 Summary]]
*[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|CHAPTER 4: Results.]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.1 Introduction]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2 Quantitative Analysis Results: Achievement Scores]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.1 Overview of Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.2 Pre-Quiz Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.3 Post-Quiz Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.4 Hypotheses Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.5 Survey Results: Likert Scales]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3 Qualitative Analysis Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3.2 Analysis Approach]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3.3 Themes of the Open Survey Questions]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.4 Summary]]
*[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|CHAPTER 5: Discussion & Conclusion]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.1 Introduction]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2 Quantitative Analysis]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.1 The Results of the Hypothesis]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.2 The Results of the Pre-Quiz]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.3 The Results of the Post-Quiz]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.4 Likert Scale Analysis]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.3 Qualitative Analysis]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.3.1 Thematic Analysis Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.3.2 Qualitative Analysis of Thematic Results]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.4 Discussion of Results]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.5 Conclusion]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6 Opportunities for Further Research]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.1 Improving Instrument Reliability]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.2 Course versus Lecture]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.3 Introducing a Real and Robot Presenter to the Experience]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.4 Testing Other Bloom’s Cognitive Processes]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.5 Outcome Measurement Over Time]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.6 Comparison to Real-World Training]]
*[[VirtualWorldLearningReferences|References]]
*[[Real Learining in Virtual World - Selected Appendices|Selected Appendices]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix A: Terminology]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix B: MMOG Analysis]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix I: Second Life Demographics]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix J: Pre-Quiz Score Results]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix K: Post-Quiz Score Results]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix L: Instrument Reliability Results]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix M: Qualitative Analysis: A Sample of Participants’ Comments]]
==Full Appendices==
The full appendices to the original master's thesis on which the Real Learning in Virtual Worlds articles are based include items such as the pre- and post-quizzes, reproductions of building signage, and graphics-heavy pages. This material is best examined in downloadable form. The full appendices, A through M, are available here.
The content of the download is:
#'''Appendices.'''
*Appendix A: Terminology.
*Appendix B: MMOG Analysis.
*Appendix C: Welcome Room Information Content
*Appendix D: Instruction: Slide Presentation.
*Appendix E: Pre-Presentation Slide Show.
*Appendix F: Pre-Quiz.
*Appendix G: Post Quiz.
*Appendix H: Survey.
*Appendix I: Second Life Demographics.
*Appendix J: Pre-Quiz Score Results.
*Appendix K: Post-Quiz Score Results.
*Appendix L: Instrument Reliability Results.
*Appendix M: Qualitative Analysis: A Sample of Participants’ Comments.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
Real Learning in Virtual Worlds - CHAPTER 1: Overview
=CHAPTER 1: Overview=
==1.1 Background to the Study==
“Imagine waking up in the morning and teaching a class without changing out of your pyjamas. Imagine teleporting and flying to the library instead of inching along a highway. Imagine teaching a classroom of students who may have blue skin, purple wings, or the body of a raccoon. Peculiar as they sound, all of these things are now possible” (Harvard's Berkman Center for Internet and Society, 2007; Kribble, 2007).
With recent advances in public access virtual world technology it is now practical for educators to experiment economically with virtual world based learning methods. Technological limitations no longer impose a substantial compromise on the educator’s preferred teaching method.
Virtual worlds differ fundamentally from the online HTML/PDF based learning environments progressively adopted for online education over the last 12 years, in the same way that a book differs from a lecture. At first glance, virtual worlds allow distance education delivery to move into a virtual representation of the real-world lecture, and therefore offer the possibility of a ‘quasi-realistic’ distance education delivery model. At second glance, they tempt the educator who is willing to fund the cost with visions of highly interactive, immersive and engaging teaching vectors and learning management systems extending beyond the options available in real-world training.
Public access virtual worlds offer educators some potentially significant opportunities in the education space. These include the opportunity to better approximate the real-world education experience for distance learners using low cost (often free) publicly available tools, and a reduction in the total cost of learning through the elimination of travel, reduced capital investment in bricks-and-mortar infrastructure, world-wide sharing of education content, standardisation of environment navigation and access methods, on-demand/automated training session delivery, availability 24 hours a day, 365 days a year, instant and automated assessment, instant planet-wide delivery (at homogeneous cost), and the use of software simulations in place of physical models. The virtual reality capabilities of virtual worlds offer immersive exposure to simulations of real-world experiences (such as tsunamis or tornadoes) and events that could otherwise only be described and illustrated in conventional education. They enable exploration of events, places, micro and macro worlds, and theories that are either impossible to reproduce in physical environments or prohibitively costly to implement for individual courses. Lastly, role-play-based simulations enable the exploration of foreign locations, cultures and historic events in a manner not otherwise economically available in the physical realm.
As the use of public access online virtual worlds is relatively new to the mainstream education community, many research questions remain unanswered. Exploitation of this technology is still relatively immature compared with traditional online learning platforms, and until perhaps the last few years much (although certainly not all) of the content has been more experimental than suitable for mainstream educational use.
Until the last few years, virtual worlds have been either special-purpose (like flight simulators) and exceptionally costly to construct, or insufficiently realistic, difficult to access, complex to use, constrained by limited communication vectors (such as missing audio or streaming media), or cumbersome and expensive to distribute and update. Only recently have public access virtual world architectures and infrastructures reached a level of maturity where convincing, workable and low cost solutions have substantially neutralised educators' objections surrounding cost, realism, availability, standardisation, access, content distribution, and richness of sensory and communication vectors.
Possibly the greatest hurdle still faced by educators willing to experiment in these worlds is that much of the public continues to perceive public online virtual worlds as game technology. They are yet to be widely acknowledged by mainstream educators as a valid option for the delivery of higher educational course material (Jamison, 2007). Yet the potential for both quality gains and cost savings from the successful exploitation of virtual world training in higher education and industry is very high. There is, therefore, a great need for research in this area that provides insight into the affordances of this technology in education, and guidance on its cost-effective use.
While much work has been done to compare the relative “effectiveness” of virtual world versus real world training over many years, little or no structured research has been undertaken comparing the “effectiveness” of different approaches to education within a virtual world.
With a few notable exceptions, research has traditionally examined virtual world training in the context of social interaction or 3D object manipulation and simulation. As discussed in the literature review, this body of work has generally found virtual training to be as effective as or better than the real-world equivalent (at least within the theoretical confines of the subject matter explored). Yet the realism available from the latest generation of virtual world technology makes it possible to simulate the real-world teaching environment itself, not merely to build better simulations and 3D models of teachable content. The traditional teaching environment[1] can now be practically reproduced: classrooms or lecture theatres providing a central location for real students to learn in a virtual world. Provided that participants are not constrained by the technological requirements discussed in the literature review, the latest environments increasingly allow the reproduction of a real-world learning environment, simulating almost verbatim the traditional real-world “chalk and talk” lecture experience.
In designing topic delivery, educators in virtual worlds are now presented with a choice: simulate a real-world lecture environment, delivering essentially the same presentation material they might deliver in a “chalk and talk” lecture in the real world; deliver a purpose-built simulation of the material itself; or adopt some combination of these two extremes. The literature review references many studies whose focus has been on assessing the effectiveness of simulating the teaching material rather than simulating the real-world teaching environment. In the former case the 3D software development effort lies chiefly in constructing the topic-focussed material, while in the latter it is biased more heavily towards the teaching environment, such as “lecture rooms”.
Although costs are only superficially explored in this research, it is perhaps reasonable to propose that purpose-built, topic-centric simulators for each course or subject are necessarily a more expensive investment proposition than a single initial investment in lecture room simulators shared by many lecturers and across many topics. The closer the virtual world training delivery model comes to mirroring its real-world equivalent, the more practical this latter option becomes, and the closer the preparation cost matches that of traditional real-world learning methods, yet without the overhead of real-world infrastructure and the physical transportation of students and teachers, much reducing the total cost of learning.
A casual survey by the researcher of the teaching infrastructure built by, or for, educators in at least one of these public access virtual worlds, one frequented by more than 200 educational institutions (SimTeach, 2008), reveals that the majority of teaching spaces have been built around exactly this traditional “chalk and talk” lecture model, with essentially conventional auditorium-style lecture rooms. Prima facie this seems an under-utilisation of the environment: surely, one might argue, if a 3D representation or simulation of an item can be built, the educator is almost duty-bound to exploit the capability. Of course, even in a virtual world with dedicated, fast-to-use 3D modelling and agent scripting tools, constructing 3D objects and simulations requires considerably more investment than the simple 2D slide show with audio voice-over that constitutes the body of a “chalk and talk” lecture.
A central question arises, therefore: on a platform capable of delivering 3D models and simulations, is the mere use of it as a virtual “chalk and talk” class room consisting of 2D lecture slides a reasonable and acceptable use of this technology? This is the central question that this research sets out to explore.
==1.2 Research Questions==
This study assessed learning outcomes for two groups in the widely adopted public access virtual world of Second Life. One group experienced a lecture on the topic ‘The Physics of Bridges’ as a 2D slide show presentation; the other group experienced the same lecture augmented with 3D models of the content contained in the slide show. The study was designed to answer the research question: how effective is learning in a virtual world using a traditional 2D slide show method compared to a 3D interactive simulation?
To carry out this study the following research hypothesis was formed:
Learning outcomes are not independent of the delivery method in a virtual world: varying the delivery method between a 2D and a 3D presentation results in a significant difference in participants’ post-quiz achievement scores in relation to Bloom’s cognitive processes of ‘remember’ and ‘understand’ applied to factual knowledge.
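The quantitative comparison implied by a hypothesis of this form is a two-sample test on post-quiz achievement scores. The sketch below uses Welch's t statistic on invented placeholder scores; it illustrates the kind of analysis involved, and is not the study's actual statistical procedure or data.

```python
# Illustration only: a two-sample comparison of post-quiz scores between a
# 2D and a 3D group. The score lists are invented placeholders, not the
# study's data, and Welch's t test is an assumed choice of method.
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

scores_2d = [11, 12, 9, 14, 10, 13, 12, 8]    # hypothetical scores out of 20
scores_3d = [15, 13, 16, 12, 17, 14, 15, 13]  # hypothetical scores out of 20
t, df = welch_t(scores_2d, scores_3d)
```

The resulting t value would then be compared against the t distribution at the computed degrees of freedom to judge significance at a chosen level (e.g. 0.05).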
Second Life was chosen as the experimental platform because of its low cost of access (free), wide platform availability (PC/Linux/Mac), wide adoption (16 million plus registered users (Linden Lab, 2008a)), large educator community (200 plus educational institutions (SimTeach, 2008)), the maturity and capability of its tool set (3D modelling, streaming and interactive audio, streaming media, web interfacing, HTML content support, etc.), instant content publication, and its environmental realism (real time content streaming, spatial audio, environmental and spatial lighting, 3D perspective, layering, animation, concurrent multitasking agents, realistic photo-finished avatar meshes, etc.).
==1.3 Overview of Study==
This research study was conducted in the online virtual world of Second Life. Using an experimental design approach, a virtual learning campus was constructed to deliver a lecture on the topic of ‘The Physics of Bridges’ via two different methods: a 2D slide show with audio (reproducing a real world lecture on the topic in the virtual space), and the same 2D content and audio augmented with immersive 3D models. Both delivery methods used identical content, slides, audio and time allocation. The independent variable was the presence or absence of 3D bridges and simulations matching the 2D slides and audio.
The 2D and 3D lecture environments simulated real-world lecture theatres with seating for up to 18 people and a large front-facing projection screen. The 3D lecture room contained an additional space with lecture screens on three walls, in which 3D objects appeared and which participants could examine and interact with. The 2D and 3D theatres were otherwise identical.
Participants were recruited from the in-world population of Second Life by advertisement and self-selection (i.e. without profiling or filtering) and without replacement (avatars could not repeat any test). Prior to the lecture they received a pre-quiz containing 8 questions to establish a prior-knowledge benchmark. After completing the pre-quiz, participants were randomly allocated to either the 2D or the 3D lecture theatre. On completion of their lecture they were given a 20 question post-quiz to test the learning outcomes of the lecture, and a survey to gain an understanding of their learning experience within the virtual world environment. A total of 111 participants took part in the entire research process: 55 in the 2D group and 56 in the 3D group.
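The allocation logic described above (each avatar admitted at most once, then randomly assigned to the 2D or 3D theatre) can be sketched as follows. The class name, balancing rule and seed are illustrative assumptions, not the experiment's actual in-world scripts.

```python
# Sketch of participation without replacement plus random group allocation.
# All names and the size-balancing tie-break are hypothetical.
import random

class Allocator:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.seen = set()                  # avatars that have already taken part
        self.groups = {"2D": [], "3D": []}

    def allocate(self, avatar):
        if avatar in self.seen:
            return None                    # no repeats: without replacement
        self.seen.add(avatar)
        # Assign to the smaller group; break ties at random.
        group = min(self.groups,
                    key=lambda g: (len(self.groups[g]), self.rng.random()))
        self.groups[group].append(avatar)
        return group

alloc = Allocator(seed=7)
first_group = alloc.allocate("Avatar A")   # "2D" or "3D"
repeat = alloc.allocate("Avatar A")        # None: repeat attempts are refused
```

Balancing by current group size keeps the two theatres within one participant of each other, consistent with the roughly even 55/56 split reported.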
The learning materials, along with the pre- and post-quiz questions, were constructed using Bloom’s cognitive processes of ‘remember’ and ‘understand’. The quiz questions were divided evenly across these processes, which provided the basis for analysis.
The analysis method adopted in this research was triangulation using mixed methods. The pre- and post-quiz questions provided the basis for quantitative analysis; the open questions in the post survey provided the basis for qualitative analysis. The two analyses were then triangulated in order to compare the learning outcomes and experiences of the two groups that took part in this research.
==1.4 Significance and Limitations==
Mirroring real world education, there are at least three barriers an educator must overcome in order to deliver virtual world training:
* Hosting infrastructure (the software environment that hosts the virtual world mechanics)
* Training infrastructure (the creation of training spaces in the virtual world, such as lecture theatres or presentation screens)
* Training content (the actual training material presented).
Today’s public online virtual worlds provide the hosting infrastructure while enabling low cost construction or acquisition of the training infrastructure, giving educators the opportunity to experiment efficiently with virtual learning delivery. Previously, educators faced extensive time, cost and complexity in building custom applications to deliver that infrastructure before any virtual learning could take place. What once required extensive support from heads of department now requires very little effort on the educator’s part.
With a public online virtual world such as Second Life, the cost to develop, publish and deliver a 2D slide show based instructional learning program, such as the one produced in this experiment, is comparable to that of a real world ‘face to face’ lecture. Yet, given the opportunity this technology offers to go beyond real world instructional methods, the temptation to exploit its full modelling and simulation capabilities is strong.
The research aimed to inform the question of whether it is ‘worth’ the extra cost and time to build something more complicated than a 2D slide presentation. For this research, cost was measured in time (hours). While the cost of the 2D lecture was identical to preparing and delivering the same lecture in the real world, the 3D augmented version cost approximately 3 times as much. There is therefore a significant incentive to determine, under experimental conditions, the difference in learning outcomes and participant experience between the two delivery methods.
To preserve the separation of content costs from infrastructure costs (both hosting and training), a re-usable, general purpose campus and lecture space was first constructed. With all content and tests independent of the campus and lecture infrastructure, and interchangeable, the environment can support both multiple simultaneous courses and rapid (5 to 15 minute) course changes in each lecture room. While this was not critical to the study, it was judged essential to the integrity of the assumptions on which the research was based: that virtual world content could be treated independently of the training infrastructure if a shared protocol was adopted. Secondly, the content preparation technology was intentionally constrained to a standard PC equipped with Second Life and MS Office: at minimum, PowerPoint, MS Audio Recorder (or another audio recorder) and the Second Life client are all that is required to prepare a course for delivery for the purpose of the research.
Despite the recent growth in publicly accessible online virtual worlds, little published work has been conducted in this specific area of research and, at the time of writing, few if any studies had been performed using experimental methods. There is a growing body of high grade scientific work on other educational and social dimensions of virtual worlds, and a respectable body of earlier work on purpose-built and text based 3D virtual worlds, particularly on the comparative aspects of virtual and real world presence. Possibly, it is only with the realism attained in the latest generation of full content streaming, mixed graphical, audio and text worlds that this research has become practical. The researcher’s motivation is thus to add, via an experiment conducted under controlled conditions, to a body of knowledge that as yet predominantly (if not totally) lacks scientific rigour.
There have been multiple studies comparing traditional face to face learning methods with distance education learning outcomes. Thomas Russell’s book ‘No Significant Difference Phenomenon’ (2001) documents a review of accumulated studies reaching back as far as 1928 that address the research question: ‘Does taking a course via distance education lower a student's chances for success as compared to the same student taking the same course in a face-to-face format?’ In most cases Russell found ‘no significant difference’ in learning outcomes; his common finding was that no student is better or worse off when distance learning delivery methods are compared with traditional face to face learning methods.
Similarly, Richard Clark’s (1983) article ‘Reconsidering Research on Learning from Media’ claimed that when the learning effects of different media platforms are compared, there is no significant difference in outcome. In this article Clark dismissed studies that did find differences, arguing that any differences found were due not to the medium but to the instructional design used in the study.
Clark’s article sparked a heated response from Robert Kozma, who held opposing views on the matter. This led to a public debate between the two researchers in academic journals (R. E. Clark, 1994; Kozma, 1994). The debate continues today amongst educational researchers and is commonly termed ‘The Media Debate’ (EduTech Wiki, 2009).
This researcher does not enter into the media debate, nor into the debate over whether real face-to-face learning ‘is better’ or ‘worse’ than virtual world learning. Rather, this research takes the position: ‘Now we are here [in the virtual world], what do we do?’
Consistent with this position, the researcher decided to recruit only from the in-world population. The related constraint is that the tested population is more likely to be pre-disposed to the virtual environment for a range of purposes, one of which might include education. In the context of this experiment, however, the researcher is not convinced that such a condition would have had any impact on the outcomes. Eliminating the novice user dimension removed mechanical unfamiliarity as a significant factor from the outcomes, which was appropriate for a study comparing virtual world delivery methods (as opposed to a study comparing virtual and real world learning methods); unfamiliarity has complicated the interpretation of some virtual world research results in prior studies.
==1.5 Structure of Thesis==
For common terms used in this thesis see Appendix A: Terminology.
Chapter Two, Literature Review, examines virtual world technology and provides a brief overview of educational learning theory.
The virtual world section discusses alternative definitions, characteristics, history, key architectural features, research outcomes and educational applications of virtual worlds. The review takes an historic perspective, discussing the key influences that have led to today’s massively multi-user virtual worlds, and concludes with a review of educational uses and affordances and of current research into online virtual worlds.
Chapter Two concludes with a review of the learning theory and instructional methods that provide the basis for the learning methods and materials used in this experiment.
Chapter Three, Research Design, examines the research design along with the researcher’s theoretical assumptions, environment design, lecture material design and the analysis methods adopted in this research study.
Chapter Four, Results, presents the quantitative and qualitative results of the virtual world learning experiment conducted in Second Life between the two groups of participants who undertook the differing delivery methods for a lecture on ‘The Physics of Bridges’.
Chapter Five, Discussion and Conclusion, provides an analysis of the results of the experiment, along with discussion of these results and opportunities for further research.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
32c50ce77d96d6139033f5943bbad674980ab982
361
307
2010-08-05T13:12:26Z
Bishopj
1
wikitext
text/x-wiki
=CHAPTER 1: Overview=
==1.1 Background to the Study==
“Imagine waking up in the morning and teaching a class without changing out of your pyjamas. Imagine teleporting and flying to the library instead of inching along a highway. Imagine teaching a classroom of students who may have blue skin, purple wings, or the body of a raccoon. Peculiar as they sound, all of these things are now possible” (Harvard's Berkman Center for Internet and Society, 2007; Kribble, 2007).
With recent advances in public access virtual world technology it is now practical for educators to experiment economically with virtual world based learning methods. Technological limitations no longer impose a substantial compromise on the educator’s preferred teaching method.
Virtual worlds differ fundamentally from the online HTML/PDF based learning environments that have been progressively adopted over the last 12 years for online education in the same way that a book differs from a lecture. At first glance, virtual worlds allow the distance education delivery to move into a virtual representation of the real world lecture, and therefore offer the possibility of a ‘quasi-realistic’ distance education delivery model. At second glance, they tempt the educator who is willing to fund the cost, with visions of highly interactive, immersive and engaging teaching vectors and learning management systems extending beyond the options available in real-world training.
Public access virtual worlds offer educators some potentially significant opportunities in the education space. These include the opportunity to approximate better the real-world education experience for distance learners using low cost (often free) publically available tools, and the reduction in the total-cost-of learning by elimination of travel, reduction in capital (infrastructure) investment through reduction in bricks & mortar infrastructure, world-wide sharing of education content, standardisation of environment navigation and access methods, on demand/automated training session delivery, “24 hours by 365 days a year” availability, instant & automated assessment, instant planet-wide delivery (at homogeneous cost) and the use of software simulations in place of physical models. The virtual reality capabilities of virtual worlds offer immersive exposure to simulations of real-world experiences (like tsunami’s or tornadoes) and events that otherwise could only be described and illustrated in conventional education. They enable exploration of events, places, micro and macro worlds, and theories that are either impossible to do in physical environments, or prohibitively costly to implement for individual courses. Lastly, the use of role play based simulations enable the exploration of foreign locations, cultures and historic events in a manner not otherwise available economically in the physical realm.
As the use of public access online virtual worlds is relatively new to the mainstream education community, many research questions remain unanswered. Exploitation of this technology is still relatively immature compared with traditional online learning platforms and therefore much (although certainly not all) of the content has been more experimental than useful for mainstream educational use – until, possibly, the last few years, if not currently.
Until the last few years, virtual worlds have been either special purpose (like flight simulators) and exceptionally costly to construct, or not sufficiently realistic, difficult to access, complex to use, constrained by limited communication vectors (such as missing audio or streaming media), or cumbersome and expensive to distribute and update. It has only been in the last few years that public access virtual world architectures and infrastructures have reached a level of maturity where convincing workable and low cost solutions have substantially neutralised objections of educators surrounding cost, realism, availability, standardisation, access, content distribution, and richness of sensory and communication vectors.
Possibly the greatest hurdle still faced by educators that are willing to experiment in these worlds is that much of the public continue to perceive public online virtual worlds as game technology. They are yet to be widely acknowledge by mainstream educators as a valid option for the delivery of higher educational course material (Jamison, 2007). Yet the potential for both quality gains and cost savings from the successful exploitation of virtual world training in higher education and industry are very high. There is, therefore, a great need for research in this area that provides insight into the affordances of this technology in education, and guidance on its cost-effective.
While much work has been done to compare the relative “effectiveness” of virtual world versus real world training over many years, little or no structured research has been undertaken comparing the “effectiveness” of different approaches to education within a virtual world.
With a few notable exceptions, research has traditionally examined virtual world training in the context of social interaction or 3D object manipulation and simulation. As discussed in the literature review, this body of work has generally found virtual training to be as effective or better then the real world equivalent (at least within the theoretical confines of the subject matter explored). Yet, a direct consequence of the realism available from the latest generation of virtual world technology has provided the ability to simulate the real world teaching environment itself, not just the ability to build better simulations and 3D models of teachable content. The traditional teaching environment[1] can now be practically reproduced – class rooms or lecture theatres providing a central location for real students to learn in a virtual world. Provided that participants are not constrained by technological requirements as discussed in the literature review, increasing the latest environments allow the reproduction of a real world learning environment, simulating almost verbatim the traditional real world “chalk and talk” lecture experience.
In designing the topic delivery, educators in virtual worlds are now presented with choices between simulating a real world lecture environment delivering essentially similar presentation material to that which they might deliver in a “chalk and talk” lecture in the real world and delivering a purpose built simulation of the material itself – or some combination between these two extremes. The literature review references many studies where the focus has been on assessing the effectiveness of simulation of the teaching material rather than simulation of the real world teaching environment. In the former case the 3D software development effort is significantly in the construction of the topic focussed material, while in the latter the 3D software development is more heavily biased to the teaching environment – such as “lecture rooms”.
Although costs are only superficially explored in this research, it is perhaps reasonable to propose that purpose built, topic centric simulators for each course or subject are necessarily a more expensive investment proposition than a single initial investment in lecture room simulators that are shared by many lecturers and across many topics. The closer the virtual world training delivery model gets to mirroring its real world equivalent the more practical this latter option becomes and the closer the preparation cost matches those of the traditional real world learning methods, yet without the overhead of real world infrastructure and physical student and teacher transportation much reducing the total cost of learning.
A casual survey by the researcher of the teaching infrastructure built by, or for, educators in at least one of these public access virtual worlds that is frequented by more than 200 educational institutions (SimTeach, 2008), reveals that the majority of teaching spaces have been built around exactly this traditional “chalk and talk” lecture model, with essentially conventional auditorium style lecture rooms. Prim-facie this seems an under-utilisation of the environment. Surely, if a 3D representation or simulation of an item can be built, one might argue, the educator is almost duty-bound to exploit the capability. Of course, even in a virtual world with dedicated fast-to-use 3D modelling and agent scripting tools, construction of 3D objects and simulations requires considerably more investment than a simple 2D slide show with audio voice-over, that constitutes the body of a “chalk and talk” lecture.
A central question arises, therefore: on a platform capable of delivering 3D models and simulations, is the mere use of it as a virtual “chalk and talk” class room consisting of 2D lecture slides a reasonable and acceptable use of this technology? This is the central question that this research sets out to explore.
==1.2 Research Questions==
This study assessed the learning outcomes using two groups in the widely adopted public access virtual world of Second Life. One group experienced a lecture on the topic ‘The Physics of Bridges’ as a 2D slide show presentation and the other group experienced the same lecture as a 3D augmented lecture of the content contained in the slide show presentation. In order to answer the research question: How effective is it to learn in a virtual world using a traditional 2D slide show method compared to that of a 3D interactive simulation?
To carry out this study the following research hypothesis was formed:
Learning outcomes are not independent of the delivery methods in a virtual world, in that varying the delivery method between 2D and a 3D presentation results in a significant difference in the post-quiz achievement scores of a participant in relation to Bloom’s cognitive process of factual knowledge of ‘remember’ and ‘understand’.
Second Life was chosen as the experimental platform for the research question because of its low cost of access (free), wide platform availability (PC/Linux/Mac), it wide adoption (16 million plus registered users (Linden Lab, 2008a)), huge educator community (200 plus educational institutions (SimTeach, 2008)), maturity and capability of its tool set (3D, streaming and interactive audio, streaming media, web interfacing, html content support, etc), content publication delay (instant) and its environmental realism (real time content streaming, spatial audio, environmental and spatial lighting, 3D perspective, layering, animation, concurrent multitasking agents, realistic photo finished avatar mesh, etc.).
==1.3 Overview of Study==
This research study was conducted in the online virtual world of Second Life. Using an experimental design approach a virtual learning campus was constructed to utilise two different forms of lecture delivery method on the topic of ‘The Physics of Bridges’. This topic was presented as a lecture with a 2D slide show and audio (reproducing a real world lecture on the topic in the virtual space) and the same 2D content and audio augmented with immersive 3D models. Both delivery methods used identical content, slides, audio and time allocation. The independent variable in the delivery method was the presence or absence of 3D bridges and simulations matching the 2D slides and audio.
The 2D and 3D lecture environment simulated real-world lecture theatres with seating for up to 18 people and a large front facing projection screen. The 3D lecture room contained an additional space with lecture screens on three walls in which 3D objects appeared and with which users could interact or examine. The 2D and 3D theatres were otherwise identical.
Participants were recruited from the in world population of Second Life by advertisement and self selection (i.e. without profiling or filtering) and without replacement (avatars could not repeat any test). Prior to the lecture they received a pre-quiz containing 8 questions to establish a prior-knowledge benchmark. After completion of the pre-quiz participants were randomly allocated to either a 2D or 3D lecture theatre. On completion of their lecture they were given a 20 question post-quiz to test the learning outcomes of the lecture and a survey to gain an understanding of their learning experience within the virtual world environment. A total of 111 participants took part in this entire research process. The 2D and 3D participants numbered 55 and 56 participants respectively.
The learning materials along with the pre and post quiz questions were constructed using Bloom’s cognitive processes of ‘remember’ and ‘understand’. The quiz questions were divided evenly across these processes, which provided the basis for analysis.
The analysis method adopted in this research was triangulation using mixed methods. The pre and post quiz questions provided the basis for quantitative analysis. The post survey open questions provided the basis for qualitative analysis. Both of these analyses were then triangulated in order to compare the learning outcomes and experiences of the two groups that took part in this research.
==1.4 Significance and Limitations==
Mirroring real world education, there are at least three barriers an educator must overcome in order to deliver virtual world training:
Hosting infrastructure (the software environment the hosts the virtual world mechanics)
Training infrastructure (the creation of training spaces in the virtual world such as lecture theatres, or presentation screens)
Training content (the actual training material presented).
Today’s public online virtual worlds provide the hosting infrastructure while enabling low cost construction or acquisition of the training infrastructure to enable educators the opportunity to experiment with virtual learning delivery efficiently. Prior to this, educators were faced with extensive time, cost and complexity to build custom applications that could deliver the infrastructure before any virtual learning could take place. What once required extensive support from heads of department now requires very little effort on the educator’s behalf to enter into the world of virtual learning.
With a public online virtual world such as Second Life, the cost to develop, publish and deliver a 2D slide show based instructional learning program, such as the one produced in this experiment, is comparable to that of a real world ‘face to face’ lecture. Yet given the opportunity of this technology to go beyond real world instructional methods the temptation to exploit the full modelling and simulation capabilities of the environment is strong.
The research aimed to inform the question as to whether it is ‘worth’ the extra cost and time to build something more complicated than a 2D slide presentation. For this research the cost was measured in time (hours). While the cost of the 2D lecture was identical to preparing and delivering the same in the real-world, the 3D augmented version was approximately 3 times the cost. There is therefore a significant incentive to determine under experimental conditions the difference in learning outcomes and experience of the participants when presented with two different forms of delivery methods.
To preserve the integrity of the concept of separation of costs of content from costs of infrastructure (both hosting and training), a re-usable general purpose campus and lecture space was first constructed. With all content and tests independent of the campus and lecture infrastructure and interchangeable, the environment that can support both multiple simultaneous courses and rapid 5 to 15 minute course change in each lecture room. While this was not critical to the study, it was judged essential to the integrity of the assumptions on which the research was based: that virtual world content could be treated independently of the training infrastructure if a shared protocol was adopted. Secondly, the content preparation technology expectations were intentional constrained to a standard SL and MS Office equipped PC. PowerPoint and MS Audio Recorder (or other audio recorder) and the Second Life client are all that is required at the minimum to prepare a course for delivery for the purpose of the research.
Despite the recent growth in publicly accessible on-line virtual worlds, little published work has been conducted in this specific area of research. Furthermore, at the time of writing none, if any, had been performed using experimental methods. There is a growing body of high grade and scientific work in other aspects of educational and social dimensions of virtual worlds, and a respectable body of earlier work in purpose built and text based 3D virtual worlds, particularly in the comparative aspects of virtual and real world presence. Possibly, it is only with the realism attained in the latest generation of full content streaming, mixed graphical, audio and text worlds that this research has become practical. Thus the researcher’s motivation is to add to a body of knowledge, which is, as yet, predominantly (if not totally) lacking in scientific rigour via an experiment conducted under controlled conditions.
There have been multiple studies that compare traditional face to face learning methods with distance education learning outcomes. Thomas Russell’s book ‘No Significant Difference Phenomenon’ (2001) documents a review of literature of accumulative studies that goes back as far 1928 with the research question: ‘Does taking a course via distance education lower a student's chances for success as compared to the same student taking the same course in a face-to-face format?’ In most cases Russell’s findings resulted in ‘no significant difference’ in learning outcomes. The common identifier by Russell being that no student is better or worse off when comparing distance learning delivery methods with that of traditional face to face learning methods.
Similarly Richard Clark’s (1983) article published in the early 80s ‘Reconsidering Research on Learning from Media’ claimed that when comparing learning effects of different media platforms, there is no signification difference in outcome. In this article, Clark dismissed any studies that did find differences by providing that any differences that may have been found were not due to the medium platform but rather to the instructional design in the study.
Clark’s article sparked a heated response from Robert Kozma who had opposing views on the matter. This lead to a public debate between the two researchers (R. E. Clark, 1994; Kozma, 1994) in academic journals. This debate continues today amongst educational researchers and is commonly termed ‘The Media Debate’ (EduTech Wiki (2009).
This researcher does not enter into the media debate nor does she enter into the debate over whether real face-to-face learning ‘is better’ or ‘worse’ than virtual world learning. Rather this research has taken the position of ‘Now we are here [in the virtual world] what do we do?’
Consistent with this position the research decided to recruit only from the in world population. Therefore the constraint related to this is that the tested population is more likely to be pre-disposed to the virtual environment for a range of purposes one of which might include education. In the context of this experiment, however, the researcher is not convinced that such a condition would have had any impact on the outcomes. The elimination of the novice user dimension removed mechanical unfamiliarity as a significant factor from the outcomes which was appropriate for a study comparing virtual world delivery methods as opposed to a study comparing virtual and real world learning methods, and has been a factor that complicated the interpretation of some virtual-world research results in prior studies.
==1.5 Structure of Thesis==
For common terms used in this thesis see Appendix A: Terminology.
Chapter Two Literature Review: examines virtual world technology and gives a brief overview of educational learning theory.
The virtual world section discusses alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education of virtual worlds. The review takes an historic perspective, discussing key influences that have led to today’s massively multi-user virtual worlds, and concludes with a review of educational uses, affordances and current research into online virtual worlds.
Chapter Two concludes with a review of learning theory and instructional methods that provides the basis of the learning methods and materials used to conduct this experiment.
Chapter Three Research Design: examines the research design along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods adopted in this research study.
Chapter Four Results: presents the quantitative and the qualitative results of the virtual world learning experiment conducted in Second Life between the two groups of participants who undertook the differing lecture delivery methods for a lecture on ‘The Physics of Bridges’.
Chapter Five Discussion &amp; Conclusion: provides an analysis of the results of the experiment along with discussion of these results and opportunities for further research.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
Real Learning in Virtual Worlds - CHAPTER 3: Research Design
<div class="nonumtoc">
=CHAPTER 3: Research Design=
==3.1 Introduction==
This study measured learning outcomes through the achievement scores of a multiple choice post-quiz at two cognitive levels of Bloom’s Factual Knowledge: Remember and Understand for two different lectures delivered in the virtual world of Second Life.
This chapter will discuss the research design of this study along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods used in producing the results discussed in the next chapter of this thesis.
==3.2 Problem Statement and Research Hypothesis==
The problem of this study was to determine the difference in learning outcomes between two randomly selected groups that attended the same lecture in a 3D virtual world using differing methods of delivery. Group 1 received a 2D slide show with pre-recorded audio in a lecture room setting (emulating a classical lecture in a 3D virtual world space) and Group 2 received the same lecture augmented with appropriate 3D objects in an appropriately modified virtual 3D theatre space. Both were delivered in the virtual world of Second Life. The research investigated whether a difference in the delivery method (the addition of interactive “life size” 3D models), where instructional design, timing, content and environmental setup are otherwise the same, produces different learning outcomes with respect to the two identified cognitive levels.
To carry out this study the following hypothesis was formed:
Learning outcomes are not independent of the delivery methods in a virtual world, in that varying the delivery method between 2D and a 3D presentation results in a significant difference in the post-quiz achievement scores of a participant in relation to Bloom’s cognitive process of factual knowledge of ‘remember’ and ‘understand’.
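The hypothesis is a test of independence between delivery method and post-quiz achievement. As an illustrative sketch only (the thesis does not publish its analysis code, and the counts below are invented), a 2x2 chi-square test of independence on pass/fail counts for the two groups could be written as:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Chi-square test of independence for a 2x2 contingency table
    [[a, b], [c, d]], e.g. pass/fail counts for the 2D and 3D groups.
    Returns (statistic, p_value) for 1 degree of freedom."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For 1 df, the chi-square survival function reduces to erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts (not the study's data): pass/fail for 2D vs 3D groups.
stat, p = chi_square_2x2(20, 15, 30, 5)
```

A small p-value would reject independence, i.e. support the hypothesis that delivery method and achievement are related.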
==3.3 Research Rationale==
In spite of the extensive current efforts of many institutions and educators to establish a virtual presence and adapt delivery of courses to this newly emerged generation of mass market virtual worlds, little (if any) formal and structured analysis has been undertaken by researchers to assess the comparative cognitive affordances of learning delivery methods in these spaces.
An anecdotal assessment of delivery methods in university campuses and training rooms within Second Life showed a preponderance of virtualised traditional lecture rooms, complete with front-facing chairs, projection screens and even lecterns. The implication is that a significant volume of current delivery in Second Life (at least) merely virtualises traditional real world delivery. The question arises, however: in a space capable of delivering highly interactive collaborative learning and 3D simulations, potentially for lower input costs than would be required in the real world, is the traditional chalk-and-talk approach the most appropriate?
There was a significant incentive to distinguish the effectiveness of these two learning approaches. The comparative cost, expertise and effort required to take a set of pre-prepared real world slides with audio and present them in a virtualised classroom is essentially the same as that required for real world delivery (at least in the Second Life virtual world). Even with Second Life’s simplified and efficient 3D building editor and scripting language, a 2D slide show with audio narration can be imported or streamed into Second Life and presented for a fraction of the preparation time, sophistication of learning materials and skill set required to build and utilise an interactive 3D simulation. Distinguishing between the two approaches would assist educators in determining whether the extra cost (of construction) and time (in design and preparation) involved in developing 3D instructional learning materials is justified by better learning outcomes.
==3.4 Research Method==
===3.4.1 Theoretical Assumptions ===
Previous research into education within virtual worlds can be divided into two main areas: research that assesses the affordances of the environment as an educational tool (Dickey, 2003; Gonzalez, 2007; Martinez et al., 2007; Youngblut, 1998), and research that compares virtual world learning outcomes to those of real world learning methods (Kurt, Mike, Jamillah, & Thomas, 2004; Mania & Chalmers, 2001; Youngblut, 1998).
The former usually takes an interpretive research approach; the latter a positivist research approach. From a purist’s standpoint, these two approaches sit at opposite ends of the scale in their theoretical assumptions, which in turn affects how the researcher approaches, conducts and analyses the research data.
In an interpretive research approach the researcher adopts an investigative approach to analyse and ‘understand’ the conceptual meaning of the social construct. This approach to research is one of total immersion, experiencing the research from an insider’s view, where the researcher plays a social actor within the social construct (Klein & Myers, 1999; A. Lee, 1991; Orlikowski & Baroudi, 1991).
A positivist analyst takes a very different approach from an interpretive analyst. Positivist research follows principles such as (A. Lee, 1991; Orlikowski &amp; Baroudi, 1991):
*the researcher is independent of the research,
*the inquiry is value-free,
*a linear cause-effect relationship exists and is verified and tested by deductive logic and analysis methods.
Without passing judgement on the merits of either approach, this research has generally taken a positivist research approach using a classical experimental design method (Neuman, 2006). A direct consequence of this decision was that a ‘laboratory’ first had to be created in the virtual world that could enable the delivery of the lectures under controlled experimental conditions. We will explore this laboratory in this chapter.
===3.4.2 Research Study===
A virtual learning campus was set up in the virtual world of Second Life where participants were randomly allocated into two groups to participate in either:
#2D Slide Show Lecture: Slides and audio in a class room setting
#3D Augmented Lecture: Slides and audio augmented by ‘life size’ 3D objects and simulations, in a class room setting
By ‘life size’ we mean that the 3D objects appeared larger than the participant in the 3D space, and large enough for the participant’s avatar to walk on and around them. The lecture, on ‘The Physics of Bridges’, was presented using identical subject matter, audio and slide timings; the only differences were the presence of the 3D objects and the minimum environmental changes necessary to allow avatar interaction with them.
Before the lecture participants were given a pre-quiz and afterwards a post-quiz to test the learning outcomes of each group with respect to Bloom’s factual knowledge of ‘remember’ and ‘understand’, and a survey collecting qualitative data about the experience. Both groups received identical pre and post quizzes and surveys. The questions in the pre-quiz differed from those in the post quiz. A summary of the experiment design is provided below (Table 7):
{|border="1"
!colspan="2"|Research Design Summary
|-
|'''Research Design'''
|'''Classical Experimental Design'''
|-
|'''Sampling'''
|Random without replacement (i.e. Avatars were prevented from taking either quiz more than once).
111+ selections.
|-
|'''Random Assignment'''
|Yes
|-
|'''Independent Variable'''
|Learning Delivery Method
Virtual 2D Slide Show Lecture vs. 3D Augmented Lecture
◦ Course Delivered: The Physics of Bridges
◦ Time 20 minutes for both
|-
|'''Groups'''
|2D Group: 2D Slide Show Lecture
3D Group: 3D Augmented Lecture
|-
|'''Dependent Variable'''
|Cognitive Learning Outcome
Post-test achievement scores measuring the lecture objectives of Bloom’s:
◦ Factual knowledge of Remember Cognitive process
◦ Factual knowledge of Understand Cognitive process
|-
|'''Instrument'''
|'''Pre-Test'''
Test current factual knowledge of topic before course delivery
'''Post-Test & Survey'''
Retest factual knowledge for ‘Remember’ & ‘Understand’ after course delivery
Survey of participant’s learning experience
|}
'''Table 7. Research Design Summary'''
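The sampling scheme in Table 7 (random assignment, without replacement, keyed on avatar identity) can be sketched as follows. This is an illustrative Python sketch only; the class and method names are invented, and the thesis's actual system was implemented inside Second Life with an external database.

```python
import random

class AssignmentRegistry:
    """Sketch of random assignment without replacement: an avatar key
    that has already been assigned a group cannot participate again.
    Illustrative only; not the thesis's actual implementation."""

    def __init__(self, seed=None):
        self._groups = {}            # avatar key -> '2D' or '3D'
        self._rng = random.Random(seed)

    def assign(self, avatar_key):
        if avatar_key in self._groups:
            raise ValueError("avatar has already participated")
        group = self._rng.choice(["2D", "3D"])
        self._groups[avatar_key] = group
        return group

reg = AssignmentRegistry(seed=42)
g = reg.assign("avatar-001")   # '2D' or '3D'
```

A second call with the same key is rejected, which is what prevents repeat participation.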
==3.5 Research Population==
The population and sampling frame for inclusion was the total residents of Second Life, consisting of 16,318,063 users (1,344,215 of whom had logged on in the previous 60 days), with demographics of 59% male and 41% female; the largest group, at 35%, is aged between 24-34 years, and all users are over 18 years of age. The majority of Second Life residents, 39%, live in the United States of America. Appendix I: Second Life Demographic provides a more detailed breakdown of these statistics (Linden Lab, 2008b).
It was decided to use only current in-world users (rather than recruiting new users to participate in-world) to avoid a weakness of previous research studies discussed in Chapter 2, where participants were learning a new toolset rather than the learning material presented (Martinez et al., 2007; Youngblut, 1998).
==3.6 The Virtual Learning Environment==
The virtual world Second Life was chosen over other virtual world environments in light of the discussion provided in Chapter 2 concerning Architecture Considerations and the review of Educational Research in virtual worlds. Second Life currently provides many benefits over other virtual worlds for open access to learning, due to a toolset that simplifies the rapid import of 2D materials and the construction of 3D interactive environments. Second Life has powerful scripting and modelling tools, standard in its interface, that provide a vast range of approaches with which to create the virtual learning environment. Lastly, as noted in Chapter 2, the take-up of Second Life for education purposes by tertiary institutions worldwide numbers in the hundreds.
In the section that follows we will discuss the virtual world learning environment (the ‘laboratory’) that was built in Second Life in order to conduct this research experiment.
====3.6.1.1 Building the Virtual Learning Environment: Design Considerations====
There are two general approaches to the design layout of a virtual space (Corbit, 2002). One separates places within the space into discrete areas, with users moving between them via portals (known as teleports in Second Life); the other is more representative of the real world, with users navigating to different places via such things as pathways between buildings or rooms within the virtual space. Both constructs offer advantages depending on the circumstances. Portals offer a simpler way for the user to navigate the space easily and quickly, whereas if one wants to help the user obtain a sense of placement, presence and collaboration within the virtual environment, then the latter may be more appropriate (S. Clark &amp; Maher, 2006), since the user is encouraged to explore the virtual space and so form a relationship with the environment (Corbit, 2002).
This virtual learning environment was built largely around the first approach, with a series of rooms between which participants navigated using teleports in order to complete the appropriate stage of the experiment, but with the rooms themselves emulating a real world environment: chairs for sitting, lecture rooms with projection screens and foyers, teller machines for delivering participant fees, etc.
The use of teleports not only offered simplicity of navigation but also enabled the control required over the steps in the process for the experimental design approach taken in this research. Teleports allowed the environment to be automated so that participants could operate it without intervention or assistance from the researcher, upholding the positivist aims of remaining unbiased, value-free and independent of the experiment under study (Orlikowski &amp; Baroudi, 1991). Furthermore, the use of distinct, purpose-specific rooms connected only by teleports was also indicated for technical and security reasons that will be discussed later in the System Controls section.
Further consideration was given to the construction of the rooms themselves, including the look and content of each room. Bellman and Landauer (2000) believe that a key question in the implementation and application of a virtual world is to decide what reality should be made virtual by incorporating “functional realism”. Functional realism is purpose-built realism that maintains sufficient realism for the illusionary effects of presence and immersion, but does not pursue absolute realism. Absolute realism, they believe, in most instances only distracts from the real objectives of the environment. For example, window scenes in a university lecture room showing passing cars, jets flying through the sky and construction on a neighbouring building may be realistic in the real world, but in a virtual world they would only distract students from their learning objectives. Applying functional realism not only provides focussed design but also enhances the virtual world by including only key components and excluding distractions that would be disruptive in the real world. [24]
This virtual learning environment was based upon a real world setting, using a theatre theme, with self-contained rooms that included only the essential elements needed to complete the learning task at hand.
====3.6.1.2 Virtual Learning Campus Overview====
The overall virtual learning campus consisted of a Welcome Room, a Pre-Quiz Room, 6 Lecture Room complexes (each containing an arrival foyer, theatre, exit foyer and theatre control room), a Post-Quiz/Survey Room and a central Control Room. Figure 49 provides an overview of the process flow of the virtual learning campus.
The starting area for all visitors was the Welcome Room, where the participant could read about the research, the rules, authority, standards, etc. From this room a participant could take a teleport to the Pre-Quiz Room. On arrival, avatar identity keys were automatically recorded.
After completing the pre-quiz in the Pre-Quiz Room, participants were paid a minimum amount for attending and could decide either to leave the research project or to continue on to a lecture. On commencement and completion of quizzes, avatar identity keys were recorded.
There were 6 Lecture Rooms, divided evenly between the 2 types of lecture: a 2D audio-slide show presentation and a 3D augmented audio-slide show presentation. Each lecture theatre could hold up to 18 seated participants, and lectures were timed to commence every 10 minutes in pairs.
If participants continued on to the lecture, their completion of the pre-quiz was automatically verified and they were randomly allocated, on teleportation, to one of the two lecture types. Once the lecture was complete they could teleport to the Post-Quiz/Survey Room to be tested on their learning outcome and surveyed on their experience, and finally they were paid for their participation in the research project.
This entire process took approximately 30 minutes for the participant to complete.
The virtual campus took approximately one man-month to build [25], with the 3D presentation content taking approximately 3 times longer to build than the 2D presentation content (approximately 3 days for the 3D presentation and 1 day for the 2D presentation).
In the section that follows a detailed view of each room is provided along with the function of the room.
Figure 49. Environment: Virtual Learning Campus Flow Chart
====3.6.1.3 Welcome Room====
The Welcome Room provided the entry point into the virtual campus (Figure 50). Here participants were provided with information about the research and, if they decided to participate, what would be expected of them within the research experiment.
This room contained four large wall signs and four smaller floor signs in each corner.
The wall signs provided the following information (see Appendix C: Welcome Room Information Content for more details):
*The aim of this research;
*What can I expect?
*How long will it take?
*Payment?
The floor signs provided the participant with a web link to the research explanatory statement (see Appendix C: Welcome Room Information Content for more details) and a virtual note card containing the Welcome Room information, which they could hold in their inventory and take away from the research location.
If the participant decided to take part in this research then they took a teleport (the gold rings partially visible in the image) from this room, which transported them to the Pre-Quiz Room.
Figure 50. Environment: Welcome Room
====3.6.1.4 Pre-Quiz Room====
The Pre-Quiz Room was a common area where all participants were given a pre-quiz to gauge their level of knowledge of the subject prior to delivery of the lecture.
A participant would be teleported from the Welcome Room into the centre of this room, where a large sign on the main wall instructed them to be seated in order to take the pre-quiz (Figure 51, Left). Once seated, a web link would be provided to them to take the pre-quiz. This web link connected to a survey engine that operated over the internet and stored details in a database outside the Second Life environment. The survey database recorded the participant’s answers to the pre-quiz along with other details such as the participant’s avatar key (the unique identifier of the Second Life user). The avatar’s key was used to verify that the participant had completed the pre-quiz prior to payment and teleportation into the next scheduled lecture.
Once the participant had completed the pre-quiz they could collect part payment for this stage of the research from an ATM along the back wall (Figure 51, Right) and then use a teleport, situated next to the ATMs, to transport them to the next scheduled lecture. Lectures were scheduled every 10 minutes for both the 2D and 3D presentations. A blue beam on the teleport indicated that the next lecture was available for teleporting, and timers beside the ATMs showed the time until the next lecture. On teleportation a participant was randomly allocated to either a 2D or a 3D lecture.
Figure 51. Environment: Left Pre-Quiz Room, Right ATMs & Teleporters
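The fixed 10-minute lecture schedule described above amounts to a simple modular-clock calculation. A minimal sketch (illustrative only; the thesis's actual timers ran as in-world scripts):

```python
def seconds_until_next_lecture(now_seconds, interval=600):
    """Lectures start every `interval` seconds (10 minutes in the text).
    Given a clock reading in seconds, return the wait until the next
    scheduled start. Illustrative sketch only."""
    remainder = now_seconds % interval
    return 0 if remainder == 0 else interval - remainder
```

This is the value a countdown timer beside the ATMs would display, and a teleport could enable its “lecture available” state when the wait drops below some threshold.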
====3.6.1.5 Lecture Theatre====
The participant would arrive in the foyer of the lecture theatre where they were instructed via floor signs to switch on their audio and video controls and to be seated inside the lecture theatre (Figure 52).
The slide presentation was delivered using in-world streaming web technology: PowerPoint slides were constructed, saved as HTML files and streamed into Second Life using an HTML viewer constructed in-world. Audio streams were also recorded and synchronised to each of these slides throughout the presentation.
Figure 52. Environment: Lecture Theatre
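Synchronising HTML slides to an audio stream comes down to a timeline lookup: given the elapsed time, find the slide whose start time has most recently passed. A hedged Python sketch (the file names and timings below are invented; the thesis does not publish its slide schedule):

```python
import bisect

# Hypothetical slide timeline: (start_second, slide_url) pairs, sorted
# by start time. Mirrors the idea of HTML slides cued to a fixed audio
# narration; the URLs and timings are invented for illustration.
TIMELINE = [
    (0,   "slides/slide01.html"),
    (45,  "slides/slide02.html"),
    (110, "slides/slide03.html"),
]

def current_slide(elapsed_seconds):
    """Return the slide that should be showing at a given elapsed time."""
    starts = [t for t, _ in TIMELINE]
    index = bisect.bisect_right(starts, elapsed_seconds) - 1
    return TIMELINE[max(index, 0)][1]
```

The same table, extended with object-creation entries, is one way the 3D variant could cue its models to appear and disappear in step with the narration.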
Both the 2D and 3D theatres were set up essentially the same and delivered within the same time frame of approximately 20 minutes of instructional delivery. The only variable that changed was the presence or absence of 3D objects in the delivery method of the presentation.
In the 2D presentation a participant remained seated to watch and listen to the 2D lecture throughout (Figure 53, Left). In the 3D presentation the participant commenced the session seated, but on commencement of the lecture a room would open up behind the front 2D presentation screen and the participant would be automatically transported in their chair and dropped into the 3D presentation space to view the slide show in a specially designed 3D viewing area (Figure 53, Right). Participants in the 3D presentation were then left standing in this space and were able to move around it if they wished. In the 2D mode the front-facing projection screen displayed the slides, while in the 3D space the 2D slides were projected on the walls around the 3D viewing area, with the 3D objects created and removed automatically, in sync with the slides and audio, in the centre of (and around) the viewing space.
Figure 53. Environment: Learning Delivery Method
Careful consideration was given to ensuring that both groups received the same instructional information. The only exception was that the pictures contained in the 2D slide presentation were translated into 3D form and either rotated and animated, or positioned for ‘walking on’ or exploration in front of the participant.
Once the lecture had completed, participants in both groups were instructed to move to the exit foyer and teleport to the next phase of the research project via teleports located there. The entrance to the exit foyer and the teleports therein were only switched on after the last slide had been delivered (Figure 54).
Figure 54. Environment: Lecture Room Teleporters
Each lecture theatre contained a hidden control room and a separate bank of teleports (restricted to the administration avatar) connecting it to the central control room, allowing independent movement and invisible monitoring of the lecture rooms; the hidden room also contained the control system and communication devices for that lecture theatre.
====3.6.1.6 Post-Quiz Room====
The final phase for the participants was to take a post-quiz and survey. This room operated in the same way as the Pre-Quiz Room.
The Post-Quiz Room was a common room into which all participants were teleported after their lecture, arriving in the middle of the room. A participant was instructed via the main sign on the wall to be seated in order to take the quiz and survey (Figure 55). Once they had completed the quiz and survey they were instructed to go to the back of the room to collect the final payment for their research participation from an ATM. The survey engine recorded their completion of the survey and only then enabled payment.
Figure 55. Environment: Post-Quiz Room
====3.6.1.7 Control Room====
At the centre of this system was a Control Room, responsible for managing the 28 public teleports as well as containing separate teleports for members of the administration team. At any time a member of the administration team could bypass the controls contained within the system and move to any room within the environment (Figure 56).
Figure 56. Environment: Control Room
====3.6.1.8 System Controls====
In the design considerations section it was mentioned that this environment was best set up as separate rooms with teleports for navigation. This decision increased security and allowed the teleports to operate as control gates.
Within Second Life a user can employ what is called roaming camera mode to look around without moving their avatar. A person can use this mode to view other locations within a definable distance and even operate controls such as the sit command, creating a security risk that a participant could bypass steps within the research process. Locating the rooms far apart, at random distances in 3D space and connected only by teleports, prevented this from occurring. Even if a participant found a way of teleporting to a location out of sequence with the research process (e.g. they had visited before and created a landmark to teleport back, or had given that landmark to another avatar), the teleports, seats and ATMs all communicated with a central off-world web site (containing the survey engine) which verified the proper completion of each required step and acted as a gatekeeper to stop a person from breaching the system.
At every stage when an avatar used a teleport, a quiz seat or an ATM, the device connected to an external database and looked up the avatar’s key to ensure that the appropriate stage had been completed before allowing access. For example, a participant had to have completed the pre-quiz before entry into a lecture theatre; if they tried to breach this sequence the teleport reported an error message and would not let them through. As a further example, a participant was required to complete an entire lecture before taking the post-survey: the exit teleports in the Lecture Room were disabled until the lecture finished, after which a participant could teleport to the Post-Quiz Room; in doing so the participant was flagged as having completed the lecture, which enabled them to take the post-quiz and survey.
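The gatekeeper logic is a simple stage-ordering check keyed on the avatar's identity. A hedged sketch in Python (stage names and function names are invented; the real checks ran between in-world devices and an external database):

```python
# Sketch of the gatekeeper: each device (teleport, quiz seat, ATM) checks
# a record of the avatar's last completed stage before granting access.
# Stage names are illustrative, not the thesis's actual identifiers.
STAGES = ["welcome", "pre_quiz", "lecture", "post_quiz", "paid"]

completed = {}   # avatar key -> index of last completed stage

def may_enter(avatar_key, stage):
    """Allow entry to `stage` only if the preceding stage is complete."""
    required = STAGES.index(stage) - 1
    return completed.get(avatar_key, -1) >= required

def mark_complete(avatar_key, stage):
    """Record completion; never lets an avatar's progress move backwards."""
    completed[avatar_key] = max(completed.get(avatar_key, -1),
                                STAGES.index(stage))
```

Because every device consults the same record, a participant who teleports out of sequence is refused at the next gate regardless of how they arrived.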
Other controls were built into the ATM machines so that a participant could only be paid once, and into the survey system so that a participant could only undertake the research once (they were allowed to attend again if they chose; they just could not take the quizzes or survey again).
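The pay-once ATM control is an idempotency check on the (avatar, stage) pair. A minimal sketch, with invented names and amounts:

```python
class ATM:
    """Sketch of the pay-once control: an avatar key can collect each
    payment stage exactly once. Class name and amounts are illustrative."""

    def __init__(self):
        self._paid = set()

    def pay(self, avatar_key, stage, amount):
        token = (avatar_key, stage)
        if token in self._paid:
            return 0           # already paid for this stage: pay nothing
        self._paid.add(token)
        return amount

atm = ATM()
first = atm.pay("avatar-001", "pre_quiz", 50)    # pays 50
second = atm.pay("avatar-001", "pre_quiz", 50)   # pays 0
```

The same record-then-refuse pattern, applied to quiz attempts rather than payments, is what stopped repeat participation in the surveys.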
This design allowed for an automated system that could operate 24 hours a day for multiple participants. It was also fault tolerant to possible SIM crashes, with the entire system able to restart and recover correctly unattended.
Lastly, because it was driven entirely by a specially designed control language held in replaceable text files, the design made for an easily modifiable and manageable system requiring minimal scripting changes to introduce new rules. An entirely new lecture and testing set can be loaded into the system in less than 5 minutes (once the content has been written or built).
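The thesis does not publish the syntax of its control language, but the idea of configuring a lecture set from a replaceable text file can be sketched with a hypothetical key=value format (all directive names below are invented):

```python
# Hypothetical control file: key=value lines configure a lecture set
# without any script changes. The directives are invented for
# illustration; the actual control language is not published.
SAMPLE = """\
lecture.title=The Physics of Bridges
lecture.duration=1200
slide.1=slides/slide01.html
slide.2=slides/slide02.html
"""

def parse_control(text):
    """Parse key=value lines, skipping blanks and '#' comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

config = parse_control(SAMPLE)
```

Swapping in a new file with different slides and timings is then all that is needed to load a new lecture, consistent with the under-5-minute turnaround described above.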
==3.7 Learning Task Design==
===3.7.1 Subject Matter===
The subject matter chosen was the Physics of Bridges. This topic was chosen for its familiarity (everyone knows what a bridge is), its obscurity (people generally know less than they might initially believe about the detail of how bridges work) and because the content could be easily adapted for both forms of delivery. The level of difficulty was aimed at approximately a year 12 high school student. The content was mainly sourced from academic and government information web sites. Appendix D: Instruction: Slide Presentation contains the delivered presentation along with a reference list on its last page.
===3.7.2 Instruction Delivery===
A virtual learning system, no matter how good its delivery design, is only as good as the instructional design of the learning task. As discussed in Chapter 2, Learning and Instructional Design Theory, the instructional methods used to assist in the delivery and assessment of the course were Gagne’s Nine Events of Instruction and the Cognitive domain of the revised Bloom’s Taxonomy.
This section provides details of how both the 2D and 3D materials were constructed; for the differences between the deliveries refer to section 3.6.1.5 Lecture Theatre in this chapter.
====3.7.2.1 Gagne====
The theme of this lecture was how the various bridge designs handled the key forces of tension and compression. A variety of bridge designs were explored with respect to these two forces.
Gagne’s nine events of instructional delivery were provided for as follows:
#Gaining Attention (Reception): This stage grabs the attention of the participant. A slide show containing a variety of ‘best of’ bridge structures, accompanied by music to motivate and excite the participant for the lecture to follow, was shown while participants arrived in the theatre prior to the commencement of the formal presentation (see Appendix E: Pre-Presentation Slide Show).
#Informing Learners of the Objective (Expectancy): This stage informs the participant what new knowledge they can expect to learn. The 2nd slide contained the objectives of the presentation (see Appendix D: Instruction: Slide Presentation). These objectives were also written using the revised Bloom’s taxonomy.
#Stimulating Recall of Prior Learning (Retrieval): This stage frames the new information in terms of current knowledge so that participants can relate better to the newly presented material. Every slide that introduced a new bridge structure contained a picture of a real bridge so that the participant could relate real-life experience to the new information being presented.
#Presenting the Stimulus (Selective Perception): This is where the new knowledge was presented. Each bridge form was presented with an overview, its relationship to tension and compression, and the limitations of the bridge design, with the information chunked into a logical structure. Stages (4) and (5) are interrelated, together providing the participant with new knowledge in a logical and meaningful context.
#Providing Learning Guidance (Semantic Encoding): This stage presents the information in a deeper form, allowing the participant to encode the new information into long-term memory. Here the information was presented in different forms using both pictures (and, in the case of the 3D group, 3D models) and text. Furthermore, three concepts (i.e. overview, tension and compression, and limitations) were provided for each bridge to enhance a participant’s breadth of knowledge of that bridge. The bridges were also presented from simplest to most complex so that participants could gradually understand the concept of a bridge structure and its relationship to tension and compression.
#Eliciting Performance (Responding): This stage allows the participant to ‘do something’ with their new knowledge. Given only 20 minutes to deliver the material, this stage was not performed. Had Bloom’s cognitive process of Apply been tested, inclusion of this stage would have been imperative. The researcher recognises that although time was a limitation of this study, this stage would ultimately have been interesting to include.
#Providing Feedback (Reinforcement): This stage is usually performed with feedback from the lecturer to confirm that the participant understood the new knowledge presented. Again, due to time constraints and the type of research method used (experimental design), direct lecturer interaction was not an option, so in order to hold the experiment constant for all participants summary slides were used. These provided a form of feedback by presenting the information again, but in a different form from that initially used in the main body of the presentation, forcing some degree of participant thought to process the summary information (the post-quiz, of course, served a similar purpose, but without the learning confirmation).
#Assessing Performance (Retrieval): In this research study this was the final stage of delivery, where participants were given the post-quiz to assess their learning outcome.
#Enhancing Retention and Transfer (Generalisation): The final stage of Gagne’s instructional delivery is to generalise and transfer the information delivered in light of new information that may be presented in future. This step was partly performed at stage (7), where the information was summarised. Transfer in normal (ie non-experimental) situations would allow students to take away their new knowledge, ie the lecture materials. Although this is possible in Second Life, because the experimental conditions had to be controlled the lecture materials were not transferred to the participant.
====3.7.2.2 Bloom’s====
The revised Bloom’s taxonomy (Anderson et al., 2001) provided the overall learning objectives of the course content (and therefore the new knowledge presented throughout the instruction) as well as the way in which participants were tested on this new knowledge. The two learning outcomes this research assessed were the ‘Remember’ and ‘Understand’ cognitive processes of the Factual Knowledge dimension of the revised Bloom’s taxonomy, as can be seen in Figure 57 below.
Figure 57. The Revised Bloom’s Taxonomy Table: Tested Process Dimensions
Bloom defines ‘remember’ of Factual Knowledge as the recall of knowledge presented to participants in the learning instruction: the basic elements of the subject matter. For example, the bridge types presented were Beam, Truss, Arch and Suspension. Recalling the names of these bridges is the cognitive process of ‘remember’ of Factual Knowledge. When tested, participants either remember or they do not.
Bloom defines ‘understand’ of Factual Knowledge as a means of promoting retention of ‘remember’ by linking the new knowledge with the participant’s prior knowledge. The participant can then do more than just remember, utilising the new knowledge in other forms such as interpreting, comparing and explaining. This knowledge is not necessarily presented in instruction; rather, it is assimilated from the entire body of information presented through instruction. For example, participants were tested on hybrid bridges but were never instructed on these forms of bridges in the lecture. The participant should have been able to construct this knowledge based upon the basic bridge forms presented in the lecture.
In applying the revised Bloom’s taxonomy, the researcher identified the learning objectives, defined each in terms of one of Bloom’s 19 levels of Cognitive Process (noting that each cognitive category contains specific cognitive processes), facilitated these objectives into instruction and then assessed them.
==3.8 Instrumentation==
The instrument used to assess a participant’s learning outcome as well as their overall learning experience was in survey form. Below is the survey structure that was used in this research study (Table 8):
{|align="center"
|-bgcolor="lightgrey"
|Pre-Survey
|''Total questions: 8''
|-bgcolor=white
|Pre-Quiz
|8 multi-choice questions
|-bgcolor=lightgrey
|Post-Survey
|''Total questions: 32''
|-bgcolor=white
|Post-Quiz
|20 multi-choice questions
|-bgcolor=lightgrey
|Survey
|2 Content knowledge: self-assessment of pre & post knowledge
3 Delivery method: self-assessment of quality of learning materials
2 Technology: assessment of technical difficulties
5 Learning experience: assessment of satisfaction level with the learning method
|}
<p align=center >
'''''Table 8. Pre and Post Survey Structure'''''
</p >
The survey system used to record the data was the web-based survey system discussed in The Virtual Learning Environment section of this chapter (Figure 58).
Figure 58. Web-Based Survey System
===3.8.1 Pre and Post Quiz===
A total of 28 quiz questions were prepared, divided into 2 groups covering Bloom’s Factual Knowledge processes of ‘remember’ and ‘understand’ (see section 3.7.2.2 Bloom’s for more details on the difference between these two cognitive dimensions). Of these questions, 8 were given to all participants as a pre-quiz and 20 in the post-quiz.
A participant was never tested on the same question twice, nor provided with the answers for either quiz, reducing the likelihood that a participant would learn from the quiz questions rather than from the lecture material presented. The pre-quiz was delivered to the participant prior to the lecture (see Appendix F: Pre-Quiz) and the post-quiz and survey were delivered directly after the lecture (see Appendix G: Post Quiz & Appendix H: Survey).
To construct these questions the researcher used Bloom’s Taxonomy, which provides sample objectives and corresponding assessment examples within each cognitive category. The multiple-choice questions used both direct selection and cueing formats: a direct selection question proposes a statement or asks a question and provides the participant with a list from which to select an answer, while a cueing question provides the participant with a sentence containing a blank space for which the responder selects an appropriate response from a multiple-choice list.
===3.8.2 Survey: Learning Experience===
After a participant completed the post-quiz, a brief survey of 12 questions (questions 21-32) was given to assess the participant’s own perception of their prior and post content knowledge, the delivery method, technological constraints and their learning experience. These comprised 6 Likert-scale questions (5-point scales), 1 yes/no question on technical difficulty along with a general comment to explain any difficulty, 2 questions listing the positive and negative experiences they perceived about the technology as a learning tool, and 2 open-ended questions for general comments about the course delivery and the participant’s overall experience (see Appendix H: Survey Q21-32).
The survey was implemented to assist the researcher in determining whether there may have been any adverse effects on a participant’s performance in completing the knowledge quiz, as well as to assist the researcher in gaining a better understanding of the overall research results and of participants’ relative experiences across the two delivery methods.
===3.8.3 Instrument Reliability===
Kuder-Richardson Formula 20 (KR-20) was the reliability test selected for the pre and post quiz questions due to the design of the instrument. As the pre-test and post-test were not equivalent, KR-20 measures internal consistency on a single set of survey results (Burns, 2000; Siegle, 2008). KR-20 is widely accepted, by those educators and psychologists who support the concept of instrument reliability, as a satisfactory method for measuring the reliability of a testing instrument (Yount, 2006).
To test the Likert scales in the post-survey, Cronbach's Alpha was used to measure reliability. It is similar in concept to KR-20, but Cronbach's Alpha allows for testing of data across scales, whereas KR-20 requires the data to be dichotomously scored (in fact, both produce the same results on dichotomously scored data).
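As a concrete illustration of the two reliability statistics, a minimal sketch of KR-20 and Cronbach's Alpha is given below (the study itself used Del Siegle's Excel spreadsheet, not this code). On dichotomously scored (0/1) items the two formulas coincide, as noted above:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's Alpha. 'items' is a list of columns, one list of
    scores per question, all of equal length (one entry per respondent)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]        # total score per respondent
    item_var = sum(pvariance(col) for col in items)     # sum of per-item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

def kr20(items):
    """KR-20 for dichotomously scored (0/1) items."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    pq = 0.0
    for col in items:
        p = sum(col) / len(col)       # proportion answering this item correctly
        pq += p * (1 - p)             # p*q term of the KR-20 formula
    return (k / (k - 1)) * (1 - pq / pvariance(totals))
```

For 0/1 data the per-item variance is exactly p(1−p), which is why the two functions agree on dichotomous scores.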
The overall results of the instrument reliability test were low. The problem with the instrument reliability test is that there were too few questions within each group to obtain a true value for the reliability test. The results along with a discussion of the instrument reliability tests performed are provided in Appendix L: Instrument Reliability Results.
==3.9 Analysis Method==
===3.9.1 Introduction===
As discussed in the Research Method section of this chapter, this research has generally taken a positivist research approach as opposed to an interpretive research approach. A purist approach to research from either side can lead to weaknesses when interpreting results (Onwuegbuzie, 2002; Richardson, 2005; Walsham, 1995; Weber, 2004). Critics argue:
*Positivist: that this method can lead to narrow, non-innovative and repetitive thought, while failing to understand that the selection of data, the method of collection, form of quantification and the tests applied are not themselves objective processes.
*Interpretive: that this method can lead to unresolvable propositions, contextually isolated understandings, non-reproducible observations and ideas sustainable only in the mind of the interpreter.
Thus, in order to minimise the weaknesses of positivist research the researcher has used triangulation. Triangulation in research can be applied in many forms; in this research it has been used as ‘theory triangulation’ as described by Denzin (1978), which involves using multiple theoretical perspectives in order to interpret the data results. Unlike the Denzin perspective, however, where triangulation is used as a means of avoiding bias and validating the data results, this researcher’s reasoning for applying theory triangulation is to gain a greater understanding of the results by adding range and depth to the quantitative data analysis (Fielding & Fielding, 1986; Olsen, 2004).
===3.9.2 Data Processing===
The survey data was extracted from the database, along with participants’ survey start and finish times, and processed in Microsoft Excel spreadsheets. After conducting a small number of trials with independent trusted respondents, not otherwise part of the assessment, to determine the minimum practical time for completion of the quiz and survey, it was decided that a cut-off time of 2 minutes would be used as the basis for filtering post-surveys. Post-quiz/surveys completed under this time were examined and removed. This time was based upon how long it took the researcher and the trusted respondents to read and respond to only the quiz questions at a medium speed. Each survey was also reviewed for possible fake entry of the quiz answers, eg selecting the first or last value for every question. By removing these surveys it was hoped to lessen the chance of erroneous results.
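The two screening rules described above can be sketched as follows. This is a hypothetical illustration only: the field names and record layout are assumptions, and the study performed this screening manually in Excel rather than in code:

```python
from datetime import datetime, timedelta

def is_valid_survey(record, min_duration=timedelta(minutes=2)):
    """Apply the two screening rules: discard post-surveys finished
    faster than the 2-minute cut-off, and discard straight-lined
    entries where the same option was chosen for every question.
    Field names ('started', 'finished', 'quiz_answers') are illustrative."""
    if record["finished"] - record["started"] < min_duration:
        return False                                  # completed too quickly
    if len(set(record["quiz_answers"])) == 1:
        return False                                  # eg all first or all last option
    return True
```

Borderline records that fail either rule would still be examined by hand, as in the study, rather than silently dropped.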
No missing data was contained in the survey because every field, except the general comments and technical comment questions, was a required response field before a quiz/survey was accepted by the system and saved to the database.
===3.9.3 Software===
The software used to analyse the data results was the Microsoft Excel 2007 Data Analysis add-in; STATGRAPHICS Centurion (2009), a statistical software package similar to SPSS; StatCal, an Excel spreadsheet developed by David Moriarty (2008) for testing normal distribution; and Del Siegle’s (2008) Excel spreadsheet for testing instrument reliability.
===3.9.4 Quantitative Analysis Methods===
Quantitative research methods are a natural fit with the principles of positivist research, which requires a scientific approach to analysis. Quantitative research can be described as a process of presenting and interpreting data that follows a linear research path using logical models to measure variables and test a hypothesis that is directly linked to a cause. Analysis is performed using hard data, (i.e. numerical) but soft data (i.e. non-numerical) may also be assessed by transforming natural phenomena into numbers using quantification techniques (Neuman, 2006).
====3.9.4.1 Operational Hypotheses====
Quantitative analysis methods require the research hypothesis (as given earlier in the Problem Statement and Research Hypothesis section) to be re-expressed as operational hypotheses so that each hypothesis forms a tighter, more testable statement (Burns, 2000). From the research hypothesis the following operational hypotheses were formed:
#(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
#(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Statistical analysis requires testing to be performed on a hypothesis of no difference, known as a null hypothesis (H0). Since H1 and H2 are expressed in terms of differences, the corresponding null hypotheses H01 and H02 were tested for no significant difference. If either null hypothesis H01 or H02 yields a statistically significant result then that null hypothesis is rejected, accepting the probability that the results of the experiment are unlikely to be a random variation in sampling error and that the conclusions drawn from the sampled population in the experiment can be drawn for the entire research population (Burns, 2000).
The experimental data used to test the above hypotheses were the participants’ multiple-choice post-quiz achievement scores. These multiple-choice answers were dichotomously scored (ie 0 for a wrong answer, 1 for a correct answer) and analysed as discussed next.
====3.9.4.2 Statistical Significance====
This study used the non-parametric Mann-Whitney U test to test H01 and the parametric t-test for independent groups to test H02. All significance tests used a critical alpha level (α) of 0.05; that is, a result was considered significant only if the probability (p) of obtaining it by chance under the null hypothesis was below 5%. The selection of these tests was based upon the way in which the hypotheses were formed and whether the results data met the assumptions of parametric test selection.
Burns (2000, p. 155) provides a flowchart to assist in the selection of a statistical test. As can be seen in Figure 59, the highlighted statistical tests are the test options available in this research study. The test selection is based upon a combination of the data type, hypothesis statement and the sample population selection.
Figure 59. Significance Test Selection
Burns (2000) states that if a researcher has a choice between a parametric and a non-parametric test it is best to select the parametric test. Parametric tests are more powerful at detecting significant differences than non-parametric tests because they not only take into account the rank order of scores but also calculate variances between these scores. A parametric test should only be chosen if the experimental data meet three assumptions: that the data be naturally numerical using interval or ratio scales, of normal distribution, and of homogeneous variance.
Using Burns’ diagram above, this study measures the differences between 2 groups (2D and 3D) where the population was randomly selected, so the data fall into 2 independent groups. From Burns’ diagram[26] this research study should use either the parametric independent t-test or the non-parametric Mann-Whitney U test. If the data meet the three parametric test assumptions then the parametric test should be chosen over the non-parametric test.
Within the data analysis for significance, it was decided that the significant difference would be based upon a 2-tailed hypothesis. Due to the lack of research performed in this area, the researcher was not able to conclude strongly that either method would produce a significant difference in test results.
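The two test statistics named above can be illustrated with a minimal, dependency-free sketch. Note this is only illustrative: in the study the tests (and their p-values) were produced by STATGRAPHICS and the Excel Data Analysis add-in, so only the raw statistics are computed here:

```python
from math import sqrt
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled-variance t statistic for two independent groups (as used for H02)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic with mid-ranks for ties (as used for H01)."""
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2   # average of 1-based ranks i+1 .. j
        i = j
    r_a = sum(ranks[x] for x in a)           # rank sum of the first group
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)   # report the smaller U
```

The significance decision would then compare the statistic (via a statistics package) against the two-tailed critical value at α = 0.05.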
=====3.9.4.2.1 Assumptions of Parametric Testing: Tests Performed=====
Prior to testing for significance, the results data was tested to see whether it met the assumptions of parametric testing provided by Burns above, namely that the data be: 1) naturally numerical using interval or ratio scales, 2) of normal distribution, and 3) of homogeneous variance.
The first assumption is that the data be naturally numeric. The data type of the pre and post quiz scores was interval scaled therefore the first assumption of parametric testing was met.
The second assumption is that the data is normally distributed. There are various methods with which to test for normal distribution (Fife-Schaw, 2007). This research adopted the following approach:
*The measures of skewness and kurtosis can be used to test for normal distribution. If either skewness or kurtosis departs significantly from zero[27] (beyond ±2 standard errors of skewness (ses) or of kurtosis (sek)), then the results cannot be assumed to be normally distributed (Brown, 1997).
*The D’Agostino-Pearson K2 omnibus test (K2) was chosen as the statistical test of whether the data deviates significantly from a normal distribution. This test is regarded as the most powerful Gaussian test, as it is not affected by duplicate values in the data (which the result data contains) (Fife-Schaw, 2007; Graphpad, 2009).
The third assumption is that the data between the two groups do not vary significantly. Levene’s F-test was applied to measure whether the variance between the groups differed significantly (NIST, 2006).
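The ±2 standard-error screen on skewness and kurtosis can be sketched as follows. This is a simplified illustration using the common large-sample approximations ses ≈ √(6/n) and sek ≈ √(24/n); the K2 omnibus test and Levene's F-test were run in the statistics packages and are not reproduced here:

```python
from math import sqrt
from statistics import mean, pstdev

def skewness(x):
    """Sample skewness: mean of cubed standardised deviations."""
    m, s, n = mean(x), pstdev(x), len(x)
    return sum(((v - m) / s) ** 3 for v in x) / n

def excess_kurtosis(x):
    """Excess kurtosis: mean of fourth-power standardised deviations minus 3."""
    m, s, n = mean(x), pstdev(x), len(x)
    return sum(((v - m) / s) ** 4 for v in x) / n - 3

def passes_normality_screen(x):
    """The ±2 standard-error rule of thumb described above, using the
    large-sample approximations for ses and sek (an assumption; exact
    small-sample formulas differ slightly)."""
    n = len(x)
    ses, sek = sqrt(6 / n), sqrt(24 / n)
    return abs(skewness(x)) <= 2 * ses and abs(excess_kurtosis(x)) <= 2 * sek
```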
====3.9.4.3 Other Tests Performed====
Other tests performed that will be discussed in the results section are statistical descriptive analysis for each group using both the pre-post quiz data and the survey data. These tests will provide further insight into the research results and the differences obtained in this experiment.
The Likert scales in the survey were treated as ordinal data; because the scale points could not be assumed to be evenly spaced, responses were collapsed into 3 groups: positive, neutral and negative (Jacoby & Matell, 1971).
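The collapse of the 5-point scales into three ordinal groups can be expressed as a simple mapping. The orientation assumed here (1-2 negative, 3 neutral, 4-5 positive) is an illustration, not stated explicitly above:

```python
def collapse_likert(score):
    """Map a 5-point Likert response (1-5) onto the three ordinal groups.
    Assumes 1-2 = negative, 3 = neutral, 4-5 = positive."""
    if score <= 2:
        return "negative"
    if score == 3:
        return "neutral"
    return "positive"
```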
===3.9.5 Qualitative Analysis Methods===
Qualitative research methods are a natural fit with an interpretive research approach. Qualitative research is a process of interpreting the data by applying ‘logic in practice’ using a non-linear research path. The emphasis is on constructionism, using inductive analysis for the generation of theory. The data used in analysis is soft data; the researcher analyses the data looking at the ways in which an individual interprets their social construct (Neuman, 2006).
Unlike quantitative analysis, no hypothesis is formed at the start of a study. It is an inductive process where the main concern of the researcher is to generate and develop new theories based upon interpretation. Qualitative research analysis relies heavily on the application of phenomenological sociology, hermeneutics and ethnography in order to interpret their findings (A. Lee, 1991).
In this study, qualitative methods were used to gain an understanding of a participant’s overall learning experience in a virtual world, as well as any differences they may have experienced between the alternative delivery methods of the lecture.
====3.9.5.1 Analysis Data====
The data in this research study that was analysed using qualitative analysis methods was the post-survey data (see Appendix H: Survey). This survey contained open questions enabling a participant to provide feedback on their learning experience, the instructional delivery and any technical constraints they may have had during their lecture delivery. The technical difficulty question was straightforward: if they answered yes then they could comment on what went wrong. The questions asked in order to understand their perception of virtual world learning and the delivery method were as follows:
*'''DELIVERY METHOD ASSESSMENT''' (Q 25) General Comment:
*'''VIRTUAL WORLD LEARNING EXPERIENCE'''
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
Qualitative analysis of these questions required the application of the hermeneutic method, which is the process of analysing verbal conversations, text, journals, pictures etc, looking for meaning in the detail and as a whole to reveal the deeper meaning contained within, ie ‘reading between the lines’ in order to extract meaning. Within this method a hermeneutic circle is performed, where interpretation takes an iterative approach: interpreting the data as a whole and in its parts, then reinterpreting in light of the new understanding (Klein & Myers, 1999; A. Lee, 1991).
====3.9.5.2 Coding====
Using the hermeneutic method on the survey data as described above, the data was coded into patterns, themes and contextual structures in light of the research problem and literature review. Coding generally takes 3 stages in qualitative analysis: open, axial and selective coding (Neuman, 2006).
Open coding was performed as a preliminary analysis to develop codes that condense the data into specific meanings and themes. This process was performed several times both before and after the quantitative analysis.
Axial coding was then performed to develop possible relationships between the coded data.
Selective coding, the final stage, was performed to extract major themes and general theory that emerged which will be discussed in the Results section of this paper.
==3.10 Summary==
In this chapter the researcher has discussed the research design that required the construction of the virtual learning campus and learning materials. The instrument used to collect the data was a pre and post quiz and survey.
This research will be applying theory triangulation, which represents a mixed method approach to the analysis. An operational hypothesis was drawn from the research problem that will be assessed using quantitative analysis methods. Qualitative analysis will be used in order to gain a better understanding of the quantitative results as well as the learning experience of participants.
The next chapter discusses the results of this research project using the methods that were discussed under Analysis Method in this chapter.
</div >
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 3: Research Design=
==3.1 Introduction==
This study measured learning outcomes through the achievement scores of a multiple choice post-quiz at two cognitive levels of Bloom’s Factual Knowledge: Remember and Understand for two different lectures delivered in the virtual world of Second Life.
This chapter will discuss the research design of this study along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods used in producing the results discussed in the next chapter of this thesis.
==3.2 Problem Statement and Research Hypothesis==
The problem of this study was to determine the difference in learning outcomes between two randomly selected groups that attended the same lecture in a 3D virtual world using differing methods of delivery. Group 1 received a 2D slide show with pre-recorded audio in a lecture room setting (emulating a classical lecture in a 3D virtual world space) and group 2 received the same lecture augmented with appropriate 3D objects in an appropriately modified virtual 3D theatre space. Both were delivered in the virtual world of Second Life. The research investigated whether a difference in the delivery method (the addition of interactive “life size” 3D models), where instructional design, timing, content and environmental setup are otherwise the same, produces different learning outcomes with respect to the two identified cognitive levels.
To carry out this study the following hypothesis was formed:
Learning outcomes are not independent of the delivery methods in a virtual world, in that varying the delivery method between 2D and a 3D presentation results in a significant difference in the post-quiz achievement scores of a participant in relation to Bloom’s cognitive process of factual knowledge of ‘remember’ and ‘understand’.
==3.3 Research Rationale==
In spite of the extensive current efforts of many institutions and educators to establish a virtual presence and adapt delivery of courses to this newly emerged generation of mass market virtual worlds, little (if any) formal and structured analysis has been undertaken by researchers to assess the comparative cognitive affordances of learning delivery methods in these spaces.
An anecdotal assessment of delivery methods in university campuses and training rooms within Second Life showed a preponderance of virtualised traditional lecture rooms, complete with front-facing chairs, projection screens and even lecterns. The implication is that a significant volume of current delivery in Second Life (at least) merely virtualises traditional real world delivery. The question arises, however: in a space capable of delivering highly interactive collaborative learning and 3D simulations, potentially for lower input costs than would be required in the real world, is the traditional chalk and talk approach the most appropriate?
There was a significant incentive to distinguish the effectiveness of these two learning approaches. The comparative costs, expertise and effort required to utilise a set of pre-prepared real world slides with audio and present them in a virtualised classroom are essentially the same as those required for real world delivery (at least in the Second Life virtual world). Even with Second Life’s simplified and efficient 3D building editor and scripting language, a 2D slide show presentation with audio narration can be imported or streamed into Second Life and presented for a fraction of the preparation time, sophistication of learning materials, and skill set required where an interactive 3D simulation is built and utilised. Distinguishing between these two learning approaches would assist educators in determining whether the extra cost (of construction) and time (in design and preparation) involved in the development of 3D instructional learning materials is worth the effort because it produces better learning outcomes.
==3.4 Research Method==
===3.4.1 Theoretical Assumptions ===
Previous research into education within virtual worlds can be divided into two main areas: research that assesses the affordances of the environment as an educational tool (Dickey, 2003; Gonzalez, 2007; Martinez et al., 2007; Youngblut, 1998), and research that compares virtual world learning outcomes to those of real world learning methods (Kurt, Mike, Jamillah, & Thomas, 2004; Mania & Chalmers, 2001; Youngblut, 1998).
The former usually takes an interpretive research approach; the latter, a positivist research approach. From a purist’s standpoint, these two approaches are at opposite ends of the scale in their theoretical assumptions. This, in turn, affects how the researcher approaches, conducts and analyses their research data.
In an interpretive research approach the researcher adopts an investigative approach to analyse and ‘understand’ the conceptual meaning of the social construct. This approach to research is one of total immersion, experiencing the research from an insider’s view, where the researcher plays a social actor within the social construct (Klein & Myers, 1999; A. Lee, 1991; Orlikowski & Baroudi, 1991).
A positivist analyst takes a very different approach from that of an interpretive analyst. Positivist research follows principles such as (A. Lee, 1991; Orlikowski & Baroudi, 1991):
*the researcher is independent of the research,
*the researcher is inquiry value-free,
*a linear cause-effect relationship exists and is verified and tested by deductive logic and analysis methods.
Without passing judgement on the merits of either approach, this research has generally taken a positivist research approach using a classical experimental design method (Neuman, 2006). A direct consequence of this decision was that a ‘laboratory’ first had to be created in the virtual world that could enable the delivery of the lectures under controlled experimental conditions. We will explore this laboratory in this chapter.
===3.4.2 Research Study===
A virtual learning campus was set up in the virtual world of Second Life where participants were randomly allocated into two groups to participate in either:
#2D Slide Show Lecture: Slides and audio in a class room setting
#3D Augmented Lecture: Slides and audio augmented by ‘life size’ 3D objects and simulations, in a class room setting
By ‘life size’ we mean that the 3D objects appeared larger than the participant in the 3D space, and large enough for the participant’s avatar to walk on and around them. The lecture was on ‘The Physics of Bridges’ and was presented using identical subject matter, audio and slide times, the only differences being the presence of the 3D objects and the minimum necessary environmental changes to allow avatar interaction with the 3D objects.
Before the lecture participants were given a pre-quiz, and afterwards a post-quiz to test the learning outcomes of each group with respect to Bloom’s factual knowledge of ‘remember’ and ‘understand’, together with a survey collecting qualitative data about the experience. Both groups received identical pre and post quizzes and surveys. The questions in the pre-quiz differed from those in the post-quiz. A summary of the experiment design is provided below (Table 7):
{|border="1"
|'''Research Design Summary'''
|-
|'''Research Design'''
|'''Classical Experimental Design'''
|-
|'''Sampling'''
|Random without replacement (i.e. Avatars were prevented from taking either quiz more than once).
111+ selections.
|-
|'''Random Assignment'''
|Yes
|-
|'''Independent Variable'''
|Learning Delivery Method
Virtual 2D Slide Show Lecture vs. 3D Augmented Lecture
◦ Course Delivered: The Physics of Bridges
◦ Time 20 minutes for both
|-
|'''Groups'''
|2D Group: 2D Slide Show Lecture
3D Group: 3D Augmented Lecture
|-
|'''Dependant Variable'''
|Cognitive Learning Outcome
Post-test achievement scores measuring the lecture objectives of Bloom’s:
◦ Factual knowledge of Remember Cognitive process
◦ Factual knowledge of Understand Cognitive process
|-
|'''Instrument'''
|'''Pre-Test'''
Test current factual knowledge of topic before course delivery
'''Post-Test & Survey'''
Retest factual knowledge for ‘Remember’ & ‘Understand’ after course delivery
Survey of participant’s learning experience
|}
'''Table 7. Research Design Summary'''
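The random assignment without replacement summarised in Table 7 can be sketched as follows. This is a hypothetical illustration only; in the study the allocation and the once-only quiz rule were enforced by the scripted Second Life environment, not by this code:

```python
import random

_assigned = {}   # avatar id -> group; also blocks repeat participation

def assign_group(avatar_id):
    """Assign a new avatar randomly to the 2D or 3D group. A returning
    avatar keeps its original group, so no one can take either quiz more
    than once (random assignment, sampling without replacement)."""
    if avatar_id not in _assigned:
        _assigned[avatar_id] = random.choice(["2D", "3D"])
    return _assigned[avatar_id]
```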
==3.5 Research Population==
The population and frame for inclusion was the total number of residents in Second Life, which consists of 16,318,063 users (1,344,215 logons in the previous 60 days), with demographics of 59% male and 41% female; the largest group, at 35%, is aged between 24-34 years, and the entire population is over 18 years of age. The majority of Second Life residents, 39%, live in the United States of America. Appendix I: Second Life Demographic provides a more detailed breakdown of these statistics (Linden Lab, 2008b).
It was decided to use only current in-world users (rather than recruiting new users to participate in-world) to avoid the weaknesses of previous research studies discussed in Chapter 2, where participants were learning a new toolset rather than the learning material presented (Martinez et al., 2007; Youngblut, 1998).
==3.6 The Virtual Learning Environment==
The virtual world Second Life was chosen over other virtual world environments in light of the discussion provided in Chapter 2 concerning Architecture Considerations and the review of Educational Research in virtual worlds. Second Life currently provides many benefits over other virtual worlds for open access to learning due to the capabilities of its toolset that simplify the rapid import of 2D materials and construction of 3D interactive environments. Second Life has powerful scripting and modelling tools that come standard as a part of its interface that provide a vast range of approaches with which to create the virtual learning environment. Lastly, as noted in Chapter 2, the take up by tertiary institutions of Second Life for education purposes worldwide numbers in the hundreds.
In the section that follows we will discuss the virtual world learning environment (the ‘laboratory’) that was built in Second Life in order to conduct this research experiment.
====3.6.1.1 Building the Virtual Learning Environment: Design Considerations====
There are two general approaches to the design layout of a virtual space (Corbit, 2002). One separates places within the space into discrete areas between which users move using portals (known as teleports in Second Life); the other is more representative of the real world, where users navigate to different places using such things as pathways between buildings or rooms within the virtual space. Both of these constructs offer advantages depending upon the circumstances. The former method of using portals offers a simpler way for the user to navigate the space easily and quickly, whereas if one wanted to assist the user in obtaining a sense of placement, presence and collaboration within the virtual environment then the latter may be more appropriate (S. Clark & Maher, 2006), as the user is encouraged to explore the virtual space in order to form a relationship with the environment (Corbit, 2002).
This virtual learning environment was built largely around the first approach where a series of rooms were built and participants navigated the environment using teleports in order to complete the appropriate stage within the experiment, but with the rooms themselves emulating a real world environment with chairs for sitting, lecture rooms with projection screens and foyers, teller machines for delivering participant fees, etc.
The use of teleports not only offered simplicity of navigation but also enabled the control required over the steps in the process for the experimental design approach taken in this research. Teleports allowed the environment to be automated so that participants could operate it without intervention or assistance from the researcher, upholding the positivist research approach of remaining unbiased and independent of the experiment under study (Orlikowski & Baroudi, 1991). Furthermore, the use of distinct, purpose-specific and separate rooms connected only by teleports was also indicated for technical and security reasons that will be discussed later in the System Controls section below.
A further consideration was given to the construction of the rooms themselves, including the look and content of each room. Bellman and Landauer (2000) believe that a key question in the implementation and application of a virtual world is to decide what reality should be made virtual by incorporating “functional realism”. Functional realism is purpose-built realism that maintains sufficient realism for the illusionary effects of presence and immersion but does not pursue the goal of absolute realism. Absolute realism in most instances, they believe, only distracts from the real objectives of the environment. For example, implementing window scenes in a university lecture room that show passing cars, jets flying through the sky and construction on a neighbouring building may be a realistic scene in the real world, but in a virtual world it would only distract the students from their learning objectives. Applying functional realism not only provides focussed design but also enhances the virtual world by including only key components and excluding any adversities that may be disruptive in the real world. [24]
This virtual learning environment was based upon a real world setting, using a theatre theme, with rooms that were self-contained and included only the essential elements needed to complete the learning task at hand.
====3.6.1.2 Virtual Learning Campus Overview====
The overall virtual learning campus consisted of a Welcome Room, a Pre-Quiz Room, 6 Lecture Room complexes (containing an arrival foyer, theatre, exit foyer and theatre control room), a Post-Quiz/Survey Room and a central Control Room; Figure 49 provides an overview of the process flow of the virtual learning campus.
The starting area for all visitors was the Welcome Room, where the participant could read about the research, the rules, authority and standards, etc. From this room a participant could take a teleport to the Pre-Quiz Room. On arrival, avatar identity keys were automatically recorded.
After completing the pre-quiz in the pre-quiz room participants were paid a minimum amount for attending and they could decide either to leave the research project, or continue onto a lecture. On commencement and completion of quizzes avatar identity keys were recorded.
There were 6 Lecture Rooms divided evenly into 2 types of lecture – a 2D audio-slide show presentation or a 3D augmented audio-slide show presentation. Each lecture theatre could hold up to 18 seated participants, and lectures were timed to commence every 10 minutes in pairs.
If participants continued onto the lecture their completion of the pre-quiz was automatically verified and they were randomly allocated on teleportation to either one of these lectures. Once the lecture completed they could then teleport to the Post-Quiz/Survey Room to be tested on their learning outcome and surveyed on their experience and finally they were paid for their participation in the research project.
This entire process took approximately 30 minutes for the participant to complete.
The entire virtual campus took approximately one man-month to build [25], with the 3D presentation content taking approximately 3 times longer to build than the 2D presentation content (approximately 3 days for the 3D presentation and 1 day for the 2D presentation).
In the section that follows a detailed view of each room is provided along with the function of the room.
Figure 49. Environment: Virtual Learning Campus Flow Chart
====3.6.1.3 Welcome Room====
The Welcome Room provided the entry point into the virtual campus (Figure 50). Here participants were provided with information about the research and, should they decide to participate, what would be expected of them within the research experiment.
This room contained four large wall signs and four smaller floor signs in each corner.
The wall signs provided the following information (see Appendix C: Welcome Room Information Content for more details):
*The aim of this research;
*What can I expect?
*How long will it take?
*Payment?
The floor signs provided the participant with a web link to the research explanatory statement (see Appendix C: Welcome Room Information Content for more details) and a virtual note card providing them with the welcome room information that they could hold in their inventory to take away from the research location.
If the participant decided to take part in this research then they took a teleport (the gold rings partially visible in the image) from this room, which transported them to the Pre-Quiz Room.
Figure 50. Environment: Welcome Room
====3.6.1.4 Pre-Quiz Room====
The Pre-Quiz Room was a common area where all participants were given a pre-quiz to establish their level of knowledge of the subject prior to the delivery of the lecture.
A participant would be teleported from the Welcome Room into the centre of this room and instructed by the large sign on the main wall to be seated in order to take the pre-quiz (Figure 51, Left). Once seated, a web-link would be provided to them to take the pre-quiz. This web-link connected to a survey engine that operated over the internet and stored details in a database outside the Second Life environment. The survey database recorded the participant’s answers to the pre-quiz along with other details such as the participant’s avatar key (the unique identifier of the Second Life user). The avatar’s key was used to verify that the participant had completed the pre-quiz prior to payment and teleportation into the next scheduled lecture.
Once the participant had completed the pre-quiz they could collect part payment for completing this stage of the research from an ATM along the back wall (Figure 51, Right) and then use a teleport, situated next to the ATMs, to transport them to the next scheduled lecture. The lectures were scheduled every 10 minutes for both the 2D and 3D presentations. A blue beam displayed on the teleport showed the participant that the next lecture was available to teleport to. Timers beside the ATMs showed the time until the next lecture. On teleportation a participant was randomly allocated to either a 2D or 3D lecture.
Figure 51. Environment: Left Pre-Quiz Room, Right ATMs & Teleporters
====3.6.1.5 Lecture Theatre====
The participant would arrive in the foyer of the lecture theatre where they were instructed via floor signs to switch on their audio and video controls and to be seated inside the lecture theatre (Figure 52).
The slide presentation was delivered using streaming in-world web-technology, where PowerPoint slides were constructed, saved as html files and streamed into Second Life using an in-world constructed HTML viewer. Audio streams were also recorded and synchronised to each of these slides throughout the presentation.
Figure 52. Environment: Lecture Theatre
Both the 2D and 3D theatres were set up essentially the same and delivered within the same time frame: approximately 20 minutes of instructional delivery. The only variable that changed was the presence or absence of 3D objects in the delivery method of the presentation.
In the 2D presentation a participant would remain seated to watch and listen to the 2D lecture throughout (Figure 53, Left). In the 3D presentation the participant would commence the session seated, but on commencement of the lecture a room would open up behind the front 2D presentation screen and the participant would be automatically transported in their chair and dropped into the 3D presentation space to view the 3D slide show in a specially designed 3D viewing area (Figure 53, Right). Participants in the 3D presentation were then left standing in this space and were able to move around in the 3D space if they wished. In the 2D mode the front-facing projection screen displayed the slides, while in the 3D space the 2D slides were projected on the walls around the 3D viewing space, with the 3D objects created and removed automatically, in sync with the slides and audio, in the centre of (and around) the 3D viewing space.
Figure 53. Environment: Learning Delivery Method
Careful consideration was given so that both groups obtained the same instructional information. The only exception was that the pictures contained in the 2D slide presentation were translated into 3D form and either rotated and animated, or positioned for ‘walking on’ or exploration in front of the participant.
Once the lecture had completed the participants for both groups were instructed to move to the exit foyer and teleport to the next phase of the research project via teleports located in the exit foyer. The entrance to the exit foyer and the teleports therein were only switched on after the last slide had been delivered (Figure 54).
Figure 54. Environment: Lecture Room Teleporters
Each lecture theatre contained a hidden control room and a separate bank of teleports (restricted to the administration avatar) connecting it to the central control room, allowing independent movement and invisible monitoring of the lecture rooms; it also contained the control system and communication devices for that lecture theatre.
====3.6.1.6 Post-Quiz Room====
The final phase for the participants was to take a post-quiz and survey. This room operated the same as the Pre-Quiz room.
The Post-Quiz Room was a common room into which all participants were teleported into the middle of the room after their lecture. A participant was instructed via the main sign on the wall to be seated in order to take the quiz and survey (Figure 55). Once they had completed the quiz and survey they were instructed to go to the back of the room to collect their final payment for their research participation from an ATM. The survey engine noted that they had completed the survey and only then allowed payment.
Figure 55. Environment: Post-Quiz Room
====3.6.1.7 Control Room====
At the centre of this system was a Control Room. The Control Room was responsible for managing the 28 public teleports as well as containing separate teleports for members of the administration team. At any time a member of the administration team could bypass the controls contained within the system and move to any room within the environment (Figure 56).
Figure 56. Environment: Control Room
====3.6.1.8 System Controls====
In the design considerations section it was mentioned that this environment was best set up using separated rooms with teleports to navigate the system. This decision increased security as well as allowing the teleports to operate as control gates.
Within Second Life a user can employ what is called roaming camera mode to look around without moving their avatar. A person can use this mode to view other locations within a definable distance and even operate controls like the sit command, creating a security risk that a participant could bypass steps within the research process. Having rooms located far from each other at random distances in 3D space and connected only by teleports prevented this from occurring. Even if a participant found a way of teleporting to a location that was out of sequence with the research process (e.g. they had visited before and created a landmark to teleport back to, or had given this landmark to another avatar), the teleports, seats and ATMs all communicated with a central off-world web site (containing the survey engine) which verified the proper completion of each required step and acted as a gatekeeper to stop a person from breaching the system.
At every stage, when an avatar used a teleport, a quiz seat or an ATM, these devices connected to an external database that would look up the avatar’s key to ensure the appropriate stage had been completed before allowing access. For example, a participant had to have completed the pre-quiz survey prior to entry into a lecture theatre; if they tried to breach this sequence the teleport reported an error message and would not allow them to teleport. As a further example, a participant was required to complete an entire lecture before taking the post-survey: the exit Lecture Room teleports were disabled until the lecture finished, after which a participant could take a teleport to the Post-Quiz Room, and in doing so was flagged as having completed the lecture, enabling them to take the post-quiz and survey.
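The gatekeeping logic just described can be sketched as follows. This is a minimal illustration only, not the actual implementation (which lived in Second Life scripts talking to an external web database); the stage names and the in-memory store are assumptions standing in for that off-world database keyed on avatar keys.

```python
# Illustrative sketch of the stage gatekeeper: access to a stage is
# granted only if every earlier stage has been recorded as complete.
STAGES = ["pre_quiz", "lecture", "post_quiz", "final_payment"]

# completed[avatar_key] is the set of stages that avatar has finished
# (stands in for the external survey-engine database).
completed = {}

def may_enter(avatar_key, stage):
    """Allow entry to `stage` only if all earlier stages are complete."""
    done = completed.get(avatar_key, set())
    required = STAGES[:STAGES.index(stage)]
    return all(s in done for s in required)

def mark_complete(avatar_key, stage):
    """Record that this avatar has finished a stage."""
    completed.setdefault(avatar_key, set()).add(stage)
```

A teleport, seat or ATM would call `may_enter` before acting and `mark_complete` afterwards; the same check also makes out-of-sequence landmarks harmless.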
Other controls were built into the ATM machines so that a participant could only be paid once, and into the survey system so that a participant could only undertake the research once (they were allowed to attend again if they chose; they just could not take the quizzes or survey again).
This design of the virtual learning campus allowed for an automated system that could operate 24 hours a day for multiple participants. It was also fault tolerant: in the event of a SIM crash the entire system could automatically restart and recover correctly unattended.
Lastly, because it was driven entirely by a specially designed control language held in replaceable text files, the design made for an easily modifiable and manageable system requiring minimal scripting changes to introduce new rules. An entirely new lecture and testing set could be loaded into the system in less than 5 minutes (once the content had been written or built).
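The actual control language is not reproduced in this thesis, so the following is only a hypothetical sketch of the general idea: a replaceable text file of simple directives parsed when a lecture set is loaded. The directive names and file format here are invented for illustration.

```python
# Hypothetical sketch of parsing a replaceable control file; the real
# control language used by the campus scripts may differ entirely.
def parse_control_file(text):
    """Parse lines of the form 'directive: value', skipping blanks and
    '#' comments, returning directives in file order."""
    directives = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        directives.append((key.strip(), value.strip()))
    return directives

# Invented example of what such a file might contain.
sample = """
# lecture schedule control file
interval: 10 minutes
slide: bridges_01.html
audio: bridges_01.mp3
"""
```

Swapping in a new lecture set then amounts to replacing the text file and re-reading it, with no script changes.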
==3.7 Learning Task Design==
===3.7.1 Subject Matter===
The subject matter chosen was the Physics of Bridges. This topic was chosen both for its familiarity (everyone knows what a bridge is) and its obscurity (people generally do not know as much as they might initially believe about the detail of how bridges work), and because the content could be easily adapted for both forms of delivery. The level of difficulty was aimed at approximately a year 12 high school student. The content was mainly sourced from academic and government information web-sites. Appendix D: Instruction: Slide Presentation contains the delivered presentation along with a reference list on its last page.
===3.7.2 Instruction Delivery===
A virtual learning system, no matter how good its delivery design, is only as good as the instructional design of the learning task. As discussed in Chapter 2, Learning and Instructional Design Theory, the instruction methods used to assist in the delivery and assessment of the course were Gagne’s Nine Events of Instruction and the Cognitive domain of the revised Bloom’s Taxonomy.
This section provides details of how both the 2D and 3D materials were constructed; for the differences between the two deliveries refer to section 3.6.1.5 Lecture Theatre in this chapter.
====3.7.2.1 Gagne====
The theme of this lecture was how the various bridge designs handled the key forces of tension and compression. A variety of bridge designs were explored with respect to these two forces.
Gagne’s 9 stages of instructional delivery were provided for as follows:
#Gaining Attention (Reception): This stage grabs the attention of the participant. A slide show containing a variety of ‘best of’ bridge structures, accompanied by music, was shown while participants arrived in the theatre prior to the commencement of the formal presentation, to motivate and excite the participant for the lecture to follow (see Appendix E: Pre-Presentation Slide Show).
#Informing Learners of the Objective (Expectancy): This stage informs participants what new knowledge they can expect to learn. The 2nd slide contained the objectives of the presentation (see Appendix D: Instruction: Slide Presentation). These objectives were also written using the revised Bloom’s taxonomy.
#Stimulating Recall of Prior Learning (Retrieval): This stage attempts to frame the new information in terms of the participant’s current knowledge so that they can relate better to the newly presented information. Every slide that introduced a new bridge structure contained a picture of a real bridge so that the participant could relate real-life experience to the new information being presented.
#Presenting the Stimulus (Selective Perception): This is where the learning (or new knowledge) was presented: each bridge form was presented with an overview, its relationship to tension and compression, and the limitations of the bridge design. The information was chunked into a logical structure. Stages (4) and (5) are interrelated, together aiming to provide the participant with new knowledge in a logical and meaningful context.
#Providing Learning Guidance (Semantic Encoding): This stage presents the information in a deeper form, allowing the participant to encode the new information into their long-term memory. Here the information was presented in different forms using both pictures (and, in the case of the 3D group, 3D models) and text. Furthermore, three different concepts (i.e. overview, tension and compression, and limitations) were provided for each bridge to enhance a participant’s breadth of knowledge of that bridge. The bridges were also presented to the participant from simplest to most complex so that they could gradually understand the concept of a bridge structure and its relationship to tension and compression.
#Eliciting Performance (Responding): This stage of instructional delivery allows the participant to ‘do something’ with their new knowledge. Given there were only 20 minutes to deliver the material, this stage was not performed. If Bloom’s cognitive process of Apply had been tested then inclusion of this stage would have been imperative. The researcher recognises that although time was a limitation of this study, ultimately this stage would have been interesting to include.
#Providing Feedback (Reinforcement): This stage of instructional delivery is usually performed with feedback from the lecturer to confirm that the participant understood the new knowledge presented. Due to time constraints and the type of research method used (experimental design), direct lecturer interaction was not an option, so in order to hold the experiment constant for all participants summary slides were used. These provided a form of feedback by presenting the information again, but in a different form to that initially used in the main body of the presentation, forcing some degree of participant thought to process the summary information (and of course the post-quiz served a similar purpose, but without the learning confirmation).
#Assessing Performance (Retrieval): In this research study this was the final stage of delivery, where participants were given the post-quiz to assess their learning outcome.
#Enhancing Retention and Transfer (Generalisation): The final stage of Gagne’s instructional delivery is to generalise and transfer the information delivered in light of new information that may be presented in future. This step was partly performed at stage (7), where the information was summarised. In normal (i.e. non-experimental) situations, transfer would allow the student to take away their new knowledge, i.e. the lecture materials. Although this is possible in Second Life, because the experiment had to be controlled the lecture materials were not transferred to participants.
====3.7.2.2 Bloom’s====
The revised Bloom’s taxonomy (Anderson et al., 2001) provided the overall learning objectives of the course content (and therefore the new knowledge presented throughout the instruction) as well as the way in which participants were tested on this new knowledge. The two learning outcomes this research assessed were the ‘Remember’ and ‘Understand’ cognitive processes of the Factual Knowledge dimension of the revised Bloom’s taxonomy, as can be seen in Figure 57 below.
Figure 57. The Revised Bloom’s Taxonomy Table: Tested Process Dimensions
Bloom defines ‘remember’ of Factual Knowledge as recall of knowledge presented to participants in the learning instruction – the basic elements of the subject matter. For example, the bridge types presented were Beam, Truss, Arch and Suspension; recalling the names of these bridges is the cognitive process of ‘remember’ applied to Factual information. When tested, participants either remember or they do not.
Bloom defines ‘understand’ of Factual Knowledge as a means of promoting retention by linking the new knowledge with the participant’s prior knowledge, so that they can do more than just remember: they can use the new knowledge in other forms such as interpreting, comparing and explaining. This is not necessarily presented to them directly in instruction, but rather assimilated from the whole of the information presented. For example, participants were tested on hybrid bridges but were never instructed on these forms of bridge in the lecture; a participant should have been able to construct this knowledge from the basic bridge forms presented in the lecture.
In applying the revised Bloom’s taxonomy the researcher identified the learning objectives, defined these objectives in terms of one of Bloom’s 19 specific cognitive processes (noting that each cognitive category contains specific cognitive processes), built these objectives into the instruction, and then assessed them.
==3.8 Instrumentation==
The instrument used to assess a participant’s learning outcome as well as their overall learning experience was in survey form. Below is the survey structure that was used in this research study (Table 8):
{|align="center"
|-bgcolor="lightgrey"
|Pre-Survey
|''Total questions: 8''
|-bgcolor=white
|Pre-Quiz
|8 multi-choice questions
|-bgcolor=lightgrey
|Post-Survey
|''Total questions: 32''
|-bgcolor=white
|Post-Quiz
|20 multi-choice questions
|-bgcolor=lightgrey
|Survey
|2 content knowledge: self-assessment of pre & post knowledge
3 Delivery Method: self-assessment of quality of learning materials
2 Technology: Assess technical difficulties
5 Learning Experience: Assess satisfaction level in learning method
|}
<p align="center">
'''''Table 8. Pre and Post Survey Structure'''''
</p>
The survey system used to record the data was a web-based survey system, as discussed in The Virtual Learning Environment section of this chapter (Figure 58).
Figure 58. Web-Based Survey System
===3.8.1 Pre and Post Quiz===
A total of 28 quiz questions were prepared, divided into the 2 groups of Bloom’s Factual Knowledge ‘remember’ and ‘understand’ (see section 3.7.2.2 Bloom’s for more details of the difference between these two cognitive dimensions). 8 of these questions were given to all participants as a pre-quiz and 20 in the post-quiz.
A participant was never tested on the same question twice or provided the answers for either quiz, reducing the likelihood that a participant would learn from quiz questions rather than the lecture material presented. The pre-quiz was delivered to the participant prior to the lecture (see Appendix F: Pre-Quiz) and the post-quiz and survey was delivered directly after the lecture (see Appendix G: Post Quiz & Appendix H: Survey).
In order to construct these questions, use was made of the sample objectives and corresponding assessment examples that Bloom’s Taxonomy provides within each cognitive category. The multiple choice questions used both direct selection and cueing formats: a direct selection question proposes a statement or asks a question and provides the participant with a list from which to select an answer, while a cueing question provides a sentence containing a blank space for which the respondent selects an appropriate response from a multiple choice list.
===3.8.2 Survey: Learning Experience===
After a participant completed the post-quiz, a brief survey of 12 questions was given (questions 21-32) to assess a participant’s own perception of their prior and post content knowledge, the delivery method, technological constraints and their learning experience. These comprised 6 Likert-scale questions (5-point scales), 1 yes/no question on technical difficulty along with a general comment to explain any difficulty, 2 questions listing the positive and negative experiences they perceived of the technology as a learning tool, and 2 open-ended questions for general comments about the course delivery and the participant’s overall experience (see Appendix H: Survey Q21-32).
The survey was implemented to assist the researcher in determining whether there may have been any adverse effects on a participant’s performance in completing the knowledge quiz, as well as to assist the researcher in gaining a better understanding of the overall research results and participants’ relative experiences across the two delivery methods.
===3.8.3 Instrument Reliability===
Kuder-Richardson Formula 20 (KR-20) was the selected reliability test for the pre- and post-test quiz questions due to the design of the instrument. As the pre-test and post-test were not equivalent, KR-20 was used because it measures internal consistency on a single set of survey results (Burns, 2000; Siegle, 2008). KR-20 is widely accepted, by those educators and psychologists who support the instrument reliability concept, as a satisfactory method of measuring the reliability of a testing instrument (Yount, 2006).
To test the Likert scales in the post-survey, Cronbach’s Alpha was used to measure reliability. Cronbach’s Alpha is similar in concept to KR-20 but allows testing of data across scales, whereas KR-20 requires the data to be dichotomously scored (although both produce the same results on dichotomously scored data).
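As a minimal sketch (not the spreadsheets actually used in this study), the two statistics can be computed as follows, here using population variances; on dichotomous 0/1 data the two formulas agree, as noted above.

```python
# Illustrative implementations of Cronbach's Alpha and KR-20.
# item_scores is a list of items, each a list of per-respondent scores.
def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum(variance(it) for it in item_scores)
                            / variance(totals))

def kr20(item_scores):
    """KR-20 for dichotomous (0/1) items: replaces item variance with p*q."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]
    pq = sum((sum(it) / len(it)) * (1 - sum(it) / len(it))
             for it in item_scores)
    return (k / (k - 1)) * (1 - pq / variance(totals))
```

For 0/1 items the population item variance is exactly p(1-p), which is why the two statistics coincide on dichotomously scored data.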
The overall results of the instrument reliability tests were low. The problem was that there were too few questions within each group to obtain a true value for the reliability test. The results, along with a discussion of the instrument reliability tests performed, are provided in Appendix L: Instrument Reliability Results.
==3.9 Analysis Method==
===3.9.1 Introduction===
As discussed in the Research Method section of this chapter, this research has generally taken a positivist research approach as opposed to an interpretive research approach. A purist approach to research from either side can lead to weaknesses when interpreting results (Onwuegbuzie, 2002; Richardson, 2005; Walsham, 1995; Weber, 2004). Critics argue:
*Positivist: that this method can lead to narrow, non-innovative and repetitive thought, while failing to understand that the selection of data, the method of collection, form of quantification and the tests applied are not themselves objective processes.
*Interpretive: that this method can lead to unresolvable propositions, contextually isolated understandings, non-reproducible observations and ideas sustainable only in the mind of the interpreter.
Thus, in order to minimise the weaknesses of positivist research the researcher has used triangulation. Triangulation in research can be applied in many forms; in this research it has been used as ‘theory triangulation’ as described by Denzin (1978), which involves using multiple theoretical perspectives to interpret the data results. Unlike the Denzin perspective, where triangulation is used as a means of avoiding bias and validating the data results, this researcher’s reasoning for applying theory triangulation is to gain a greater understanding of the results by adding range and depth to the quantitative data analysis (Fielding & Fielding, 1986; Olsen, 2004).
===3.9.2 Data Processing===
The survey data, together with participants’ survey start and finish times, was extracted from the database and processed in Microsoft Excel spreadsheets. After conducting a small number of trials with independent trusted respondents, not otherwise part of the assessment, to determine the minimum practical time for completing the quiz and survey, it was decided that a cut-off time of 2 minutes would be used as the basis for filtering post-surveys. Post-quiz/surveys completed under this time were examined and removed. This time was based upon how long it took the researcher and the trusted respondents to read and respond to only the quiz questions at a medium speed. Each survey was also reviewed for possible fake entry of quiz answers, e.g. selecting the first or last value for every question. By extracting these surveys it was hoped to lessen the chance of erroneous results.
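The filtering rules just described can be sketched as a simple predicate. The field names here are assumptions, not the actual database schema, and the straight-line check stands in for only one of the patterns examined during the manual review.

```python
# Illustrative filter for post-quiz/survey records: drop responses
# completed under the 2-minute cut-off, and flag straight-line answering
# (the same option chosen for every question) for review.
def keep_response(record, min_seconds=120):
    """Return True if the record passes both screening rules."""
    duration = record["finish"] - record["start"]  # times in seconds
    if duration < min_seconds:
        return False
    answers = record["answers"]
    if len(set(answers)) == 1:  # e.g. first/last option every time
        return False
    return True
```

In the study itself flagged surveys were examined before removal rather than dropped automatically; a predicate like this would only surface candidates.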
There was no missing data in the survey because every field, except the general comments and technical comment questions, was a required response field before a quiz/survey was accepted by the system and saved to the database.
===3.9.3 Software===
The software used to analyse the data was the Microsoft Excel 2007 Data Analysis add-in; STATGRAPHICS Centurion (2009), a statistical software package similar to SPSS; StatCal, an Excel spreadsheet developed by David Moriarty (2008) for testing normal distribution; and Del Siegle’s (2008) Excel spreadsheet for testing instrument reliability.
===3.9.4 Quantitative Analysis Methods===
Quantitative research methods are a natural fit with the principles of positivist research, which requires a scientific approach to analysis. Quantitative research can be described as a process of presenting and interpreting data that follows a linear research path using logical models to measure variables and test a hypothesis that is directly linked to a cause. Analysis is performed using hard data, (i.e. numerical) but soft data (i.e. non-numerical) may also be assessed by transforming natural phenomena into numbers using quantification techniques (Neuman, 2006).
====3.9.4.1 Operational Hypotheses====
Quantitative analysis methods require the research hypothesis (given earlier in the Problem Statement and Research Hypothesis section) to be re-expressed as operational hypotheses, so that each forms a tighter, more testable statement (Burns, 2000). From the research hypothesis the following operational hypotheses were formed:
#(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
#(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Statistical testing is performed on a hypothesis of no difference, known as the null hypothesis (H0). Since H1 and H2 are expressed in terms of differences, the corresponding null hypotheses H01 and H02 were tested for no significant difference. If a test of H01 or H02 yields a statistically significant result, that null hypothesis is rejected: the observed difference is unlikely to be mere random variation due to sampling error, and the conclusions drawn from the sampled participants in the experiment can be generalised to the entire research population (Burns, 2000).
The experimental data used to test the above hypotheses were the participants’ multiple-choice post-quiz achievement scores. The multiple-choice answers were dichotomously scored (i.e. 0 for a wrong answer, 1 for a correct answer) and analysed as discussed next.
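Dichotomous scoring of this kind is straightforward to express in code; a minimal Python sketch (the answer key and responses below are invented for illustration):

```python
# Hypothetical answer key for a 4-question multiple-choice quiz.
answer_key = ["b", "d", "a", "c"]

def score_quiz(responses, key=answer_key):
    """Return per-question 0/1 scores and the achievement total."""
    scores = [1 if given == correct else 0 for given, correct in zip(responses, key)]
    return scores, sum(scores)

scores, total = score_quiz(["b", "a", "a", "c"])
print(scores, total)  # [1, 0, 1, 1] 3
```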
====3.9.4.2 Statistical Significance====
This study used the non-parametric Mann-Whitney U Test to test H01 and the parametric t-test for independent groups to test H02. All significance tests used a critical alpha level (α) of 0.05, i.e. a result was treated as significant only if the probability (p) of obtaining it by chance under the null hypothesis was less than 5%. The selection of each test was based upon the way the hypothesis was formed and whether the data met the assumptions for parametric testing.
Burns (2000, p. 155) provides a flowchart to assist in the selection of a statistical test. As can be seen in Figure 59, the highlighted statistical tests are the test options available in this research study. The test selection is based upon a combination of the data type, hypothesis statement and the sample population selection.
Figure 59. Significance Test Selection
Burns (2000) states that if a researcher has a choice between a parametric and a non-parametric test, it is best to select the parametric test. Parametric tests are more powerful at detecting significant differences because they take into account not only the rank order of scores but also the variances between them. A parametric test should only be chosen if the experimental data meet three assumptions: the data must be naturally numerical (interval or ratio scaled), normally distributed, and exhibit homogeneity of variance.
Using Burns’ diagram above, this study measures the differences between two groups (2D and 3D) where the population was randomly selected, so the data form two independent groups. From Burns’ diagram[26] this research study should use either the parametric independent t-test or the non-parametric Mann-Whitney U test; if the data meet the three parametric assumptions then the parametric test should be chosen over the non-parametric test.
Within the significance analysis, it was decided that significant difference would be assessed using a 2-tailed hypothesis. Given the lack of prior research in this area, the researcher could not predict with confidence which delivery method would produce the higher test results, so a directional hypothesis was not justified.
=====3.9.4.2.1 Assumptions of Parametric Testing: Tests Performed=====
Prior to testing for significance, the results data were tested against the assumptions of parametric testing given by Burns above: that the data be 1) naturally numerical, using interval or ratio scales, 2) normally distributed and 3) of homogeneous variance.
The first assumption is that the data be naturally numeric. The pre and post quiz scores were interval scaled, therefore the first assumption of parametric testing was met.
The second assumption is that the data be normally distributed. There are various methods for testing normal distribution (Fife-Schaw, 2007). This research adopted the following approach:
*The measures of skewness and kurtosis can be used to test for normal distribution. If either skewness or kurtosis departs significantly from zero[27] (beyond ±2 standard errors of skewness (ses) or kurtosis (sek)), then the results cannot be assumed to be normally distributed (Brown, 1997).
*The D’Agostino-Pearson K2 omnibus test (K2) was chosen as the statistical test of whether the data deviate significantly from normal distribution. It is regarded as the most powerful test of Gaussian distribution and is not affected by duplicate values in the data (which the result data contain) (Fife-Schaw, 2007; Graphpad, 2009).
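These two normality checks can be sketched in Python (the study used Excel/StatCal rather than Python): scipy’s `normaltest` implements the D’Agostino-Pearson K2 omnibus test, and the ±2 standard-error rule uses the common approximations ses ≈ √(6/n) and sek ≈ √(24/n). The scores below are simulated, not the study data:

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(11, 2.4, size=56).round()  # simulated quiz totals, n = 56

n = len(scores)
ses = math.sqrt(6 / n)    # approximate standard error of skewness
sek = math.sqrt(24 / n)   # approximate standard error of kurtosis

skew = stats.skew(scores)
kurt = stats.kurtosis(scores)     # excess kurtosis: 0 for a normal distribution
skew_ok = abs(skew) <= 2 * ses    # the +/- 2 standard-error rule from the text
kurt_ok = abs(kurt) <= 2 * sek

k2, p = stats.normaltest(scores)  # D'Agostino-Pearson K2 omnibus test
normal = p >= 0.05                # p < 0.05 => significant departure from normality
print(skew_ok, kurt_ok, normal)
```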
The third assumption is that the variance of the two groups does not differ significantly. Levene’s F-test was applied to measure whether the variance between the groups differed significantly (NIST, 2006).
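A sketch of Levene’s test for this third assumption, using scipy with simulated group scores (the means and standard deviations are borrowed from Table 10 purely to make the simulation plausible):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 2D/3D post-quiz totals (illustrative, not the study data).
group_2d = rng.normal(10.98, 2.468, size=55)
group_3d = rng.normal(11.36, 2.347, size=56)

# Levene's F-test for homogeneity of variance between the two groups.
f_stat, p_value = stats.levene(group_2d, group_3d)
equal_variance = p_value >= 0.05  # fail to reject => variances treated as equal
print(round(f_stat, 3), round(p_value, 3), equal_variance)
```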
====3.9.4.3 Other Tests Performed====
Other tests performed that will be discussed in the results section are statistical descriptive analysis for each group using both the pre-post quiz data and the survey data. These tests will provide further insight into the research results and the differences obtained in this experiment.
The Likert scales in the survey were treated as ordinal data; because the response points could not be assumed to be equally spaced, responses were collapsed into 3 groups: positive, neutral and negative (Jacoby & Matell, 1971).
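One plausible way to collapse a 5-point Likert item into these three groups (the exact 5-to-3 mapping used in the study is an assumption here, as are the responses):

```python
from collections import Counter

def collapse_likert(value):
    """Map a 1-5 Likert response onto negative / neutral / positive."""
    if value <= 2:
        return "negative"
    if value == 3:
        return "neutral"
    return "positive"

responses = [5, 4, 3, 2, 4, 1, 5]  # illustrative responses on a 1-5 scale
counts = Counter(collapse_likert(v) for v in responses)
print(dict(counts))  # {'positive': 4, 'neutral': 1, 'negative': 2}
```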
===3.9.5 Qualitative Analysis Methods===
Qualitative research methods are a natural fit with an interpretive research approach. Qualitative research is a process of interpreting data by applying ‘logic in practice’ along a non-linear research path. The emphasis is on constructionism, using inductive analysis to generate theory. The data used in analysis are soft data; the researcher analyses the data looking at the ways in which individuals interpret their social constructs (Neuman, 2006).
Unlike quantitative analysis, no hypothesis is formed at the start of a study. It is an inductive process where the main concern of the researcher is to generate and develop new theories based upon interpretation. Qualitative research analysis relies heavily on the application of phenomenological sociology, hermeneutics and ethnography in order to interpret their findings (A. Lee, 1991).
In this study, qualitative methods were used to gain an understanding of participants’ overall experience of learning in a virtual world, as well as any differences they may have experienced between the alternative methods of delivering the lecture.
====3.9.5.1 Analysis Data====
The data in this research study that was analysed using qualitative methods was the post-survey data (see Appendix H: Survey). The survey contained open questions enabling a participant to provide feedback on their learning experience, the instructional delivery and any technical constraints encountered during the lecture delivery. The technical difficulty question was straightforward: if a participant answered yes, they could comment on what went wrong. The questions asked in order to understand their perception of virtual world learning and the delivery method were as follows:
*'''DELIVERY METHOD ASSESSMENT''' (Q 25) General Comment:
*'''VIRTUAL WORLD LEARNING EXPERIENCE'''
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
Qualitative analysis of these questions required the application of the hermeneutic method: the process of analysing verbal conversations, text, journals, pictures, etc., looking for meaning both in the detail and as a whole to reveal the deeper meaning contained within, i.e. ‘reading between the lines’. Within this method a hermeneutic circle is performed, where interpretation takes an iterative approach: interpreting the whole and its parts, then reinterpreting in light of the new understanding (Klein & Myers, 1999; A. Lee, 1991).
====3.9.5.2 Coding====
Using the hermeneutic method on the survey data as described above, the data were coded into patterns, themes and contextual structures in light of the research problem and literature review. Coding generally proceeds in 3 stages in qualitative analysis: open, axial and selective coding (Neuman, 2006).
Open coding was performed as a preliminary analysis to develop codes that condense the data into specific meanings and themes. This process was performed several times, both before and after the quantitative analysis.
Axial coding was then performed to develop possible relationships between the coded data.
Selective coding, the final stage, was performed to extract major themes and general theory that emerged which will be discussed in the Results section of this paper.
==3.10 Summary==
In this chapter the researcher has discussed the research design, which required the construction of the virtual learning campus and learning materials. The instruments used to collect the data were a pre and post quiz and a survey.
This research will be applying theory triangulation, which represents a mixed method approach to the analysis. An operational hypothesis was drawn from the research problem that will be assessed using quantitative analysis methods. Qualitative analysis will be used in order to gain a better understanding of the quantitative results as well as the learning experience of participants.
The next chapter discusses the results of this research project using the methods that were discussed under Analysis Method in this chapter.
</div >
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
Real Learning in Virtual Worlds - CHAPTER 4: Results.
2010-08-05T13:14:05Z
Bishopj
<div class="nonumtoc">
=CHAPTER 4: Results=
==4.1 Introduction==
In this chapter the researcher provides the results of the experiment using the methods discussed in the previous chapter. The results presented are the quantitative and qualitative results for the virtual world learning experiment conducted in Second Life between two groups of participants, the 2D group and the 3D group, which undertook different methods of delivery of a lecture on The Physics of Bridges.
A quantitative analysis was performed on the pre and post quiz scores of the two groups. This analysis includes the statistical tests for significant difference on the pre-quiz results and for the hypotheses of this experiment, which measured the differences in learning outcomes between the 2D and 3D groups for Bloom’s cognitive processes of ‘remember’ and ‘understand’.
The findings for the post-quiz survey Likert scale questions, which measured the two groups’ responses about their learning experience, are also presented.
A qualitative analysis was performed on the post-survey open questions of both groups, where the data was coded into themes in order to gain a further understanding of the quantitative results as well as the learning experiences of the two groups.
==4.2 Quantitative Analysis Results: Achievement Scores==
In this section the researcher provides the quantitative results for the pre and post quiz scores and the significance results for the operational hypotheses, concluding with the quantitative results of the post survey.
===4.2.1 Overview of Results===
The results of the pre and post quiz totals can be seen below in the charted box plots (Figure 60). The left box plot is a traditional box plot, which consolidates the information into a single graph.[28] The right plot is the same plot expressed in percentiles in order to display the variance from pre- to post-quiz scores. The number of questions in the pre-quiz was 8 and in the post-quiz 20, each quiz being evenly divided between Bloom’s cognitive processes of ‘remember’ and ‘understand’.
Figure 60. Results: Pre & Post Quiz- Box Plot
===4.2.2 Pre-Quiz Results===
Table 9 provides the overall results of the 2D and 3D groups for the pre-quiz achievement scores. The pass rate is a measure of how many participants scored 50% or higher on their quiz.[29] The pre-quiz was an 8 question quiz that tested a participant’s prior knowledge before the lecture.
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"|80%
|align="right"|35%
|align="right"|51%
|align="right"|66%
|align="right"|52%
|align="right"|55%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"|2.44
|align="right"|1.25
|align="right"|3.69
|align="right"|2.071
|align="right"|1.60
|align="right"|3.68
|-
|'''Median Score'''
|align="right"|2
|align="right"|1
|align="right"|4
|align="right"|2
|align="right"|2
|align="right"|4
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|3
|align="right"|1
|align="right"|3
|align="right"|3
|align="right"|1
|align="right"|4
|-
|'''Minimum Score'''
|align="right"|0
|align="right"|0
|align="right"|1
|align="right"|0
|align="right"|0
|align="right"|0
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|4
|align="right"|3
|align="right"|6
|align="right"|4
|align="right"|4
|align="right"|7
|-
|'''Standard Deviation'''
|align="right"|1.032
|align="right"|0.775
|align="right"|1.372
|align="right"|1.263
|align="right"|0.867
|align="right"|1.479
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.138
|align="right"|0.261
|align="right"|0.007
|align="right"| -0.195
|align="right"|0.351
|align="right"| -0.188
|-
|'''Kurtosis'''
|align="right"| -0.730
|align="right"| -0.150
|align="right"| -0.718
|align="right"| -1.008
|align="right"|0.037
|align="right"| -0.278
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 9. Pre-Quiz Descriptive Statistical Results'''''</p>
Figure 61 provides an inverse cumulative normal distribution graph for the total pre-quiz scores. This graph shows what percentage (y-axis) of participants scored under a nominated score (x-axis). For example, 50% of participants in both the 2D and 3D groups scored under 4 in the pre-quiz total. As can be seen, the 2D and 3D pre-quiz total score distributions were the same. For a detailed analysis of each of Bloom’s cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
Figure 61. Results: Pre-Quiz Totals - Inverse Cumulative Normal Distribution Graph
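The percentage-under-a-score reading of the inverse cumulative graph can be reproduced directly from raw totals; a minimal sketch with invented scores:

```python
def percent_under(scores, nominated):
    """Percentage of participants who scored under the nominated score."""
    return 100.0 * sum(1 for s in scores if s < nominated) / len(scores)

# Hypothetical pre-quiz totals for one group (illustrative only).
pre_quiz_totals = [1, 2, 3, 3, 4, 4, 4, 5, 5, 6]
print(percent_under(pre_quiz_totals, 4))  # 40.0 -> 40% scored under 4
```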
Figure 62 provides a histogram and normal distribution curve of the total pre-quiz achievement scores. Both graphs provide frequency distributions in different forms. The histogram gives the number of participants (frequency, y-axis) who scored between 1 and 8 (x-axis). The Gaussian distribution (or bell curve) gives the probability (y-axis) that a participant would score between 1 and 8 (x-axis), based upon the average and standard deviation of the scores within each group. For a detailed analysis of each of Bloom’s cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
Figure 62. Results: Pre-Quiz Totals - Histogram & Bell Curve
====4.2.2.1 Pre-Quiz Significant Results====
An independent t-test was performed on the pre-quiz total scores to ensure that the groups did not differ significantly in their prior knowledge of the lecture content on ‘The Physics of Bridges’; they did not (t = -0.367, df = 119, two-tailed p = 0.714, α = 0.05).
Although no significant difference was found between the two groups pre-quiz total scores, the scores for each of the Bloom’s cognitive processes of ‘remember’ and ‘understand’ did differ significantly between the groups. The 2D pre-quiz scored significantly higher than the 3D scores for the Bloom’s cognitive process of ‘remember’ (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05). The 3D pre-quiz scored significantly higher than the 2D pre-quiz scores for the Bloom’s cognitive process of ‘understand’ (t = -3.03167, df = 109, one-tailed p = 0.0014, α = 0.05). Appendix J: Pre-Quiz Score Results provides a detailed analysis of these results.
===4.2.3 Post-Quiz Results===
Table 10 provides the results of the 2D and 3D groups for the post-quiz achievement scores. The post-quiz contained 20 questions, divided evenly between Bloom’s factual cognitive processes of ‘remember’ and ‘understand’ with 10 questions in each. As with the pre-quiz, the pass rate is a measure of how many participants scored 50% or higher on their quiz.
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"| 85%
|align="right"|35%
|align="right"|67%
|align="right"|93%
|align="right"|36%
|align="right"|77%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"| 7
|align="right"|3.98
|align="right"|10.98
|align="right"|7.32
|align="right"|4.04
|align="right"|11.36
|-
|'''Median Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-
|'''Minimum Score'''
|align="right"|3
|align="right"|0
|align="right"|5
|align="right"|3
|align="right"|1
|align="right"|6
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|10
|align="right"|8
|align="right"|17
|align="right"|10
|align="right"|8
|align="right"|17
|-
|'''Standard Deviation'''
|align="right"|1.846
|align="right"|1.484
|align="right"|2.468
|align="right"|1.597
|align="right"|1.464
|align="right"|2.347
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.642
|align="right"|0.068
|align="right"|0.052
|align="right"| -0.941
|align="right"|0.332
|align="right"| -0.229
|-
|'''Kurtosis'''
|align="right"| -0.729
|align="right"| 0.558
|align="right"| -0.152
|align="right"| 0.672
|align="right"|0.010
|align="right"| 0.265
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 10. Post-Quiz Descriptive Statistical Results'''''</p>
Figure 63 provides an inverse cumulative normal distribution graph for the total post-quiz scores. As explained above, this graph displays what percentage of participants scored under a nominated score.
Figure 63. Results: Post-Quiz Totals Inverse - Cumulative Normal Distribution Graph
Figure 64 provides a histogram and normal distribution curve of the post-quiz scores. As with the pre-quiz graphs above, these graphs show the frequency distribution of both the 2D and 3D groups.
Figure 64. Results: Post-Quiz Totals - Histogram & Bell Curve
====4.2.3.1 Post-Quiz Significant Results====
An independent t-test performed on the post-quiz total scores of the 2D and 3D groups showed no significant difference between the results of the two groups (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05). Appendix K: Post-Quiz Score Results provides a detailed analysis of these results.
The next section provides an analysis of the results for each of Bloom’s cognitive processes, testing for significant difference between the post-quiz results under each hypothesis.
===4.2.4 Hypotheses Results===
As stated in Chapter 3 the operational hypotheses for this research study were as follows:
:(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
:(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
This section will discuss test results for no significant difference using the null hypothesis of H01 and H02.
====4.2.4.1 Hypothesis One: Post-Quiz Remember====
Figure 65 provides the histogram and density trace graphs for the post-quiz results, where 10 questions for Bloom’s cognitive process of ‘remember’ were given to both the 2D and 3D groups. As discussed in the previous section, the histogram provides the frequency distribution of participants’ scores. The density trace graph is provided instead of the normal distribution graph because these scores were not normally distributed; it offers an alternative view of frequency similar to the histogram.
Figure 65. Results: Post-Quiz Remember - Histogram & Density Traces
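A density trace is essentially a smoothed frequency estimate; a Gaussian kernel density estimate (an assumption about the smoothing used by the plotting software) produces a comparable curve. A sketch with simulated ‘remember’ scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated 'remember' scores in the 3..10 range (illustrative, not the study data).
remember_scores = rng.integers(3, 11, size=56).astype(float)

# A Gaussian kernel density estimate gives a smooth density-trace-like curve.
kde = stats.gaussian_kde(remember_scores)
grid = np.linspace(0.0, 12.0, 121)
density = kde(grid)
integral = float(np.sum(density) * (grid[1] - grid[0]))  # should be close to 1
print(round(integral, 3))
```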
'''Hypothesis H<sub>01</sub>'''
The null hypothesis tested H<sub>01</sub>:
:That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>01</sub> was tested using the non-parametric Mann-Whitney U Test because the post-quiz ‘remember’ scores did not meet the assumptions for parametric testing, which require the scores to be normally distributed. The 3D scores failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05), therefore the scores from this group deviate significantly from normal distribution. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>01</sub>'''
Using the following Mann-Whitney U Test formula to find U:
:<math>U = n_1 n_2 + \frac{n_1(n_1+1)}{2} - R_1</math>
Where:
:<math>n_1</math> = number of group 1 subjects
:<math>n_2</math> = number of group 2 subjects
:<math>R_1</math> = rank total for the group with the smallest rank sum
:<math>W</math> = the critical value of <math>U_1</math>
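The U computation can be cross-checked numerically; note that software may report the complementary U value (the two U values always sum to n<sub>1</sub>·n<sub>2</sub>). The score lists below are invented for illustration:

```python
from scipy import stats

# Illustrative score lists (not the study data).
g1 = [5, 7, 8, 8, 9, 6]
g2 = [6, 7, 9, 10, 10, 8, 9]

n1, n2 = len(g1), len(g2)
ranks = stats.rankdata(g1 + g2)            # joint ranking; ties get average ranks
r1 = ranks[:n1].sum()                      # rank total for group 1

# Textbook form: U = n1*n2 + n1*(n1 + 1)/2 - R1.
u_from_r1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
u_other = n1 * n2 - u_from_r1              # the two U values always sum to n1*n2

u_scipy, p = stats.mannwhitneyu(g1, g2, alternative="two-sided")
print(u_from_r1, u_other, u_scipy, round(p, 4))
```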
'''Results H<sub>01</sub>'''
When applied, the Mann-Whitney U Test found no significant difference between the 2D and 3D post-quiz ‘remember’ scores: the average ranked scores were 2D = 53.9364 and 3D = 58.0268, giving U = 1653.5, W = 113.5, 2-tailed p = 0.493107; thus we do not reject the null hypothesis at α = 0.05. (Note: there is a distinct “observable” difference between these two groups, just not a statistically significant one. This is explored in the next chapter.)
====4.2.4.2 Hypothesis Two: Post-Quiz Understand====
Figure 66 provides the histogram and normal distribution curve for Bloom’s cognitive ‘understand’ results of the 2D and 3D groups for the post-quiz achievement scores. As discussed above these graphs display the frequency distribution of both the 2D and 3D groups where 10 questions were given in the post-quiz for Bloom’s cognitive process of ‘understand’.
Figure 66. Results: Post-Quiz Understand - Histogram & Bell Curve
'''Hypothesis H<sub>02</sub>'''
The null hypothesis tested H<sub>02</sub>:
:That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>02</sub> was tested using the parametric independent t-test of equal variance as the results met the assumptions for parametric testing. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>02</sub>'''
Using the following t-test formula (pooled variance) to find t:
:<math>t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}</math>
Where:
:<math>\bar{x}_1</math> = the mean of group 1
:<math>\bar{x}_2</math> = the mean of group 2
:<math>n_1</math> = number of group 1 subjects
:<math>n_2</math> = number of group 2 subjects
:<math>s_1</math> = the standard deviation of group 1
:<math>s_2</math> = the standard deviation of group 2
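The pooled-variance t computation can be verified against scipy’s equal-variance independent t-test; the score lists below are invented for illustration:

```python
import math
import numpy as np
from scipy import stats

# Illustrative 'understand' score samples (not the study data).
g1 = [3, 4, 4, 5, 2, 6, 4, 3]
g2 = [4, 5, 3, 4, 6, 4, 5, 3]

n1, n2 = len(g1), len(g2)
m1, m2 = np.mean(g1), np.mean(g2)
s1, s2 = np.std(g1, ddof=1), np.std(g2, ddof=1)  # sample standard deviations

# Pooled-variance (equal variance) independent t statistic.
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t_manual = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

t_scipy, p = stats.ttest_ind(g1, g2, equal_var=True)
print(round(t_manual, 4), round(t_scipy, 4), round(p, 4))
```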
'''Results H<sub>02</sub>'''
The results of an independent t-test found no significant difference (t = -0.1926, df = 109, two-tailed p = 0.8477, α = 0.05) between the results of the 2D (x1 = 3.982, s1 = 1.484) and 3D (x2 = 4.036, s2 = 1.464) post-quiz ‘understand’ scores, thus we do not reject the null hypothesis.
===4.2.5 Survey Results: Likert Scales===
Table 11 displays the percentages for the post survey results, divided into content knowledge, delivery method and technology. The content knowledge and delivery method questions were standardised onto a 3-point scale for analysis.
{|align=center width=80%
|-
|
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightyellow
|
|align=center |'''''Content Knowledge'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|-
|21
| My level of understanding of the topic PRIOR to subject delivery.
|align="right"| 89%
|align="right"| 9%
|align="right"| 2%
|align="right"| 91%
|align="right"| 5%
|align="right"| 4%
|-bgcolor=lightgrey
|22
| My level of understanding of the topic AFTER subject delivery.
|align="right"| 22%
|align="right"| 51%
|align="right"| 27%
|align="right"| 23%
|align="right"| 50%
|align="right"| 27%
|-bgcolor=lightyellow
|
|align=center |'''''Delivery Method & Learning Experience'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|-
|23
|Outline of subject material was clear and informative.
|align="right"| 98%
|align="right"| 2%
|align="right"| 0%
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|-bgcolor=lightgrey
|24
|The lecture was detailed enough to provide an understanding of subject matter.
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|align="right"| 93%
|align="right"| 7%
|align="right"| 0%
|-
|28
|I found the in-world experience offered me a better learning experience than my usual methods of learning
|align="right"| 74%
|align="right"| 22%
|align="right"| 4%
|align="right"| 73%
|align="right"| 25%
|align="right"| 2%
|-bgcolor=lightgrey
|29
|I found the subject material to be appropriate to virtual world learning
|align="right"| 84%
|align="right"| 13%
|align="right"| 3%
|align="right"| 79%
|align="right"| 18%
|align="right"| 3%
|-bgcolor=lightyellow
|
|align=center |'''''Technology'''''
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|-
|26
|During the course I experienced technical difficulties with the environment
|align="right"|91%
|align="right"|9%
|align="right"|
|align="right"|93%
|align="right"|7%
|align="right"|
|}
<p align=center >'''''Table 11. Survey Likert Scales Results'''''</p>
The content knowledge questions addressed participants’ subjective impression of their knowledge before and after attending the presentation. Both groups perceived an increase in their understanding of the subject matter after the lecture. The delivery method questions measured subjective satisfaction with the 2D or 3D virtual world delivery method (as appropriate); both groups indicated very high levels of satisfaction. The technology question assessed whether a participant experienced any technological constraints on their reception of the learning material; as the results above show, only a few participants experienced technological problems.
==4.3 Qualitative Analysis Results==
===4.3.1 Introduction===
Qualitative analysis was performed using the methods discussed in the 3.9.5 Qualitative Analysis Methods section of this thesis on the 2D and 3D groups’ open question set (questions 25, 30, 31 and 32) contained in the post survey. In this section we present a brief overview of how the analysis was performed and the major themes that emerged from the qualitative analysis results. Interpretation of these results is discussed in the next chapter of this thesis.
===4.3.2 Analysis Approach===
Hermeneutic analysis of the post survey open questions was performed using an iterative approach in order to code the data into contextual structures and common themes among the 2D and 3D post survey responses. The data was first condensed into 2D and 3D categories and then into individual question categories. Open coding uncovered general themes within each question; to further assist this stage of coding, each participant’s entire set of question responses was read as a whole to reveal the full context of their individual responses. Axial coding was performed once a generic set of themes emerged, forming relationships across the entire set of 2D and 3D group question responses. Open coding and axial coding took several iterations before selective coding was performed, revealing 4 major themes along with sub-themes, as shown in Table 12 below. These themes and their meanings are discussed below.
===4.3.3 Themes of the Open Survey Questions===
The open questions were as follows:
*DELIVERY METHOD ASSESSMENT (Q 25) General Comment:
*VIRTUAL WORLD LEARNING EXPERIENCE
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
{| align="center" style="border-collapse: collapse; border-width: 1px; border-style: solid; border-color: #000"
|-
|align=center|'''''Theme'''''
|align=center|'''''Sub-Theme'''''
|-bgcolor=lightgrey
|'''Virtual World Learning'''
|
|-
|'''Virtual Learning Campus'''
|
|-bgcolor=lightgrey
|'''Lecture Delivery'''
|
*Format
*Information Content
*Learning
*Facets of 3D Learning
*Instruction
*Focus
*Navigation
*Technical Constraints
|-
|'''Survey Instrument'''
|
|}
<p align=center >'''''Table 12. Qualitative Analysis Results: Themes'''''</p>
The above themes were classified as follows:
*'''Virtual World Learning''': This category included aspects of a participant’s experience of using the virtual world as a learning platform. The comments in this category were not specific to the experiment but rather to the virtual world medium as a learning tool: the general features and characteristics of the virtual world that a participant liked or disliked about this method of learning, and their overall impression of using the virtual world as a learning platform.
*'''Virtual Learning Campus''': This category included comments about the virtual learning campus experience. These comments related specifically to the set-up and operation of the entire virtual learning environment within the virtual world.
*'''Lecture Delivery''': This was the major category, containing comments about a participant’s lecture experience that were specific to the lecture delivery treatment they received. This category contained the following sub-themes:
**'''Format''': The style and layout of the presentation, how the information was presented.
**'''Information Content''': The depth and breadth of information content presented about the topic (The Physics of Bridges).
**'''Learning''': The aspects of obtaining new knowledge.
**'''Facets of 3D Learning''': This theme contained only comments from the 3D group, their perception of the use of 3D models as a learning tool in delivery.
**'''Instruction''': The method by which knowledge was transferred from the instructor to the learner, the interface between the presentation and the learner.
**'''Focus''': The observations affecting attention and the temporal experience of a participant within the virtual world whilst they were learning.
**'''Navigation''': Comments that related to the controlling their avatar within the lecture theater.
**'''Technical Constraints''': Comments that related to technical constraints that a participant experienced during the lecture.
*'''Survey Instrument''': This category included comments that related to the pre or post quiz of the participant.
Figure 67 provides a diagram of the relationship of these themes in the context of the qualitative analysis performed on the survey results. In the next chapter we will discuss the results of this qualitative analysis.
Figure 67. Qualitative Analysis: Relationship of Vitrual World Learning Themes
==4.4 Summary==
In this chapter we presented the quantitative and qualitative results of the research study.
A quantitative analysis was performed for both the 2D and 3D groups where the number of participants was 55 and 56 respectively. The pass rate for both the 2D and 3D groups’ pre-quiz scores was 51% and 55% respectively.
A significance test performed on the results of the total pre-quiz showed no significant difference between the scores of each group. Significance tests performed on Bloom’s cognitive processes of ‘remember’ and ‘understand’ showed a significant difference between the groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ and the 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’.
The post-quiz pass rate for both the 2D and 3D groups’ total post-quiz score was 67% and 77% respectively. In spite of this, the results for the significance tests performed for Bloom’s cognitive process of ‘remember’ and ‘understand’ for the hypothesis showed no significance differences between the 2D and the 3D groups learning outcomes.
The post-survey results for the Likert scale questions was presented that provided the results dividend into positive, neutral and negative percentiles for both of the groups.
A qualitative analysis performed on the open-questions contained in the post survey revealed 4 major themes in the survey comments of both groups combined, these themes were:
#Virtual world learning environment,
#Virtual learning campus,
#Lecture delivery and
#Survey components of the research study.
A definition for each of these themes was provided along with a relationship diagram.
The next chapter we discuss the results presented in this chapter.
</div>
<div class="nonumtoc">
=CHAPTER 4: Results=
==4.1 Introduction==
In this chapter the researcher provides the results of the experiment, obtained using the methods discussed in the previous chapter. The results presented are the quantitative and qualitative results for the virtual world learning experiment conducted in Second Life between two groups of participants, the 2D group and the 3D group, who received different methods of delivery of a lecture on The Physics of Bridges.
A quantitative analysis was performed on the pre and post quiz scores of the two groups. This analysis includes the statistical test for significant difference on the pre-quiz results and the tests of this experiment's hypotheses, which measured the differences in learning outcome between the 2D and 3D groups for Bloom's cognitive processes of 'remember' and 'understand'.
The findings for the post-quiz Likert scale questions, which measured the responses to the two groups' learning experience survey, are also presented.
A qualitative analysis was performed on the post-survey open questions of both groups, where the data was coded into themes in order to gain a further understanding of the quantitative results as well as the learning experiences of the two groups.
==4.2 Quantitative Analysis Results: Achievement Scores==
In this section the researcher provides the quantitative results for the pre and post quiz scores and the significance results for the operational hypotheses, and concludes with the quantitative results of the post survey.
===4.2.1 Overview of Results===
The results of the pre and post quiz totals can be seen below in the charted box plots (Figure 60). The left box plot is a traditional box plot, which consolidates information into a single graph.[28] The right plot is the same plot but referenced in percentiles in order to display the variance from pre to post quiz scores. The number of questions in the pre-quiz was 8 and in the post-quiz 20, each quiz being evenly divided between Bloom's cognitive processes of 'remember' and 'understand'.
Figure 60. Results: Pre & Post Quiz- Box Plot
===4.2.2 Pre-Quiz Results===
Table 9 provides the overall results of the 2D and 3D groups for the pre-quiz achievement scores. The pass rate is a measure of how many participants scored 50% or higher on their quiz.[29] The pre-quiz was an 8 question quiz that tested the prior knowledge of a participant before the lecture.
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"|80%
|align="right"|35%
|align="right"|51%
|align="right"|66%
|align="right"|52%
|align="right"|55%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"|2.44
|align="right"|1.25
|align="right"|3.69
|align="right"|2.071
|align="right"|1.60
|align="right"|3.68
|-
|'''Median Score'''
|align="right"|2
|align="right"|1
|align="right"|4
|align="right"|2
|align="right"|2
|align="right"|4
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|3
|align="right"|1
|align="right"|3
|align="right"|3
|align="right"|1
|align="right"|4
|-
|'''Minimum Score'''
|align="right"|0
|align="right"|0
|align="right"|1
|align="right"|0
|align="right"|0
|align="right"|0
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|4
|align="right"|3
|align="right"|6
|align="right"|4
|align="right"|4
|align="right"|7
|-
|'''Standard Deviation'''
|align="right"|1.032
|align="right"|0.775
|align="right"|1.372
|align="right"|1.263
|align="right"|0.867
|align="right"|1.479
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.138
|align="right"|0.261
|align="right"|0.007
|align="right"| -0.195
|align="right"|0.351
|align="right"| -0.188
|-
|'''Kurtosis'''
|align="right"| -0.730
|align="right"| -0.150
|align="right"| -0.718
|align="right"| -1.008
|align="right"|0.037
|align="right"| -0.278
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 9. Pre-Quiz Descriptive Statistical Results'''''</p>
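The descriptive measures reported in Table 9 can be recomputed with Python's standard library alone. The sketch below is illustrative only: the score list is invented (it is not the study data), and the skewness and excess-kurtosis formulas are the common sample-adjusted estimators, which is an assumption about how the thesis values were produced.

```python
import math
import statistics

def describe(scores, pass_mark):
    """Descriptive statistics of the kind reported in Table 9 (illustrative)."""
    n = len(scores)
    mean = statistics.mean(scores)
    # Population central moments, used by the adjusted skew/kurtosis estimators
    m2 = sum((x - mean) ** 2 for x in scores) / n
    m3 = sum((x - mean) ** 3 for x in scores) / n
    m4 = sum((x - mean) ** 4 for x in scores) / n
    # Adjusted Fisher-Pearson skewness and excess kurtosis (sample estimators)
    skew = (math.sqrt(n * (n - 1)) / (n - 2)) * m3 / m2 ** 1.5
    kurt = ((n - 1) / ((n - 2) * (n - 3))) * ((n + 1) * (m4 / m2 ** 2 - 3) + 6)
    return {
        "pass_rate": sum(s >= pass_mark for s in scores) / n,
        "mean": mean,
        "median": statistics.median(scores),
        "mode": statistics.mode(scores),
        "min": min(scores),
        "max": max(scores),
        "sd": statistics.stdev(scores),  # sample standard deviation
        "skewness": skew,
        "kurtosis": kurt,  # excess kurtosis: negative => platykurtic
        "n": n,
    }

# Invented pre-quiz totals for an 8-question quiz; pass mark 4 is the 50% line
sample = [1, 2, 3, 3, 3, 4, 4, 5, 5, 6]
stats = describe(sample, pass_mark=4)
```

On the hypothetical list above, the pass rate is 0.5 and the mode is 3, illustrating how a pass rate and mode can move independently of the mean, as discussed for the two groups below.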
Figure 61 provides an inverse cumulative normal distribution graph for the total pre-quiz scores. This graph tells us what percentage (y-axis) of participants scored under a nominated score (x-axis). For example, 50% of participants in both the 2D and 3D groups scored under 4 in the pre-quiz total score. As can be seen, the 2D and 3D pre-quiz total score distributions were effectively the same. For a detailed analysis of each of the Bloom's cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
Figure 61. Results: Pre-Quiz Totals - Inverse Cumulative Normal Distribution Graph
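The percentages read off a curve like Figure 61 follow from the normal cumulative distribution function. A minimal sketch, using the 2D group's pre-quiz mean and standard deviation from Table 9 (the function itself is standard; only its application here is an assumption):

```python
import math

def pct_under(score, mean, sd):
    """Normal-approximation percentage of participants scoring under `score`."""
    z = (score - mean) / sd
    # Standard normal CDF via the error function
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# 2D group pre-quiz totals: mean 3.69, sd 1.372 (Table 9)
p = pct_under(4, 3.69, 1.372)  # fitted percentage expected to score under 4
```

By construction, exactly 50% of the fitted distribution falls below the mean, which is why both groups' curves cross the 50% line near a score of 4.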
Figure 62 provides a histogram and normal distribution curve of the total pre-quiz achievement scores. Both graphs provide frequency distributions but in different forms. The histogram provides the number of participants (frequency, y-axis) that scored between 1 and 8 (x-axis). The Gaussian distribution (or bell curve) provides the probability (y-axis) that a participant would score between 1 and 8 (x-axis), based upon the average and standard deviation of the scores within each group. For a detailed analysis of each of the Bloom's cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
Figure 62. Results: Pre-Quiz Totals - Histogram & Bell Curve
====4.2.2.1 Pre-Quiz Significant Results====
An independent t-test was performed on the pre-quiz total scores to ensure that the groups did not differ significantly in their prior knowledge of the lecture content on 'The Physics of Bridges'; they did not (t = -0.367, df = 119, two-tailed p = 0.714, α = 0.05).
Although no significant difference was found between the two groups' pre-quiz total scores, the scores for each of Bloom's cognitive processes of 'remember' and 'understand' did differ significantly between the groups. The 2D group scored significantly higher than the 3D group for the Bloom's cognitive process of 'remember' (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05). The 3D group scored significantly higher than the 2D group for the Bloom's cognitive process of 'understand' (t = -3.03167, df = 109, one-tailed p = 0.0014, α = 0.05). Appendix J: Pre-Quiz Score Results provides a detailed analysis of these results.
===4.2.3 Post-Quiz Results===
Table 10 provides the results of the 2D and 3D groups for the post-quiz achievement scores. The post-quiz contained 20 questions, which were divided evenly between Bloom's factual cognitive processes of 'remember' and 'understand', giving 10 questions per cognitive process. As with the pre-quiz, the pass rate is a measure of how many participants scored 50% or higher on their quiz.
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"| 85%
|align="right"|35%
|align="right"|67%
|align="right"|93%
|align="right"|36%
|align="right"|77%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"| 7
|align="right"|3.98
|align="right"|10.98
|align="right"|7.32
|align="right"|4.04
|align="right"|11.36
|-
|'''Median Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-
|'''Minimum Score'''
|align="right"|3
|align="right"|0
|align="right"|5
|align="right"|3
|align="right"|1
|align="right"|6
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|10
|align="right"|8
|align="right"|17
|align="right"|10
|align="right"|8
|align="right"|17
|-
|'''Standard Deviation'''
|align="right"|1.846
|align="right"|1.484
|align="right"|2.468
|align="right"|1.597
|align="right"|1.464
|align="right"|2.347
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.642
|align="right"|0.068
|align="right"|0.052
|align="right"| -0.941
|align="right"|0.332
|align="right"| -0.229
|-
|'''Kurtosis'''
|align="right"| -0.729
|align="right"| 0.558
|align="right"| -0.152
|align="right"| 0.672
|align="right"|0.010
|align="right"| 0.265
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 10. Post-Quiz Descriptive Statistical Results'''''</p>
Figure 63 provides an inverse cumulative normal distribution graph for the total post-quiz scores. As described above, this graph displays what percentage of participants scored under a nominated score.
Figure 63. Results: Post-Quiz Totals Inverse - Cumulative Normal Distribution Graph
Figure 64 provides a histogram and normal distribution curve of the post-quiz scores. As with the pre-quiz graphs above, these graphs show the frequency distribution for both the 2D and 3D groups.
Figure 64. Results: Post-Quiz Totals - Histogram & Bell Curve
====4.2.3.1 Post-Quiz Significant Results====
An independent t-test performed on the post-quiz total scores of the 2D and 3D groups showed no significant difference between the results of these groups (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05). Appendix K: Post-Quiz Score Results provides a detailed analysis of these results.
The next section provides an analysis of the results for each of Bloom's cognitive processes, testing for significant difference between the post-quiz results under the stated hypotheses.
===4.2.4 Hypotheses Results===
As stated in Chapter 3 the operational hypotheses for this research study were as follows:
:(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
:(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
This section discusses the test results for no significant difference using the null hypotheses H<sub>01</sub> and H<sub>02</sub>.
====4.2.4.1 Hypothesis One: Post-Quiz Remember====
Figure 65 provides the histogram and density trace graphs for the post-quiz results, where 10 questions were given to both the 2D and 3D groups for Bloom's cognitive process of 'remember'. As discussed in the previous section, the histogram provides the frequency distribution of participants' scores. A density trace graph is provided instead of the normal distribution graph because these scores were not normally distributed; it provides an alternative view of frequency similar to the histogram.
Figure 65. Results: Post-Quiz Remember - Histogram & Density Traces
'''Hypothesis H<sub>01</sub>'''
The null hypothesis tested H<sub>01</sub>:
:That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>01</sub> was tested using the non-parametric Mann-Whitney U Test, as the post-quiz 'remember' scores did not meet the assumptions for parametric testing, which require the scores to be normally distributed. The 3D scores failed the D'Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05), therefore the scores from this group deviate significantly from a normal distribution. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>01</sub>'''
Using the following Mann-Whitney U Test formula to find U:
:<math>U = n_1 n_2 + \frac{n_1(n_1+1)}{2} - R_1</math>
Where:
:<math>n_1</math> = number of group 1 subjects
:<math>n_2</math> = number of group 2 subjects
:<math>R_1</math> = rank total for the group with the smallest rank sum
:<math>W</math> = the critical value of U
'''Results H<sub>01</sub>'''
When applied, the Mann-Whitney U Test found no significant difference between the 2D and 3D post-quiz 'remember' scores: the average ranked scores were 2D = 53.9364 and 3D = 58.0268, giving U = 1653.5, W = 113.5, two-tailed p = 0.493107; thus we do not reject the null hypothesis for α = 0.05. (Note: there is a distinct "observable" difference between these two groups, just not a statistically significant one. This is explored in the next chapter.)
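The U statistic described above can be computed directly from two score lists. A minimal plain-Python sketch (the example scores are invented, not the study data; ties receive their average rank):

```python
def mann_whitney_u(group1, group2):
    """Mann-Whitney U statistic, with average ranks assigned to tied scores."""
    combined = sorted(group1 + group2)
    # Average rank for each distinct value (ranks start at 1)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(group1), len(group2)
    r1 = sum(ranks[x] for x in group1)
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    return min(u1, n1 * n2 - u1)  # report the smaller of the two U values

# Invented example: fully separated groups give U = 0
u = mann_whitney_u([1, 2, 3], [4, 5, 6])
```

Completely separated groups yield U = 0, while heavily overlapping groups push U toward its maximum of n1·n2/2, which is why the reported U = 1653.5 (close to 55×56/2) was not significant.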
====4.2.4.2 Hypothesis Two: Post-Quiz Understand====
Figure 66 provides the histogram and normal distribution curve for Bloom’s cognitive ‘understand’ results of the 2D and 3D groups for the post-quiz achievement scores. As discussed above these graphs display the frequency distribution of both the 2D and 3D groups where 10 questions were given in the post-quiz for Bloom’s cognitive process of ‘understand’.
Figure 66. Results: Post-Quiz Understand - Histogram & Bell Curve
'''Hypothesis H<sub>02</sub>'''
The null hypothesis tested H<sub>02</sub>:
:That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>02</sub> was tested using the parametric independent t-test of equal variance as the results met the assumptions for parametric testing. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>02</sub>'''
Using the following t-test formula to find t:
:<math>t = \frac{\bar{x}_1 - \bar{x}_2}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},\qquad s_p = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}</math>
Where:
:<math>\bar{x}_1</math> = the mean of group 1
:<math>\bar{x}_2</math> = the mean of group 2
:<math>n_1</math> = number of group 1 subjects
:<math>n_2</math> = number of group 2 subjects
:<math>s_1</math> = the standard deviation of group 1
:<math>s_2</math> = the standard deviation of group 2
'''Results H<sub>02</sub>'''
The results of an independent t-test found no significant difference (t = -0.1926, df = 109, two-tailed p = 0.8477, α = 0.05) between the results of the 2D (x1 = 3.982, s1 = 1.484) and 3D (x2 = 4.036, s2 = 1.464) post-quiz ‘understand’ scores, thus we do not reject the null hypothesis.
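The equal-variance t value can be recomputed from the summary statistics alone. A sketch using the rounded means, standard deviations and group sizes reported above:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Independent-samples t statistic assuming equal variances."""
    # Pooled variance weights each group's variance by its degrees of freedom
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Post-quiz 'understand': 2D (3.982, 1.484, n=55) vs 3D (4.036, 1.464, n=56)
t = pooled_t(3.982, 1.484, 55, 4.036, 1.464, 56)
# about -0.193 from these rounded inputs; the thesis reports t = -0.1926
```

The small discrepancy against the reported value comes only from feeding in the rounded table figures rather than the raw scores.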
===4.2.5 Survey Results: Likert Scales===
Table 11 displays the percentages of the post-survey results divided into content knowledge, delivery method and technology. The content knowledge and delivery method questions were standardised into 3-point scales for analysis.
{|align=center width=80%
|-
|
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightyellow
|
|align=center |'''''Content Knowledge'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|-
|21
| My level of understanding of the topic PRIOR to subject delivery.
|align="right"| 89%
|align="right"| 9%
|align="right"| 2%
|align="right"| 91%
|align="right"| 5%
|align="right"| 4%
|-bgcolor=lightgrey
|22
| My level of understanding of the topic AFTER subject delivery.
|align="right"| 22%
|align="right"| 51%
|align="right"| 27%
|align="right"| 23%
|align="right"| 50%
|align="right"| 27%
|-bgcolor=lightyellow
|
|align=center |'''''Delivery Method & Learning Experience'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|-
|23
|Outline of subject material was clear and informative.
|align="right"| 98%
|align="right"| 2%
|align="right"| 0%
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|-bgcolor=lightgrey
|24
|The lecture was detailed enough to provide an understanding of subject matter.
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|align="right"| 93%
|align="right"| 7%
|align="right"| 0%
|-
|28
|I found the in-world experience offered me a better learning experience than my usual methods of learning
|align="right"| 74%
|align="right"| 22%
|align="right"| 4%
|align="right"| 73%
|align="right"| 25%
|align="right"| 2%
|-bgcolor=lightgrey
|29
|I found the subject material to be appropriate to virtual world learning
|align="right"| 84%
|align="right"| 13%
|align="right"| 3%
|align="right"| 79%
|align="right"| 18%
|align="right"| 3%
|-bgcolor=lightyellow
|
|align=center |'''''Technology'''''
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|-
|26
|During the course I experienced technical difficulties with the environment
|align="right"|91%
|align="right"|9%
|align="right"|
|align="right"|93%
|align="right"|7%
|align="right"|
|}
<p align=center >'''''Table 11. Survey Likert Scales Results'''''</p>
The content knowledge questions addressed the participant's subjective impression of their knowledge before and after attending the presentation. Both groups perceived an increase in their understanding of the subject matter after the lecture. The delivery method questions measured the subjective satisfaction levels with the virtual world 2D or 3D delivery methods (as appropriate). Both the 2D and 3D groups indicated very high levels of satisfaction. The technology question assessed whether a participant experienced any technological constraints on their reception of the learning material. From the results presented above, only a few participants experienced technological problems.
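Collapsing a 5-point Likert item into the 3-point positive/neutral/negative split shown in Table 11 can be sketched as follows. The band mapping and the response list are illustrative assumptions, not the survey's actual coding:

```python
from collections import Counter

# Assumed mapping: 1=strongly disagree .. 5=strongly agree, folded into 3 bands
BANDS = {1: "negative", 2: "negative", 3: "neutral", 4: "positive", 5: "positive"}

def collapse_likert(responses):
    """Return the percentage of responses in each band, as whole percents."""
    counts = Counter(BANDS[r] for r in responses)
    n = len(responses)
    return {band: round(100 * counts.get(band, 0) / n)
            for band in ("positive", "neutral", "negative")}

# Hypothetical responses to one survey item
pcts = collapse_likert([5, 4, 4, 5, 3, 4, 2, 5, 4, 4])
```

For the invented responses above this yields 80% positive, 10% neutral and 10% negative, the shape of row presented in Table 11.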
==4.3 Qualitative Analysis Results==
===4.3.1 Introduction===
Qualitative analysis was performed, using the methods discussed in section 3.9.4 of this thesis, on the 2D and 3D groups' open question set (questions 25, 30, 31 and 32) contained in the post survey. In this section we present a brief overview of how the analysis was performed and the major themes that emerged from the qualitative analysis. Interpretation of these results is discussed in the next chapter of this thesis.
===4.3.2 Analysis Approach===
Hermeneutic analysis of the post-survey open questions was performed using an iterative approach, coding the data into contextual structures and common themes among the 2D and 3D post-survey responses. Data was first condensed into 2D and 3D categories and then into individual question categories. Open coding uncovered general themes within each question; to assist this stage of coding, each participant's question responses were read as a whole in order to reveal the entire context of their individual responses. Once a generic set of themes emerged, axial coding was performed to form relationships across the entire set of 2D and 3D group question responses. Open coding and axial coding took several iterations before selective coding was performed, revealing 4 major themes along with sub-themes, which can be seen in Table 12 below. These themes, along with their meaning, are discussed below.
===4.3.3 Themes of the Open Survey Questions===
The open questions were as follows:
*DELIVERY METHOD ASSESSMENT (Q 25) General Comment:
*VIRTUAL WORLD LEARNING EXPERIENCE
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
{| align="center" style="border-collapse: collapse; border-width: 1px; border-style: solid; border-color: #000"
|-
|align=center|'''''Theme'''''
|align=center|'''''Sub-Theme'''''
|-bgcolor=lightgrey
|'''Virtual World Learning'''
|
|-
|'''Virtual Learning Campus'''
|
|-bgcolor=lightgrey
|'''Lecture Delivery'''
|
*Format
*Information Content
*Learning
*Facets of 3D Learning
*Instruction
*Focus
*Navigation
*Technical Constraints
|-
|'''Survey Instrument'''
|
|}
<p align=center >'''''Table 12. Qualitative Analysis Results: Themes'''''</p>
The above themes were classified as follows:
*'''Virtual World Learning''': This category included the aspects of a participant's experience while using the virtual world as a learning platform. The comments in this category were not specific to the experiment but rather to the experience of the virtual world medium as a learning tool: the general features and characteristics of a virtual world that a participant liked or disliked about this method of learning, and their overall impression of using the virtual world as a learning platform.
*'''Virtual Learning Campus''': This category included comments about the virtual learning campus experience. These comments related specifically to the set-up and operation of the entire virtual learning environment within the virtual world.
*'''Lecture Delivery''': This category was the major category that included comments about the lecture experience of a participant that was specific to the lecture delivery treatment they received. This category contained sub-themes as follows:
**'''Format''': The style and layout of the presentation, how the information was presented.
**'''Information Content''': The depth and breadth of information content presented about the topic (The Physics of Bridges).
**'''Learning''': The aspects of obtaining new knowledge.
**'''Facets of 3D Learning''': This theme contained only comments from the 3D group, their perception of the use of 3D models as a learning tool in delivery.
**'''Instruction''': The method by which knowledge was transferred from the instructor to the learner, the interface between the presentation and the learner.
**'''Focus''': The observations affecting attention and the temporal experience of a participant within the virtual world whilst they were learning.
**'''Navigation''': Comments that related to controlling their avatar within the lecture theater.
**'''Technical Constraints''': Comments that related to technical constraints that a participant experienced during the lecture.
*'''Survey Instrument''': This category included comments that related to the pre or post quiz of the participant.
Figure 67 provides a diagram of the relationship of these themes in the context of the qualitative analysis performed on the survey results. In the next chapter we will discuss the results of this qualitative analysis.
Figure 67. Qualitative Analysis: Relationship of Virtual World Learning Themes
==4.4 Summary==
In this chapter we presented the quantitative and qualitative results of the research study.
A quantitative analysis was performed for both the 2D and 3D groups, whose numbers of participants were 55 and 56 respectively. The pass rates for the 2D and 3D groups' pre-quiz scores were 51% and 55% respectively.
A significance test performed on the results of the total pre-quiz showed no significant difference between the scores of each group. Significance tests performed on Bloom’s cognitive processes of ‘remember’ and ‘understand’ showed a significant difference between the groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ and the 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’.
The post-quiz pass rates for the 2D and 3D groups' total post-quiz scores were 67% and 77% respectively. In spite of this, the significance tests performed for Bloom's cognitive processes of 'remember' and 'understand' for the hypotheses showed no significant differences between the 2D and 3D groups' learning outcomes.
The post-survey results for the Likert scale questions were presented, dividing the results into positive, neutral and negative percentiles for both groups.
A qualitative analysis performed on the open questions contained in the post survey revealed 4 major themes in the survey comments of both groups combined. These themes were:
#Virtual world learning environment,
#Virtual learning campus,
#Lecture delivery and
#Survey components of the research study.
A definition for each of these themes was provided along with a relationship diagram.
In the next chapter we discuss the results presented in this chapter.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 5: Discussion & Conclusion=
==5.1 Introduction==
This chapter provides the analysis of the results presented in the previous chapter along with a discussion of these results and opportunities for further research.
In analysing the results, the researcher applied both quantitative and qualitative methods in order to answer the research question: How effective is it to learn in a virtual world using a traditional 2D slide show method compared to a 3D interactive simulation?
Quantitative methods were applied to participants' achievement scores for the pre and post quiz and to the Likert scale results. Qualitative methods were used on the responses to the participants' post-survey open questions.
The discussion of results applied triangulation, combining both the quantitative and qualitative results in order to better understand the 2D and 3D groups' learning experience and any differences observed between these groups.
This chapter concludes with a discussion on the opportunities for further research.
==5.2 Quantitative Analysis==
===5.2.1 The Results of the Hypothesis===
The aim of this study was to determine if two lectures differing only in the presence or absence of 3D models (and therefore employing either 2D or 3D learning delivery) in an online 3D virtual world would produce different learning outcomes for Bloom’s cognitive processes of ‘remember’ or ‘understand’. The following hypotheses were formed:
*(H<sub>1</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
*(H<sub>2</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Measured statistically, neither of the above hypotheses was sustained by the scored (quiz) testing results, as there was no statistically significant difference between the results of the two groups. The researcher applied statistical significance testing as the foundation for rejection of the null forms of the above hypotheses (i.e. that, in each case, the process will result in NO significant difference) based upon a statistically measurable difference. If no measurable difference is found between the samples, the primary hypotheses remain unconfirmed. An unconfirmed hypothesis does not mean the hypothesis is false; rather, it means it is capable of disproof and thus unconfirmed (Karl Popper's principles of falsifiability).
As the researcher was not able to refute the null hypothesis on the basis of a raw statistical comparison of the test scores, the researcher turned to the real data results to see if there was an actual (although possibly not significant) difference between the results of the two groups, or any clearly emerging or suggested trends that might qualify the implications of the raw statistical comparison.
===5.2.2 The Results of the Pre-Quiz===
====5.2.2.1 Pre-Quiz Total Scores====
Analysis of the results in the previous chapter for the total pre-quiz scores (i.e. both cognitive processes combined) between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 51% and 55% respectively; the 3D group's pass rate was therefore 4 percentage points higher (pass mark: 4 out of 8).
*Average scores (mean) for the 2D and 3D groups were 3.69 and 3.68 respectively. Both groups’ average scores were effectively the same.
*Median scores for the 2D and 3D groups were both the same with a value of 4.
*The mode for the 2D group was lower than for the 3D group, 3 and 4 respectively, demonstrating that more 2D participants scored a 3 whereas more 3D participants scored a 4. A score of 3 was obtained by 31% of the 2D group and 23% of the 3D group, and a score of 4 by 20% of the 2D group and 23% of the 3D group.
*The range of scores for the 2D group was less than the 3D group, 1-6 and 0-7 respectively.
*The standard deviation for the 2D group was less than for the 3D group, 1.372 and 1.479 respectively; therefore the 2D group's total scores were closer to the mean (average score) than the 3D group's.
*Skewness was positive for the 2D group and negative for the 3D group, 0.007 and -0.188 respectively. This demonstrates that the 3D group’s scores were slightly higher than the 2D scores. This skewness difference is due to the mode difference between the groups, as both the median and average scores were equal.
*Kurtosis was negative (platykurtic) for both groups. Platykurtic distributions are flatter at the top of the distribution curve and less peaked around the average score (mean). The slight difference in kurtosis across the two groups accounts for the lower probability density value in the Gaussian distribution graph in Figure 62 (Results: Pre-Quiz Totals - Histogram & Bell Curve).
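The descriptive statistics discussed in the list above can be sketched in a few lines. This is a minimal illustration, assuming SciPy is available; the score lists are hypothetical placeholders, not the study’s actual pre-quiz data.

```python
# Hedged sketch of the descriptive statistics used in this section.
# The score lists below are placeholder data, NOT the real quiz results.
from statistics import mean, median, mode
from scipy.stats import skew, kurtosis

scores_2d = [3, 4, 3, 5, 2, 3, 4, 6, 3, 4, 1, 5]   # placeholder 2D totals
scores_3d = [4, 4, 3, 5, 2, 4, 7, 0, 4, 3, 5, 4]   # placeholder 3D totals

for label, scores in (("2D", scores_2d), ("3D", scores_3d)):
    print(label,
          "mean=%.2f" % mean(scores),
          "median=%s" % median(scores),
          "mode=%s" % mode(scores),
          "skew=%.3f" % skew(scores),
          "kurtosis=%.3f" % kurtosis(scores))  # Fisher: negative = platykurtic
```

A negative `kurtosis` value here corresponds to the platykurtic (flatter-topped) shape described above, and a negative `skew` to a distribution whose tail extends toward lower scores.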
'''Summary & Interpretation: Pre-Quiz Total Scores'''
There was a 4% higher pass rate for the 3D group, and the mode value of the 3D group was higher than the 2D group’s for the total pre-quiz scores. The pass rate was higher because of the greater mode value obtained by the 3D group. The 3D group obtained a greater range of scores than the 2D group, leaving the 2D group with a tighter (smaller) distribution of scores around the mean.
Given the distribution of scores between the two groups, the 2D group had a higher probability of scoring around the mean than the 3D group (28% and 26% respectively). Thus, although the 3D group obtained a higher pass rate and mode value, a participant in the 2D group was 2% more likely to score a 4 than a participant in the 3D group. This small percentage difference can be seen in the inverse normal distribution graph in Figure 61: in the lower and higher quartiles the 2D group diverged from the 3D group. In the lower quartile, participants in the 2D group scored higher; in the higher quartile, they scored lower. This slight shift of the 2D curve toward the mean, away from the 3D curve, demonstrates that the 2D group was more likely to obtain the mean value than the 3D group.
Although there was a difference between the 2D and 3D group pre-quiz scores, the percentage difference was, in the opinion of this researcher, effectively immaterial, showing that both groups started with the same level of knowledge of the topic ‘The Physics of Bridges’ prior to the lecture.
The result of question 21 in the Likert scale survey is consistent with the above analysis. When asked to rate their level of knowledge of the topic ‘prior’ to the subject, the combined low plus medium responses for the 2D and 3D participants were 98% and 96% respectively, and the responses rating their knowledge as high were 2% and 4% respectively. This gives a 2% difference for both responses, which is comparable to the real results of the data analysis above. The difference in the participant groups’ subjective assessment thus matches that shown by the tested assessment.
====5.2.2.2 Pre-Quiz Remember and Understand Scores====
In the previous chapter we found that when a significance test was performed independently on Bloom’s cognitive processes of ‘remember’ and ‘understand’ for the pre-quiz a significant difference was found between the two groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05), and the 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’ (t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
The pass rates for Bloom’s ‘remember’ cognitive process for the 2D and 3D groups were 80% and 66% respectively, and for Bloom’s ‘understand’ cognitive process 35% and 52% respectively. The average scores for the 2D and 3D groups were 2.44 and 2.071 for ‘remember’ and 1.25 and 1.60 for ‘understand’ respectively. The standard deviations for the 2D and 3D groups were 1.032 and 0.775 for ‘remember’ and 1.263 and 0.867 for ‘understand’ respectively.
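The one-tailed significance tests reported above can be sketched as an independent two-sample t-test, halving the two-tailed p-value. This is a hedged illustration assuming SciPy; the score arrays are hypothetical placeholders, not the study’s data, so the printed statistics will not match the reported values.

```python
# Sketch of an independent-samples t-test with a one-tailed p-value.
# Placeholder data only; the real study reported t = 1.665, p = 0.0494.
from scipy.stats import ttest_ind

remember_2d = [3, 2, 4, 2, 3, 1, 4, 2, 3, 2]   # placeholder 'remember' scores
remember_3d = [2, 2, 3, 1, 2, 2, 3, 2, 2, 1]   # placeholder

t_stat, p_two_tailed = ttest_ind(remember_2d, remember_3d)
p_one_tailed = p_two_tailed / 2   # valid when t has the hypothesised sign
alpha = 0.05
print("t = %.3f, one-tailed p = %.4f, significant = %s"
      % (t_stat, p_one_tailed, (p_one_tailed < alpha) and (t_stat > 0)))
```

Halving the two-tailed p-value is the standard conversion for a directional hypothesis, provided the observed difference lies in the hypothesised direction.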
The scores for the Bloom’s splits at the pre-quiz stage are of passing interest in this experiment (independent of the post-quiz results) and the significant differences found for these figures were not especially surprising.
This experiment was not designed to measure and compare pre versus post learning outcomes of the participants. Rather, it was designed to find differences between the 2D and 3D groups comparative learning outcomes (i.e. the post-quiz results). In other words, the research was not trying to measure ‘by how much’ learning or understanding improves, but rather the relative difference in the final results between the 2D and 3D groups.
The pre-quiz was given to obtain an indicator of the general knowledge of the material that was to be delivered so that relative differences in outcomes could be normalised against the initial positions.
With the total number of pre-quiz questions being 8, of which each of the Bloom’s cognitive processes was represented by only 4 questions, there were not enough questions to reliably test the true levels of Bloom’s cognitive processes of ‘remember’ and ‘understand’ prior to the lecture. With so few data points for the individual processes, small variations in responses produce large variations in final scores. Hence the 2D/3D group variations were not especially surprising.
The problem for the research design was to avoid impacting the outcomes with the measurement instrument itself. The post-quiz was taken within approximately 30 minutes of the pre-quiz, with only a single lecture delivered between those two measurement points. Providing more than 8 questions in the pre-quiz for a single 20-minute lecture would have increased the risk that the participants learnt lecture material from the pre-quiz questions themselves.
Furthermore, the concept of ‘remember’ and ‘understand’ for Bloom’s cognitive processes prior to instruction does not especially make sense in the context of the experiment. As discussed in Chapter 3 (instrument design), the development of the questions within the instrument was based upon the lecture. ‘Remember’ questions were extracted from the instructional content of the lecture whereas the ‘understand’ questions were derived from material not taught in the lecture. The pre-quiz questions were also specifically targeted at the four bridge types covered in the lecture to calibrate the extent of pre-existing content knowledge.
A participant tested at each of these levels prior to instruction (where no certainty of prior learning experience with the topic can be established) can only be measured with respect to their pre-existing general knowledge of the topic, which may reflect either memory or understanding. To the extent that this analysis grouped the pre-quiz questions into ‘remember’ or ‘understand’, that grouping reflects only the researcher’s perfect knowledge of the lecture content, i.e. whether the topic of the question was subsequently directly taught in the lecture or not – not whether the participant was actually remembering or understanding at the pre-quiz stage.
The extent to which the split at the pre-quiz stage matters to the discussion is that if a participant already had an indicative level of ‘understanding’ prior to the lecture, that ‘understanding’ should improve when assessed after the lecture. If one group, for example, starts with a level of 60% and ends with 61%, this is possibly a worse outcome than the other group starting with 45% and ending with 58% (although there is also some discussion that could qualify even that conclusion).
===5.2.3 The Results of the Post-Quiz===
====5.2.3.1 Post-Quiz Total Scores====
An analysis of the results in the previous chapter for the total (i.e. combined Bloom’s) post-quiz scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 67% and 77% respectively; the 3D group’s pass rate was therefore 10 percentage points higher than the 2D group’s (pass mark: 10 out of 20).
*Average scores for the 2D and 3D groups were 10.98 and 11.36 respectively. A 3D participant scored on average 0.38 higher than a 2D participant.
*Median scores for the 2D and 3D groups were 11 and 12 respectively; the 3D participants scored higher in the second quartile than the 2D participants.
*The mode for the 2D group was lower than for the 3D group, 11 and 12 respectively, demonstrating that more 2D participants scored 11 and more 3D participants scored 12. A score of 11 was obtained by 20% of the 2D group and 21% of the 3D group, and a score of 12 by 11% and 29% respectively.
*The range of scores for the 2D group was more than the 3D group, 5-17 and 6-17 respectively.
*Standard deviation for the 2D group was slightly more than for the 3D group, 2.468 and 2.347 respectively; the 3D group’s total scores were therefore slightly closer to the mean (average score) than the 2D group’s.
*Skewness was positive for the 2D group and negative for the 3D group, 0.052 and -0.229 respectively. This demonstrates that the 3D group’s scores were slightly higher than the 2D scores. This skewness difference is due to the mean, median and mode differences between the two groups’ scores.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.2 and 0.3 respectively. As mentioned above, platykurtic distributions are flatter at the top of the distribution curve, whereas leptokurtic distributions are higher and more peaked around the mean score. The differences in kurtosis between the two groups account for the probability density value being higher for the 3D group in the Gaussian distribution graph in Figure 64.
'''Summary & Interpretation: Post-Quiz Total Scores'''
The above analysis finds that the 3D participants scored better overall than the 2D participants in the post-quiz. Although this difference was not statistically significant in the t-test results (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05), the real results indicate that there was a slight difference between the two groups’ results. Analysing the Gaussian distribution curve (Figure 64) shows that the 2D and 3D participants had a 15% and 16% likelihood respectively of scoring a 12 in their total post-quiz score. In general the overall results showed that the 3D group performed better by 1%; this can also be seen in the inverse distribution graph (Figure 63), where the two curves run almost parallel to one another, with the 3D group performing approximately 1% better in their overall test results.
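The likelihoods of scoring a 12 quoted above can be recovered by evaluating a Gaussian density fitted to each group. A minimal sketch, assuming SciPy, using the post-quiz means and standard deviations reported in the list above:

```python
# Sketch: probability density at a score of 12 under a normal fit for
# each group, using the means and SDs reported for the post-quiz totals.
from scipy.stats import norm

groups = {"2D": (10.98, 2.468), "3D": (11.36, 2.347)}  # (mean, sd) from text

for label, (mu, sd) in groups.items():
    density = norm.pdf(12, loc=mu, scale=sd)
    print("%s: density at score 12 = %.3f" % (label, density))
    # ≈ 0.148 for 2D and ≈ 0.164 for 3D, matching the 15% / 16% figures
```

This is a simplification in that it treats the discrete quiz scores as continuous, but it reproduces the approximate 15% versus 16% reading of the curves in Figure 64.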
The results of question 22 in the Likert scale survey (asking participants to rate their level of knowledge of the topic ‘after’ the lecture) showed low responses of 22% and 23% and medium responses of 73% and 74% for the 2D and 3D participants respectively. At the medium level the self-assessment was consistent with the 1% difference in the test results. At the low level the 3D group seemed more conservative, perceiving their knowledge to be less than the 2D group’s, although the real results showed the contrary. In either case a 1% difference is within the margin of error.
====5.2.3.2 Post-Quiz Remember Scores====
Analysis of the results in the previous chapter for the post-quiz ‘remember’ scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 85% and 93% respectively; the 3D group’s pass rate was therefore 8 percentage points higher than the 2D group’s (pass mark: 5 out of 10).
*Average scores for the 2D and 3D groups were 7 and 7.32 respectively. The 3D participants scored on average 0.32 higher than the 2D participants.
*Median and mode scores for the 2D and 3D group was 8 for both groups.
*The range of scores for both groups was the same, 3-8.
*Standard deviation for the 2D group was higher than the 3D group 1.8 and 1.6 respectively, with a 0.2 difference between the groups.
*Skewness was negative for both groups, with 2D and 3D skews of -0.6 and -0.9 respectively; both distributions were moderately left-skewed, the 3D group’s slightly more so, with a 0.3 difference between the two groups.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.7 and 0.7 respectively.
'''Summary & Interpretation: Post-Quiz Remember Scores'''
The post-quiz ‘remember’ scores mask a complexity that requires further consideration. Although the 2D group was normally distributed, the 3D group failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05). In order to compare the results of the 2D and 3D groups meaningfully, the researcher needed to examine why the 3D group failed the normality test and what, if anything, this implies for the interpretation of the apparently “better” 3D pass rates.
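The normality check referred to above is available in SciPy as `scipy.stats.normaltest`, which implements the D’Agostino-Pearson K2 test. The sketch below uses synthetic placeholder data, contrasting a roughly normal sample with a bimodal one like the 3D histogram described next; it is an illustration of the test, not the study’s analysis.

```python
# Sketch of the D'Agostino-Pearson K2 normality test on placeholder data:
# one near-normal sample and one bimodal sample (two peaks, as in the
# 3D 'remember' histogram). The real 3D scores failed with p = 0.01161.
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(0)
near_normal = rng.normal(loc=7, scale=1.6, size=60)        # placeholder
bimodal = np.concatenate([rng.normal(3, 0.5, 15),          # peak at 3
                          rng.normal(8, 0.5, 45)])         # peak at 8

for label, data in (("near-normal", near_normal), ("bimodal", bimodal)):
    k2, p = normaltest(data)
    print("%s: K2 = %.3f, p = %.5f, reject normality = %s"
          % (label, k2, p, p < 0.05))
```

A p-value below 0.05 rejects the null hypothesis that the sample came from a normal distribution, which is the sense in which the 3D group “failed” the test.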
Analysis of the histogram and density traces in Figure 65 shows that both the 2D and 3D histograms display a bimodal distribution, with two peaks at 3 and 8. As can be seen in the density traces graph, the variance of the 2D scores between 3 and 8 was greater, which causes the 2D curve to flatten prior to its peak.
Although the statistical analysis determined that the difference between the pass rates and means (on which the 3D group was higher than the 2D group) was not significant when taken as a whole, there is a clear visual difference between the graphs that deserves explanation. When considered within specific score ranks, the outcome slightly favours the 3D group because:
#2D group participants were 8% more likely to score 4 or below,
#3D group participants were 6% more likely to score 8 or above, and
#3D group participants were 2% more likely to score 9 or above.
This analysis can easily be seen in the frequency table below (Table 13. Frequency Table: Post-Quiz Remember).
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|colspan="4" align="center" |'''Post-Quiz Remember'''
|-
|align=center|'''Score'''
|align=center bgcolor="#DDADAF" |'''2D'''
'''(Cumulative)'''
|align=center bgcolor="lightblue" |'''3D'''
'''(Cumulative)'''
|align=center bgcolor="lightgrey"|'''Difference'''
'''3D vs. 2D'''
|-
|align=right |0
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |1
|align=right | 0%
|align=right | 0%
|align=right | 0%
|-
|align=right |2
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |3
|align=right | 4%
|align=right | 4%
|align=right | 0%
|-
|align=right |4
|align=right | 15%
|align=right | 7%
|align=right | -8%
|- bgcolor="lightgrey"
|align=right |5
|align=right | 25%
|align=right | 13%
|align=right | -12%
|-
|align=right |6
|align=right | 33%
|align=right | 27%
|align=right | -6%
|- bgcolor="lightgrey"
|align=right |7
|align=right | 47%
|align=right | 41%
|align=right | -6%
|-
|align=right |8
|align=right | 78%
|align=right | 80%
|align=right | 2%
|- bgcolor="lightgrey"
|align=right |9
|align=right | 98%
|align=right | 96%
|align=right | -2%
|-
|align=right |10
|align=right | 100%
|align=right | 100%
|align=right | 0%
|}
<p align="center" >'''''Table 13. Frequency Table: Post-Quiz Remember (Rounded)'''''</p>
The frequency table shows a cumulative analysis of each group at each score. As can be seen in the table, the 3D cumulative percentages were generally lower than the 2D percentages at every score below 8. The implication is therefore that the relative performance of 3D versus 2D ‘remember’ outcomes is slightly better at the higher rankings (80% and above), but slightly worse at the lower pass-mark scores.
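The cumulative comparison in Table 13 can be sketched as follows. This is a minimal illustration with placeholder score lists (not the study’s data): for each score it gives the percentage of each group scoring at or below that score, and the 3D-versus-2D difference.

```python
# Sketch of the cumulative frequency comparison behind Table 13.
# Score lists are placeholders, NOT the real post-quiz 'remember' scores.
def cumulative_pct(scores, max_score=10):
    """Percentage of participants scoring <= k, for k = 0..max_score."""
    n = len(scores)
    return [round(100 * sum(1 for s in scores if s <= k) / n)
            for k in range(max_score + 1)]

remember_2d = [4, 5, 6, 8, 8, 8, 9, 7, 8, 10]   # placeholder scores
remember_3d = [6, 7, 8, 8, 8, 9, 9, 8, 10, 8]   # placeholder scores

cum_2d = cumulative_pct(remember_2d)
cum_3d = cumulative_pct(remember_3d)
for k, (a, b) in enumerate(zip(cum_2d, cum_3d)):
    # A negative difference means fewer 3D participants at or below score k,
    # i.e. the 3D group is doing relatively better at that cut-off.
    print("score %2d: 2D %3d%%  3D %3d%%  diff %+d%%" % (k, a, b, b - a))
```

Reading the table this way makes the band-by-band interpretation explicit: a lower cumulative percentage at a given score means more of that group’s participants sit above the cut-off.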
While the difference in the means may not be statistically significant, the results suggest that the outcomes at particular bands are potentially significant. To put this into context; if the desired group learning outcome is to achieve a pass or better, both methods of delivery were similar, but if the desired outcome is to maximise the potential scores, the 3D delivery might be indicated.
In general, the overall performance of both groups was better than the scores obtained for Bloom’s cognitive process of ‘understand’, which we discuss in the next section.
====5.2.3.3 Post-Quiz Understand Scores====
Analysis of the results in the previous chapter for the post-quiz ‘understand’ scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 35% and 36% respectively; the 3D group’s pass rate was therefore 1 percentage point higher than the 2D group’s (pass mark: 5 out of 10).
*Average scores for the 2D and 3D groups were 3.98 and 4.04. A 3D participant scored on average 0.05 higher than a 2D participant.
*Median and mode scores for the 2D and 3D group was 4 for both groups.
*The range of scores for the 2D group was more than the 3D group, 0-8 and 1-8 respectively.
*Standard deviation for the 2D group was slightly higher than the 3D group 1.48 and 1.46 respectively. A 0.02 difference between the groups shows very little difference in standard deviation.
*Skewness was positive for both groups, 0.068 for 2D and 0.332 for 3D. As both values were close to 0, with a 0.27 difference between the two groups, the distribution of results for both groups was almost symmetrical.
*Kurtosis was positive (leptokurtic) for both groups, 0.558 for 2D and 0.010 for 3D. The 0.55 difference indicates a clear difference between the two groups’ kurtosis values.
'''Summary & Interpretation: Post-Quiz Understand Scores'''
From the above analysis both groups scored almost the same for Bloom’s post-quiz ‘understand’ results. This is clear from a study of the histogram and Gaussian distribution curve in Figure 66: both the 2D and 3D data points are almost identical.
Further, the frequency distribution comparison of the two groups confirms that the scored results at each rating band of the 2D and 3D groups exhibit no considerable difference.
Bloom’s cognitive process of ‘understand’ is a higher-level cognitive process than ‘remember’. Given the pass results and the mean, median and mode scores, both groups scored ‘badly’ (35% – 36%) in Bloom’s cognitive process of ‘understand’. On the face of it, the results suggest that both groups did not show a ‘high’ level of understanding of the subject matter after training; however, it should be remembered that the mean, median and mode results are a reflection of the difficulty relationship between the questions testing understanding and the lecture itself. The decision was made during the design stage to include some ‘very high’ difficulty questions in the understanding question set to ensure a real test of the achieved level of understanding. Some additional light is shed on these results in the Likert scale and qualitative analysis that follows.
This research is primarily interested in the comparative difference of the 2 delivery methods, rather than the absolute scores, and for this purpose the results suggest that there is no significant or effective difference between the 2D and 3D group testing (quiz) results for the ‘understand’ cognitive process, within the confines of this experimental process.
===5.2.4 Likert Scale Analysis===
The above analysis of the quiz results showed a positive result for Bloom’s cognitive process of ‘remember’, whereas for Bloom’s ‘understand’ there seemed to be fewer participants in both groups who understood the subject matter of ‘The Physics of Bridges’ to the same level that they remembered it. In order to understand this result we turn to the Likert scales, where we asked the participants to assess the quality of the delivery method. Questions 23 and 24 specifically addressed these questions.
*Question 23 asked whether “the subject matter was clear and informative”. The 2D and 3D groups’ responses were 98% and 100% positive and 2% and 0% neutral respectively. With the exception of the 2% neutral response, it would seem that the majority of people found the subject matter to be clear and informative. Of interest, the 2% neutral result was a single participant who actually performed better than the group’s average in the post-quiz results for both cognitive processes of ‘remember’ and ‘understand’, with z-scores of 0.54 and 0.69 respectively. Given their actual results, it seems that within their group this participant understood the material better than they remembered it.
*Question 24 asked whether the lecture was detailed enough to understand the subject matter. The 2D and 3D groups’ responses were 100% and 93% positive and 0% and 7% neutral respectively. Of interest were the neutral responses, which all came from the 3D group. These comprised 4 participants, all of whose post-quiz z-scores for both cognitive processes of ‘remember’ and ‘understand’ were below the group’s average, with the exception of one who scored better on their ‘understand’ post-quiz score than their ‘remember’ score.
From the above results of questions 23 and 24, the majority of participants perceived that the lecture material was clear, informative and detailed enough for them to understand the subject matter. The few in the 3D group who were only neutrally satisfied that the level of detail was sufficient to understand the topic achieved post-quiz z-scores below the group average, so their self-assessment seemed to be correct.
Question 29 asked if the topic was appropriate to virtual world learning. This question was asked in order to gain an understanding of participants’ views on the choice of topic delivered for instruction. The majority response for both groups was positive, with the 2D and 3D groups’ responses 84% and 79% positive and 13% and 18% neutral respectively. Within the 2D and 3D groups the neutral responses accounted for 7 and 10 participants respectively. For the neutral responders in the 2D group, the z-scores showed that 4 performed below average for the cognitive process of ‘remember’ and 2 for ‘understand’; within the 3D group, 5 performed below average for ‘remember’ and 7 for ‘understand’. It seems from these results that, although the majority of the participants were positive about the choice of topic, a few were neutral about the appropriateness of the material to the environment – more so in the 3D group – in spite of the fact that the material was identical in both cases. Given their z-score results, the neutral 2D responders still performed better for ‘understand’ than ‘remember’, while the neutral 3D responders appeared not to ‘remember’ or ‘understand’ the topic well – suggesting their relative (to the group) self-assessment was consistent with their relative scored outcomes.
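The z-score comparisons used throughout this Likert analysis express a participant’s quiz score relative to their group’s mean, in units of the group’s standard deviation. A minimal sketch with placeholder values (the group scores and the participant’s score below are hypothetical):

```python
# Sketch of the z-score used to compare a participant to their group.
# Placeholder numbers; the real analysis used the actual group statistics
# (and may have used the sample rather than population standard deviation).
from statistics import mean, pstdev

def z_score(x, group):
    """Standardised score: (x - group mean) / group standard deviation."""
    return (x - mean(group)) / pstdev(group)

group_scores = [5, 6, 7, 7, 8, 8, 9, 10, 6, 7]   # placeholder group scores
participant = 9
print("z = %.2f" % z_score(participant, group_scores))  # > 0: above average
```

A positive z-score places the participant above the group average, which is the sense in which the neutral responders with negative z-scores were described as performing below average.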
Question 28 asked participants whether the in-world learning method offered a better learning experience than their usual (real world) learning methods. The results for the 2D and 3D groups were 74% and 73% positive, 13% and 18% neutral and 3% and 3% negative respectively. Although the overall result was positive, there was more variance with respect to quiz scores in the responses to this question.
Question 26 asked participants if they experienced any technical difficulties. The majority of participants in both groups did not indicate any: the responses for the 2D and 3D groups were ‘No’ 91% and 93% and ‘Yes’ 9% and 7% respectively. For the participants who answered yes, the major problems were sound and picture loading delay (lag); all of them commented that it lasted only a short period and was rectified quickly. Although only a small number of participants answered yes to this question, the open-format questions showed that slightly more had experienced some technical issues (although apparently not perceived as sufficient to warrant a “yes” to this question), which will be discussed in the next section.
This group of questions essentially assessed the participant’s perception of quality, appropriateness, purpose and “fit” to the medium of the experience. Necessarily the responses to these questions are likely to be coloured by the participant’s perception of the lecture delivery system experienced (i.e. 2D or 3D). Throughout this group of questions the responses were very strongly positive while the worst grade with a significant number of responders was neutral (excluding Q26). With the exception of the assessment of the clarity of the material, the Likert assessments slightly favoured the 2D delivery method.
The slight favouring of the 2D delivery could be either an absolute result, or a result coloured by raised expectations of one or other of the two delivery methods. We need to investigate, therefore, the qualitative analysis of the open questions to adequately interpret this slight bias in the results.
Question 26 was a check-question included to allow explanation of the results in the other questions, should those results have proven dramatically negative.
==5.3 Qualitative Analysis==
From the qualitative analysis of the post-survey responses many aspects came out about the learning experience of participants as well as the differences between the two groups in this study.
===5.3.1 Thematic Analysis Results===
As discussed in the previous chapter, the results of the post-survey open questions were grouped into themes and coded for qualitative analysis, in order to provide further insight into the achievement results and the learning experience of participants. Four themes emerged from analysis of the data:
*Virtual World Learning
*Virtual Learning Campus
*Lecture Delivery
*Survey Instrument
In this section we provide a thematic analysis of these themes that emerged from the post-survey.
====5.3.1.1 Virtual World Learning====
This theme specifically related to the use of the virtual world platform as a learning tool, rather than the delivery method of the presentation.
Convenience was the main factor mentioned by both groups. The themes identified included doing it from home, in one’s own time, and not having to travel in order to learn. These sorts of comments are not specific to virtual world learning technology, as today many educational courses cater for students via online delivery. However, there was a sense of presence that the participants felt from “being there with other people” and seeing others learn that seemed to make the experience more enjoyable to them than traditional or alternative learning methods. Quite a few commented on how the experience felt “personal like they were really sitting in a lecture room taking the course”: the atmosphere was relaxed and soothing, providing less pressure than traditional classroom methods of learning. These comments are interesting, partly because the lecture mirrored a real-world lecture in that it could not be “paused” by a participant and ran for a fixed time per slide, and a fixed time in total; to some extent it was therefore more rigid in delivery format than a real-world lecture, in which the lecture might be paused while a question is asked and answered.
Another theme that emerged was that this medium offered a new way of learning that was ‘on demand’, rather than a planned course for which one would have to prepare in advance. As with searching the web to find out about a specific topic, participants felt that this medium offered them a way to learn new material when they wanted, and to experience that material rather than just read it on a webpage. The lectures ran on a continuous loop over the experimental period – so this perception is reasonable, in spite of the fact that the lectures were not actually ‘on demand’.
The technology seemed to offer a learning medium that could reach people who traditionally would not undertake formal learning, or who had never before used the virtual world for learning. It seemed to inspire people to want to learn more and do more learning exercises in and out of Second Life. For many participants this was a new experience: they had never thought about using online virtual worlds as a learning platform, having only used the medium as a game rather than for taking a course. After experiencing this study many were inspired to seek out more learning in Second Life or even in real life.
The overall impression from all the participants was that the virtual world learning experience was fun and enjoyable. Very few negative comments were made about the experience, other than that participants could see the potential for it not to be taken seriously, or for cheating. The experience seemed to open people’s minds to the opportunities for virtual world technology to be used seriously rather than just as a gaming environment. A comment from one participant sums up the general impression of this technology being used as a learning tool:
<blockquote >
I'm still not convinced that virtual learning can replace learning in real world but now I think it might be possible.
</blockquote >
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.2 Virtual Learning Campus====
This theme included comments made about the virtual learning campus, the setup and operations of the entire virtual learning environment in which the experiment was conducted.
The majority of comments were that participants found it to be ‘user friendly’ and ‘easy to use’. The layout of the different rooms seemed to provide a fun way for them to learn. Only 2 people commented on having a problem with the signage: when they got to the post-survey room they missed the board telling them how to take the post-quiz.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.3 Lecture Delivery====
This theme attracted the majority of comments from participants. These comments related directly to a participant’s learning experience of the research project. The range of comments was coded into sub-categories: format, information content, learning, facets of 3D learning, instruction, focus, navigation and technology constraints.
=====5.3.1.3.1 Format=====
This theme included comments on the layout and format of the slide presentation. The comments from both groups were mostly positive. Participants could offer comments in the positive, negative or general sections of the survey. In total there were 11 and 24 comments clearly identified as positive, and 3 and 1 as negative, from the 2D and 3D groups respectively in this theme.
The positive comments praised the layout of the slides and the way the information was presented. A few more negative comments came from the 2D group: one wished they had the ability to interact with the pictures on the screen, another wanted annotation on the images (similar to the interaction question), and one had problems with the colour differentiation of the tension and compression markings (tension and compression were shown in red and green respectively, suggesting either colour blindness or graphics card faults). Only one person from the 3D group made a negative comment in this area, identifying a desire for more pictures on the slides (the slides in the 2D and 3D lectures were identical).
While the largest proportion of responses to the general comments question was provided by the 3D group, a common suggestion received from both groups concerning the format was that they wished the presentation could be paused or controlled, such as by forwarding or rewinding. As a proportion of each group that actually provided a comment at all, this suggestion was marginally more frequent among the 2D participants.
With respect to the 3D group’s comments about presentation speed, it seemed that although they had been presented with a model and voice over that mirrored the images of the slides and the text therein, they still desired the opportunity to read the slides to view the information. The time per slide and the slides themselves were identical in both the 2D and 3D lectures and set to allow sufficient time for reading the slide – in fact the voice over effectively read the slide to the participant. In the 3D case the addition of the 3D models in the same time window meant that participants had an additional vector of information to absorb in the same amount of time as the 2D participants. The researcher’s impression from the comments is that in the 2D case the motivator was the desire to review and contemplate the information, while in the 3D case it was more to do with their ability to absorb multiple information vectors simultaneously.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.2 Information Content=====
This theme included comments to do with information content in the presentation. There were 56 comments from the 2D group and 33 from the 3D group.
For the most part people found the presentation very interesting and informative, but in this area the 2D group seemed to be more satisfied than the 3D group. Within the 3D group a number of people desired more information or perceived the information was too technical to appreciate without additional enquiry or time – yet the information in both cases was identical.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.3 Learning=====
This theme included comments to do with people obtaining new information. Comments from both groups were very positive. All participants that commented in this theme stated they enjoyed the experience of learning and gaining the new knowledge. Most seemed to enjoy the topic and the new knowledge on bridges that they took away with them, and/or considered that the material was well thought out and presented. Some commented that they enjoyed the opportunity to obtain new knowledge in the virtual world/game space and were inspired to seek additional in-world learning.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.4 Facets of 3D Learning=====
The comments in this category were specific to the 3D lecture with the use of models. The participants in the 3D group were universally positive about the use of 3D models. Many seemed to believe that having a model of the presentation assisted them in the understanding of the subject matter. (Note, however, that the test scores did not reflect a significant advantage from the 3D models with respect to understanding, although there were indications of an advantage in remembering).
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.5 Instruction=====
The comments in this category had to do with the method by which the new knowledge was transferred to the participant. In this area a small but significant number of participants in both groups commented that they missed having a real person to whom they could put questions to clarify the information. This was more pronounced in the 3D group, which seemed to want to find out more information about the topic than was presented to them. (Note, as mentioned, the information was identical in both cases).
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.6 Focus=====
The comments in this category had to do with observations affecting attention and the temporal learning experience of a participant.
This theme emerged through the general comments throughout the survey. There seemed to be two broad sub-groups of comments in the focus theme: the presence of distractions during the learning experience and the participant’s perception of the available time per slide for learning. Although both groups experienced the same general learning conditions and real-world times, there seemed to be opposing perceptions of the significance of sources of distraction and perceptions of time across the two groups during the presentation. We will break this category into these two sub-themes (distractions and time) to better understand the focus aspect of the participant groups.
'''Distractions'''
The sources of distractions seemed to come from either the outside world or the inside world.
:'''Inside world distractions'''
:Only 3 comments from the 2D group concerned distractions from the inside world experience: distracting avatars, a participant’s outfit getting in the way of their view and a participant distracted by their curiosity with the technology setup used to deliver and manage the lectures.
:In the 3D group, by contrast, quite a number of people complained about inside world distractions, particularly being annoyed with other avatars disrupting their learning. As a group, the 3D participants were comparatively emotional/animated (with respect to the 2D group) in their response to these distractions and in a number of cases complained that the other people were not taking education as seriously as they were.
:'''Outside world distractions'''
:A small number of the 2D group complained/commented about outside world distraction or commented upon the advantages of staying in touch with the outside world. Such comments as being able to answer the phone, using yahoo messaging, doing things at their desk and people in real life talking to them were some of the comments made from the 2D participants.
:Only one member of the 3D group commented upon outside world distractions.
'''Time'''
The main theme that emerged from the 2D group was that a small number of participants commented that the presentation was a bit slow, that their attention wandered and/or that they “zoned out” during some slides. Contrast this with the 3D group, who tended to say that the presentation was fast; a reasonable number even complained that it went too fast. The 3D group commented that the material kept them engaged and the presentation held their attention. In both cases the real-world times were identical – so the observations are directly related to perception, and in the light of other comments made, the implication is that there was a difference in perceived ‘engagement’ that arose from the single variable of the presence of the 3D objects.
The 2D participants who observed that occasionally they ‘zoned out’ during some of the slides also commented that the voice over was too smooth/calm. Nobody in the 3D group observed this problem; conversely, a number commented on how the voice over was exactly right for the presentation and kept their attention throughout. Interestingly, the voiceover was identical in each case – but the presence of the 3D objects appearing around participants may have presented an additional level of stress that was appropriately countered by the voice over.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.7 Navigation=====
Traditionally a significant problem in virtual-world training experiments, learning the appropriate method of avatar navigation has typically been compounded by the use of first-time virtual world participants unfamiliar with the control of their avatar. This researcher considered this a flaw in previous studies that distorted the results with a temporary experience that would be overcome with only a small amount of in-world experience. The participants in this study, therefore, were intentionally recruited from users already present in Second Life rather than brought into the virtual world specifically for the purpose of the experiment.
Consequently the negative comments on navigation were fewer than in previous studies, and not generally of the same fundamental ‘how do I operate my avatar?’ nature present in a number of the studies considered in the literature review. In any case the campus and lecture environment was specifically designed to minimise the likelihood of these types of problems, and required only minimal knowledge of avatar controls (sufficient for someone with about 30 minutes of experience – based on the packaged avatar training in the Second Life orientation islands).
The comments in this category had to do with how their avatar viewed the presentation. These comments were complaints from the 2D and 3D participants about some viewing aspect of the presentation.
Three (3) of the 2D group complained that the chairs blocked their view of the presentation. It was obvious from this comment that these people lacked the knowledge to use mouse view, used third person view instead, and did not understand how to control the third person roaming camera effectively.
The 3D group’s complaints provided the most insight as to how they viewed the presentation. A small, but significant, number of the participants complained that the 3D models of the bridges ‘got in the way’ of their reading of the slides (a function of navigation) or that they could not both read the slides and look at the models (a function of time). Although avatars were not seated once the 3D presentation began and were free to wander around the space, with slides projected onto the walls around the models, some users clearly did not realise that this additional freedom allowed them to position their avatar for clear slide viewing at any time. Further, it seemed that, although presented with a 3D model and a voice over that covered the entire slide content, a number of the 3D group still attempted to use the traditional method of reading the slides whilst looking at the models.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.8 Technology Constraints=====
This category contained comments by participants about the technology constraints that they had experienced during the lecture delivery. Although this question was also asked in the Likert questions provided in the previous section above, where the 2D and 3D groups responded 9% and 7% respectively, more participants identified technical problems in their open comments.
From the 2D and 3D groups’ comments, 20% and 18% respectively identified at least one technology constraint. All of the participants who had answered yes in the Likert question also commented, meaning that a further 11% in both groups commented upon having technology related problems without having flagged them in the Likert question. The technical difficulties were due to sound and lag/object rezzing, the same problems given by the participants in the Likert questions.
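The ‘further 11%’ figure follows directly from the percentages quoted above; a quick arithmetic check, using only the numbers reported in this section:

```python
# Check of the 'further 11%' figure, using only the percentages
# reported in this section of the thesis.
likert_2d, likert_3d = 9, 7      # % flagging problems in the Likert question
comment_2d, comment_3d = 20, 18  # % identifying a constraint in open comments

# Participants who commented but had not flagged a problem in the
# Likert question, as a percentage of each group.
extra_2d = comment_2d - likert_2d
extra_3d = comment_3d - likert_3d
print(extra_2d, extra_3d)  # 11 11
```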
As discussed in the literature review, this technology is streamed in real time, therefore ‘lag’ is a common risk in using this technology and will vary with network connection speed (real lag) and individual computer problems (false lag – but possibly the single most common culprit). However, no one commented that the lag affected their ability to learn. In most cases where it was reported, the lag caused only a slight delay in the slide show, with comments being that they experienced ‘some’ lag. As each slide, audio and object was independently synched together, lag problems could not accumulate across the slides and any synching problems were corrected with the next slide (or in some cases half way through a slide).
The sound constraints were only temporary in all cases. This problem was due to drop outs of the presentation voice-over. The problem was picked up early in the testing phase, where occasionally the audio would stop and a re-log of the application was required in order to get the audio back. As this was picked up in testing, signs were placed around the lecture screens instructing the participant to re-log if they experienced audio dropouts. In all cases where participants complained about the audio dropping, they also noted that a re-log solved their problem quickly. The impact of an immediate re-log on learning would be the loss of at most half the content of a slide. As all slides were summarised at points during the presentation, the participant was unlikely to completely miss the associated material.
====5.3.1.4 Survey Instrument====
This category included comments that related to the pre or post survey instrument.
There were 6 participants across both groups who commented that the pictures in the diagrams of the post-quiz were too small. From their comments they had trouble distinguishing some of the bridges in the pictures.
As the display size is based upon a person’s monitor size, people with small monitors may have had problems distinguishing the details in the pictures. The survey viewed correctly on a 17 inch monitor at 96 dpi, but anyone with a smaller monitor than this, or unusual resolution settings, may (possibly) have had problems.
This problem was not realised until quite a number of participants had already completed the research. It was therefore decided that any change in the picture size in the survey would only corrupt the experiment conditions and may bias the results so no modification was made. Therefore all participants that undertook this research operated under the same picture constraints in the survey.
On review of the results of the 6 participants that complained, 3 were from the 2D group and 3 from the 3D group. The participants’ post-quiz scores for ‘remember’ and ‘understand’ were, for the 2D group: 9, 7; 9, 4; 9, 4 and for the 3D group: 8, 4; 8, 5; 8, 4 respectively. All of these participants passed both of Bloom’s cognitive process categories. The z-scores against their groups’ averages for ‘remember’ were all above average, but for ‘understand’ these participants were either average or below average.
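The z-score comparison described above can be sketched as follows. Note that the group mean and standard deviation used here are purely illustrative assumptions; the thesis reports only the raw scores, not the group statistics.

```python
# Sketch of the z-score comparison described above. Group statistics
# (mean_2d, sd_2d) are illustrative assumptions, NOT figures from the study.

def z_score(score, group_mean, group_sd):
    """Standardise a raw quiz score against its group distribution."""
    return (score - group_mean) / group_sd

# Raw 'remember' scores of the three 2D complainants (from the text).
remember_2d = [9, 9, 9]

# Hypothetical group statistics for illustration only.
mean_2d, sd_2d = 7.5, 1.2

z = [round(z_score(s, mean_2d, sd_2d), 2) for s in remember_2d]
# All three z-scores are positive, i.e. above the group average,
# consistent with the observation in the text.
print(z)  # [1.25, 1.25, 1.25]
```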
There were 9 ‘remember’ questions and 8 ‘understand’ questions in the survey that required the participant to use pictures in answering the question. The Bloom’s cognitive process of ‘understand’ would have been more affected by the picture constraints. The questions in the Bloom’s ‘understand’ cognitive process were substantially more difficult, covering material that was not presented during the lecture, therefore the participant had to use the picture to recognise and assimilate information in order to answer the question.
The researcher notes that this problem may have contributed to some of the low score results, especially within the Bloom’s cognitive process of ‘understand’. Although from the comments only 6 out of 111 people complained about this problem, there is no way to know how much of a problem it presented. From the lack of comments we can only assume that this was not a constraint for most participants – or, at least, not one they realised they were experiencing.
===5.3.2 Qualitative Analysis of Thematic Results===
====5.3.2.1 Introduction====
The survey comment questions were not compulsory, but less than 4% of responses were nonsense or non-responses, with an average of 100 words per person, and 3D participants providing approximately 12% more comment volume than the 2D participants.
Interpreting the collected thematic responses was aided by the consistency of the emotion and approval expressed by participants, the surprising number of instant messages sent directly to the researcher by participants in thanks for the experience, and the range of both supportive comments and recommendations provided in the open comments. To that end the researcher offers the following generalised collation of the qualitative opinions expressed by participants.
The general lack of negative observations reflects the same proportion in the underlying data. Three positive and three negative observations were requested, as well as open/general comments. Overwhelmingly, the positive question was populated while the negative question was generally underpopulated, or populated with comments like ‘I have none’. The most frequent negative comments were an expressed desire to control the delivery speed, to acquire additional information in some way, or the opportunity for distraction. In some cases these were also identified as positives. The lack of colour in the negative comments was contrasted by the diversity of positive comments. Different participants chose to comment on different positive aspects of the experience, and an individual participant tended to concentrate comments within a theme.
To aid in interpretation of the analysis while avoiding the implication of hard statistical interpretation, where some degree of researcher subjectivity and ‘translation’ is involved, the researcher has used the following terms with some degree of overlap at the margins:
*Few – 5% or less of comments
*A number – 5% to 15% of comments
*A significant number – 15% to 25% of comments
*Many – More than 25% of comments
*A majority – More than 50% of comments
*Most – More than 60% of comments
Outside of these terms the researcher has provided clear absolute percentage counts where the numbers are at the extremes.
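The banding above can be sketched as a simple lookup. The `frequency_term` helper below is hypothetical (not part of the thesis) and resolves the deliberate overlap at the margins by favouring the stronger term, which is an assumption rather than the researcher’s stated rule.

```python
def frequency_term(pct):
    """Map a comment percentage to the descriptive term used in this analysis.

    The thesis notes deliberate overlap at the band margins; this sketch
    resolves each boundary by taking the stronger term (an assumption).
    """
    if pct > 60:
        return "Most"
    if pct > 50:
        return "A majority"
    if pct > 25:
        return "Many"
    if pct >= 15:
        return "A significant number"
    if pct > 5:
        return "A number"
    return "Few"

print(frequency_term(4))   # Few
print(frequency_term(10))  # A number
print(frequency_term(30))  # Many
```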
====5.3.2.2 The Virtual Learning Experience: Both Groups====
The two most used words participants chose to describe their experience were ‘fun’ and ‘interesting’. The frequency and strength of these positive comments, representing over 60% of the participants, surprised the researcher.
The virtual world seemed to offer participants a fun way to learn with the convenience of learning online in their own time but further, at least as the experimental campus and lecture rooms were constructed in this experiment, it offered a participant a sense of presence that provided them with the perception of an experience similar to that of learning in a real world learning environment. Seeing others in the environment while attending a lecture as their avatar in a simulated theatre gave the participant more of a connection to the learning process than one might expect from a purely HTML page based traditional online distance education course. To the majority of participants this experience felt personal and the atmosphere relaxed, and many found that it offered a more pleasurable experience than the traditional learning method of attending a lecture class in the real world.
The environment seemed to promote a favourable attitude to learning. Not only did the majority of the participants say it was “fun” but a number commented that they felt inspired to learn more about the topic, wanted to ask further questions on the same, or to seek more details, and a significant number expressed surprise that, although they clearly had experience of the topic in real life, they had never really considered how exciting a bridge could be. Only one participant expressed an unfavourable attitude to this form of learning and/or the topic.
Based on the comments, the average participant was clearly immersed in this aspect of virtual learning, as reflected by many comments that expressed varying degrees of ownership over the experience – and even, in some cases, resentment where others or extraneous circumstances had interfered in their learning.
To many this was a new experience in a virtual world and although they initially saw the offer of ‘linden’ as an easy way to make fast money, by the end of their experience, instead of thanking the researcher for the money, they thanked the researcher for the learning experience. Some of the comments expressed surprise that the game they had known before was no longer ‘just’ a game to them. Participation had opened the possibility for a whole new world of learning, inside and outside of Second Life.
The virtual learning campus provided the participant with a seamless way to learn. Many liked the staged approach reflected by the testing and learning process (necessary as part of the automated control regime for the experimental process), finding it a novel approach to the learning experience. Going from room to room to complete each stage in the learning process possibly made this more fun than an alternative virtual world learning approach utilising a single class room in which all stages of a process might occur. Not knowing where the teleports would lead them in the next stage of their journey provided an exploratory feel to the environment. Most participants found the environment very easy to use and welcoming.
The format and the information provided in the slide presentation received, for the most part, positive feedback. The request for more control over the slide show, to pause, forward and rewind, came from both groups. Enabling user control like this was not an option in this experiment, as control over the information delivery for both groups had to be placed under strict experimental conditions so that only one independent variable changed in the experiment – the presence or absence of the 3D models.
Even so, had this or a similar lecture not been under experimental conditions, the researcher cannot help but question whether this addition would have lessened the overall experience of the participant. Sharing in the learning process within a set time frame, and the pressure of the quiz after completion, may also have added to the positive experience felt by the participants. Possibly allowing the user to walk away with additional material may have provided the participant with the convenience to learn more than just the information presented. In addition, a live lecturer, as some participants would have liked to have seen, may also have satisfied the participants’ requirements for more controlled information.
Technology constraints certainly presented themselves in this experiment, with approximately 20% of the participants from both groups commenting upon technology issues to varying degrees. The major problems related to network latency (lag) and audio dropouts. In a streamed world (such as Second Life), especially when there are many avatars in a SIM, lag is a typical problem. Audio, although not as bad or as frequent as visual lag, does occasionally present a problem in Second Life. The audio stream is occasionally lost and the only way to fix the problem is to re-log the application. Neither problem, judging from participants’ comments, seemed to affect their learning experience, and for only 7-9% warranted rating as having an impact. In the experience of this researcher, the majority of lag class problems are in fact not network lag but recipient computer performance issues. The entire SIM and the various lecture rooms were monitored continually during the experiment and true (network) lag was not observed on the researcher’s computers, nor did the SIM performance statistics monitored during the period demonstrate any significant decrease in performance.
Approximately 5% of people from both groups complained that some of the pictures in the survey instrument were too small, thus potentially obscuring the details of the affected bridges displayed. This could have been a major constraint on a participant’s ability to answer the Bloom’s cognitive process of ‘understand’ questions more than the ‘remember’ questions, and therefore may have contributed to perceptions of difficulty in the Bloom’s ‘understand’ cognitive process portion of the post-quiz.
====5.3.2.3 The Participants: Differences Between Groups====
Whilst the 3D participants were presented with 3D models to aid learning, a number still seemed to be reading the slide show presentation. This effectively provided the 3D participants with 4 channels of learning; slide show pictures, slide show text, audio and models, whereas the 2D participants only had 3 of these channels.
There were 24 slides, 20 of which were learning slides, provided within a 20 minute lecture session for both groups. This meant a participant had approximately one minute per slide in which they were presented with something new. There were 11 3D models of 4 bridge types, therefore a new model was presented approximately every 2 minutes. Combining the models with the slides in the same time frame as the 2D participants may have disadvantaged the 3D participants.
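The pacing figures above can be verified with back-of-envelope arithmetic:

```python
# Back-of-envelope check of the lecture pacing figures quoted above.
total_slides = 24
learning_slides = 20
lecture_minutes = 20
models = 11

minutes_per_slide = lecture_minutes / learning_slides  # one minute per slide
minutes_per_model = lecture_minutes / models           # ~1.8 min per model

print(minutes_per_slide)             # 1.0
print(round(minutes_per_model, 1))   # 1.8
```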
The information content that was delivered to both groups was the same: no more or less technical, and providing nothing new, with the exception of the 3D models for the 3D group. Yet from the 3D group’s comments some participants seemed to want more information or simpler explanations. Within the 2D group many commented that it was easy to follow, not too technical and easy to comprehend – none commented that the material was complex. Possibly this difference is not that they needed more information but rather that, with 4 information channels, there was too much information provided in the time allocated for the 3D group. Alternatively the difference might reflect a case of ‘not knowing what you don’t know’ in the 2D group, while the addition of accurately constructed 3D models raised additional questions in the minds of the participants, or improved their general level of attentiveness.
The 3D group found the addition of 3D models to be a useful learning tool. From their comments it seemed that the 3D models of the bridges were perceived to have helped them understand the subject matter better than they perceived they would have with a lecture without the models. (Note, however, that in this case the perception is not supported by the test results). Many participants perceived that the 3D models also made the entire lecture experience more engaging than whatever assumed alternative against which they were measuring the experience.
The focus of the 3D participants was more strongly inside the world rather than their outside world. Furthermore, the extent to which their focus inside the world provided distraction brought about more emotional responses than the distractions noted by the 2D participants. The former tended to use repetition, descriptive adjectives and emphatic declamations concerning distractions, while the latter tended to merely note, or comment favourably about, the ability to be distracted. This seems to suggest that the 3D participants experienced a greater feeling of presence, and possibly immersion, in their virtual world learning experience.
To appreciate these comments, the reader is referred to the literature review where the difference between immersion and presence is discussed (see page 39). Immersion, or ‘system immersion’, is an objective measure: the extent to which a person becomes removed from their outside world to operate within the virtual world space. Presence, by contrast, is a subjective measure: the extent to which a person feels connected inside the virtual world, the feeling of ‘being there’, and their ‘willingness to suspend disbelief’ that they are a part of, and inside, the virtual world.
In the classification model presented by Benford (see Figure 9. Shared Space Technology According to Artificiality and Transportation), virtual reality environments are placed on a scale of artificiality and transportation. Transportation is the degree to which a participant becomes removed from their local space to operate in a remote space, which in Benford’s model is based purely upon the physical aspects of the virtual environment.
In this study the strong difference in the emotion and terms consistently used by participants in the 2D versus 3D lectures seemed to suggest that, given the same virtual reality technology (desktop CVE), a greater transportation occurred for the 3D participants. The 3D participants became removed from their local world distractions and were transported into the virtual remote world. This in turn led to a higher degree of presence within the virtual environment. The 2D comments on distraction compare with the results obtained by Martinez, Martinez, & Warkentin (2007) reviewed in Chapter Two Literature Review. They found that when participants were presented with a 2D lecture in world, participants reported distractions or a ‘disconnect’ from the lecture in world (see p. 86).
The degree of presence in the environment is often linked with desktop virtual worlds based around social interaction. As discussed in the literature review, Schroeder defines presence in terms of presence, copresence and connected presence (see Figure 10), which can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. As discussed in the literature review, for a social virtual world the level of presence is greater than for a game virtual world due to the social connective aspects that occur within the virtual world. Heeter also holds that the presence of an individual is increased when social relationships are formed within the environment. In this study, however, both groups were given the same social interactive aspects, yet it seems that the introduction of 3D models produced a higher level of presence for the 3D participant. The 3D participants clearly displayed more ‘ownership’ over their learning experience than the 2D group.
Of interest, this higher level of engagement by the 3D group carried over to the volume of survey responses. The 3D group provided more descriptive and richer comments than the 2D group. Rather than the short dot points often used by 2D participants, the 3D participants tended to use sentences in their open comments. The researcher was left with the subjective impression that the 3D participants, as a group, were motivated to greater detail and consideration in their comments than was typical of the 2D group. Although not specifically measured, it is possible that the 3D group were still engaged with the experience even after they had left the lecture environment.
A further noticeable difference between the two groups was their relative concept of time. The 2D group made more comments that the slide show was a bit slow, whereas the 3D group made more comments that the lecture was too fast. (Note the actual timing and content were identical). This differing perception of time is most likely due to a combination of the extra channel of information delivered to the 3D participants (the 3D models), which had to be absorbed in the same time span as the 2D participants had, and the higher level of engagement the 3D participants expressed about their learning experience. One cannot rule out the effects of a possible unmeasured elevation of participant stress from the more “intense” learning experience vectored on the addition of the extra information channel.
==5.4 Discussion of Results==
This research sought to find the difference in learning outcomes of participants that were presented with two different forms of delivery methods; a 2D slide show and the same 2D slide show augmented with 3D models and simulations.
For the quantitative analysis the level of learning outcomes was the difference in the measure of achievement scores between the 2D group and 3D group.
Did they learn more after being presented with a 2D slide show or a 3D simulation model? From the results of both groups there was a slight, not statistically significant, lean towards the 3D group on the total post-quiz scores. When analysed within each of Bloom’s cognitive processes of ‘remember’ and ‘understand’, the 3D group performed slightly better than the 2D group (most notably at the upper score ranges) in the ‘remember’ dimension, but there was no appreciable difference in the ‘understand’ dimension. The subjective interpretation might be that, with respect to the ‘remember’ outcome, the 3D approach may assist ‘stronger’ students to do better than they would otherwise do under the 2D approach, but that there was little impact on the ‘average’ student. The study measured only the ‘instantaneous’ ‘remember’ outcome, not the ‘remember’ outcome over an extended period, which might reveal greater differences.
Regardless of any anecdotal differences that may have been found, and the foregoing comments, the statistical analysis of the post-quiz scores across both groups revealed no statistically significant difference between the two groups’ learning outcomes within the confines of this experimental model. Thus the hypotheses defined for the quantitative analyses of this experiment remain unconfirmed.
Learning outcomes for a student are traditionally measured by the student’s achievement scores. Although an important measure, this provides no insight into the learning experience of the student: a high learning outcome measured quantitatively says nothing about success from a qualitative perspective. Quantitative methods focus on outcomes; qualitative methods focus upon the journey that leads the student to their end results.
While both the 2D and 3D groups were strongly positive about the learning experience, the qualitative analysis of both groups’ open comments revealed noticeable differences between the two groups’ journeys to their end results. The 3D group tended towards greater ‘ownership’ of their learning experience; while the 2D group tended to merely observe (in some cases regarding this as a benefit), with the opportunity for distraction, the 3D group almost universally expressed resentment, or even anger, about the same distractions.
The experimental constraint of ‘same time’ may have adversely impacted the 3D group’s scored outcome due to the delivery of an additional information channel over the same time frame – even though at least 2 of the channels were effectively redundant. As the two groups performed the same, and if anything the 3D group did slightly better, such a conclusion is by no means certain. The effect may rather have been to induce greater involvement by raising the stress factor for the 3D group, forcing greater participation in order to ‘keep up’ with the information flow.
The presence of the 3D models was widely perceived by the participants to enhance their understanding of the subject matter – although the scoring suggests that they assisted with remembering rather than understanding.
The literature review of previous research found that virtual world learning does take longer than traditional methods (Arreguin, 2007; Joseph, 2007). In this lecture, both groups were given 20 minutes for a post-quiz of 20 questions. Although the 2D participants’ comments did not indicate a problem with the time allocated to the lecture, the post-quiz results, particularly for Bloom’s ‘understand’, suggest that both groups may have needed more time in which to understand the material – particularly the 3D group, who were presented with an extra, interactively explorable channel of information from which to learn.
Of the Likert scale questions, 28 and 29 showed the most variation across the participants. These questions were specific to a participant’s learning experience. Question 28 asked if they found the learning experience better than their usual methods of learning; the vast majority of both groups agreed.
When asked in the Likert scale whether the information provided was enough to understand the topic, the 2D group was slightly more satisfied than the 3D group. The open questions shed some light on this issue, with more 3D group participants expressing a desire for more time to assimilate what was provided and more opportunity for self-driven information collection, questioning and investigation – rather than merely more information per se. This difference might also reflect the greater level of participation, immersion, presence or transportation evidenced in the 3D group.
==5.5 Conclusion==
In answering the research question (How effective is it to learn in a virtual world using a traditional 2D slide show method compared to that of a 3D interactive simulation?), the conclusions from this research are clear, and not necessarily as expected by the researcher at the commencement of the study:
#Transportation of a 2D real world lecture presentation into a virtual world situation is an acceptable use of the virtual world technology producing no statistically different outcome for Bloom’s ‘remember’ and ‘understand’ and combined cognitive processes at the mean, although there are some indicators that the ‘remember’ outcome might be enhanced at the upper and lower deciles of participant ability through augmentation of the 2D presentation with 3D representation and simulation.
#Adoption of 3D visual aids is not a pre-requisite for successful learning in a virtual space.
#The presence of 3D visual aids assisted participants’ perceptions of enjoyment, engagement, presence, immersion and/or transportation, and may therefore have a longer term effect on participation rates where participation in learning is purely voluntary.
Projecting these conclusions into a practical teaching scenario, where outcomes are the same and only instantaneous outcome measures are considered (as the researcher did not examine long term outcomes), and after taking account of the input costs of material preparation, it is clearly more cost effective to use the 2D presentation strategy for delivering virtual world courses. This conclusion holds where cost is measured in terms of the time required for input preparation regardless of sourcing (where the 3D models are acquired at no cost in input hours or money, the observation is voided), and outcomes are measured in terms of test scores taken within a short period of the learning.
Where the outcome measure includes participant perception of the experience, the 3D augmented learning approach is indicated, but in this scenario, grading the relative ‘worth’ of the greater experiential outcome is more difficult and it is less clear how it can be factored absolutely into a cost benefit analysis.
==5.6 Opportunities for Further Research==
Experimental research, as the name suggests, applies scientific methods and analysis to gain new insights, so that other researchers can pick up from the experiment to reproduce, reform and critique it. In this section the researcher proposes some opportunities for further research based upon the analysis of the results of this research.
===5.6.1 Improving Instrument Reliability===
One limitation that is difficult to avoid was found in the analysis of the instrument reliability using formal (statistical) instrument reliability testing. Essentially in this experiment there were too few questions within each of the two Bloom’s cognitive process test sets to provide a conclusive reliability measure of the instrument. Increasing the number of questions within each group would certainly provide more data points in which to measure achievement results, and as a consequence of how the reliability measure algorithm works, would improve instrument reliability. The first obvious problem faced with the pre-quiz and post-quiz design for this type of experiment is that, as the number of test questions (data points) is increased, there is a point at which the testing might materially affect the training experience and therefore the outcomes, as the participants would eventually start learning from the quiz questions.
If the number of questions were increased, the range of information presented to the participant would also have to increase. Increasing the range of information provided would require additional time to be allocated to the lecture, and possibly to each topic therein. There is a point at which the length of time required to complete the lecture and the combined quiz / survey would affect the quality of the results, as the voluntary participants might judge the exercise was taking too much time and rush the final testing / survey stages.
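The link between item count and reliability described above can be made concrete. A common reliability statistic for multi-item quiz instruments is Cronbach’s alpha (the thesis does not name its specific measure here, so this is an illustrative assumption), which generally rises as more correlated items are added. A minimal sketch with invented data, not the study’s:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each question's scores
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented data: 30 respondents, 4 binary quiz items driven by a common 'ability'.
rng = np.random.default_rng(0)
ability = rng.normal(size=30)
items = (ability[:, None] + rng.normal(size=(30, 4)) > 0).astype(float)

print(f"alpha (4 items): {cronbach_alpha(items):.3f}")
```

With only 4 items per cognitive process, a few inconsistent responses move alpha substantially, which is the instability the section describes.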
===5.6.2 Course versus Lecture===
The experiment focussed on a single lecture. Measuring the affordances over a sequence of lectures using a similar experimental model would provide additional depth of analysis and would neutralise any initial ‘wow’ factor that might have influenced participation and attentiveness in this single event based experiment. It is possible that differences in outcomes would be more apparent between the two groups if a course were involved rather than a single lecture. There are other factors that might influence such an experiment design – such as motivation for attending the course in the first place.
===5.6.3 Introducing a Real and Robot Presenter to the Experience===
The 3D group displayed a higher level of presence in this research study. The contributing factor in this observed difference between the two groups was, prima facie, the 3D models. The opportunity for further research lies in introducing a presenter (even an automated robot presenter) into the lecture experience, to see whether the increased level of presence observed in the 3D group would occur for both groups given a live or virtually-live lecturer. As presence is generally shown to be increased by relationships with other people within a virtual world, the introduction of a lecturer may add further insight as to why the 3D group displayed a higher level of presence given they only had the addition of 3D models.
===5.6.4 Testing Other Bloom’s Cognitive Processes===
The 3D group seemed to believe that the models contributed to their understanding of the subject matter. Testing higher levels of Bloom’s cognitive processes, such as Apply, Analyse, Evaluate and Create, may reveal differences between the two groups at those higher levels.
===5.6.5 Outcome Measurement Over Time===
In this experiment the post-quiz was given directly after the lecture. Re-testing the participants over a number of periods would assess which group retained the information better for longer, and the extent to which the two approaches impacted understanding outcomes over time. The experiment would probably require a vastly greater number of initial participants so that each time-lagged group could be tested once at a different interval, rather than re-tested, so that the testing itself did not colour the results. The researcher suspects that the greater level of post-lecture engagement demonstrated by 3D participants might result in both slower degradation of the ‘remember’ outcome and a post-lecture improvement in the ‘understand’ outcome over time.
===5.6.6 Comparison to Real-World Training===
Perhaps the most obvious inquiry that presents itself for further research is to include another experimental group. As the virtual world 2D lecture was effectively a real world lecture delivered in a virtual world, the addition of a real world participant group operating under the same constraints as the virtual world groups would provide an interesting control reference for virtual versus real world comparison of outcomes. Providing the 2D presentation to real life participants may provide further insight into the differences of the virtual learning experience, in addition to providing a control group based around more traditional learning methods.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 5: Discussion & Conclusion=
==5.1 Introduction==
This chapter provides the analysis of the results presented in the previous chapter along with a discussion of these results and opportunities for further research.
In analysis of the results the researcher has applied both quantitative and qualitative methods in order to answer the research question: How effective is it to learn in a virtual world using a traditional 2D slide show method compared to that of a 3D interactive simulation?
Quantitative methods were applied to participants’ achievement scores for the pre- and post-quiz and to the Likert scale results. Qualitative methods were used on participants’ responses to the post-survey open questions.
The discussion of results applied triangulation, combining both the quantitative and qualitative results in order to better understand the 2D and 3D groups’ learning experiences and any differences observed between the groups.
This chapter concludes with a discussion on the opportunities for further research.
==5.2 Quantitative Analysis==
===5.2.1 The Results of the Hypothesis===
The aim of this study was to determine if two lectures differing only in the presence or absence of 3D models (and therefore employing either 2D or 3D learning delivery) in an online 3D virtual world would produce different learning outcomes for Bloom’s cognitive processes of ‘remember’ or ‘understand’. The following hypotheses were formed:
*(H<sub>1</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
*(H<sub>2</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Measured statistically, neither of the above hypotheses was sustained by the scored (quiz) testing results, as there was no significant statistical difference between the results of the two groups. The researcher applied statistical significance testing as the foundation for rejection of the null hypothesis form of the above hypotheses (i.e. that, in each case, the process will result in NO significant difference) based upon a statistically measurable difference. If no measurable difference is found between the samples, the primary hypotheses remain unconfirmed. An unconfirmed hypothesis does not mean the hypothesis is false; rather, it means it is capable of disproof and thus remains unconfirmed (Karl Popper’s principles of falsifiability).
As the researcher was not able to refute the null hypothesis on the basis of a raw statistical comparison of the test scores, the researcher turned to the real data results to see if there was an actual (although possibly not significant) difference between the results of the two groups, or any clearly emerging or suggested trends that might qualify the implications of the raw statistical comparison.
===5.2.2 The Results of the Pre-Quiz===
====5.2.2.1 Pre-Quiz Total Scores====
Analysis of the results in the previous chapter for the total pre-quiz scores (i.e. both cognitive processes combined) between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 51% and 55% respectively (with a pass mark of 4 out of 8); the 3D group’s pass rate was therefore 4 percentage points higher.
*Average scores (mean) for the 2D and 3D groups were 3.69 and 3.68 respectively. Both groups’ average scores were effectively the same.
*Median scores for the 2D and 3D groups were both the same with a value of 4.
*Mode for the 2D group was lower than for the 3D group, 3 and 4 respectively, demonstrating that more 2D participants scored a 3 whereas more 3D participants scored a 4. A score of 3 was obtained by 31% of the 2D group and 23% of the 3D group, and a score of 4 by 20% and 23% respectively.
*The range of scores for the 2D group was less than the 3D group, 1-6 and 0-7 respectively.
*Standard deviation for the 2D group was less than for the 3D group, 1.372 and 1.479 respectively; the 2D group’s total scores were therefore clustered more closely around the mean (average score) than the 3D group’s.
*Skewness was positive for the 2D group and negative for the 3D group, 0.007 and -0.188 respectively, demonstrating that the 3D group’s scores were slightly higher than the 2D scores. This skewness difference is due to the mode difference between the groups, as both the median and average scores were equal.
*Kurtosis was negative (platykurtic) for both groups. Platykurtic distributions are flatter at the top of the distribution curve and less peaked around the average score (mean). The slight difference in kurtosis between the two groups accounts for the probability density value being lower in the Gaussian distribution graph in Figure 62 (Results: Pre-Quiz Totals - Histogram & Bell Curve).
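The descriptive statistics listed above (mean, median, mode, standard deviation, skewness, kurtosis) can be reproduced for any score vector with standard tools. A sketch using scipy with hypothetical scores, not the study’s data; note scipy reports Fisher (excess) kurtosis, under which negative values indicate a platykurtic distribution, matching the convention used here:

```python
import numpy as np
from scipy import stats

# Hypothetical pre-quiz scores out of 8 (illustrative only, not the study's data).
scores = np.array([1, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 7])

print("mean     :", round(scores.mean(), 3))
print("median   :", np.median(scores))
print("mode     :", stats.mode(scores, keepdims=False).mode)
print("std dev  :", round(scores.std(ddof=1), 3))      # sample standard deviation
print("skewness :", round(stats.skew(scores), 3))      # 0 = symmetric distribution
print("kurtosis :", round(stats.kurtosis(scores), 3))  # Fisher: <0 platykurtic, >0 leptokurtic
```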
'''Summary & Interpretation: Pre-Quiz Total Scores'''
There was a 4 percentage point higher pass rate for the 3D group, and the mode value of the 3D group’s total pre-quiz scores was higher than the 2D group’s. The pass rate was higher because of the greater mode value obtained by the 3D group. The 3D group obtained a greater range of scores than the 2D group, giving the 2D group a tighter (smaller) distribution of scores around the mean.
Given the distribution of scores between the two groups, the 2D group had a higher probability of scoring around the mean than the 3D group (28% and 26% respectively). Thus, although the 3D group obtained a higher pass rate and mode value, a participant in the 2D group was 2% more likely to score a 4 than a participant in the 3D group. This small percentage difference can be seen in the inverse normal distribution graph in Figure 61, where in the lower and higher quartiles the 2D group varied away from the 3D group: in the lower quartile, participants in the 2D group scored higher; in the higher quartile, they scored lower. This slight shift of the 2D curve away from the 3D curve toward the mean demonstrates that the 2D group was more likely to obtain the mean value than the 3D group.
Although there was a difference in the 2D and 3D group pre-quiz scores, the percentage difference was, in the opinion of this researcher, effectively immaterial, showing that both groups started with the same level of knowledge on the topic ‘The Physics of Bridges’ prior to the lecture.
The result of question 21 in the Likert scale survey is consistent with the above analysis. When asked to rate their level of knowledge of the topic ‘prior’ to the lecture, the combined low plus medium responses for the 2D and 3D participants were 98% and 96% respectively, and the responses that their knowledge was high were 2% and 4% respectively. This 2% difference for both responses is comparable to the real results of the data analysis above; the difference in the participant groups’ subjective assessment matches that shown by the tested assessment.
====5.2.2.2 Pre-Quiz Remember and Understand Scores====
In the previous chapter we found that when a significance test was performed independently on Bloom’s cognitive processes of ‘remember’ and ‘understand’ for the pre-quiz, a significant difference was found between the two groups. The 2D group scored significantly higher than the 3D group for Bloom’s cognitive process of ‘remember’ (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05), and the 3D group scored significantly higher than the 2D group for Bloom’s cognitive process of ‘understand’ (t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
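The one-tailed p-values quoted above follow directly from the reported t statistics and degrees of freedom via the t distribution’s survival function, so they can be checked independently; a quick sketch with scipy:

```python
from scipy import stats

df = 109  # degrees of freedom reported for both pre-quiz comparisons

# 'remember': t = 1.665 (2D higher); 'understand': t = -3.03167 (3D higher).
p_remember = stats.t.sf(1.665, df)      # one-tailed: P(T > 1.665)
p_understand = stats.t.sf(3.03167, df)  # one-tailed, using |t| for the negative statistic

print(f"remember:   one-tailed p = {p_remember:.4f}")   # thesis reports 0.0494
print(f"understand: one-tailed p = {p_understand:.5f}") # thesis reports 0.00138
```

Small discrepancies at the last decimal place may reflect rounding of the reported t statistics.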
The pass rates for Bloom’s ‘remember’ cognitive process for the 2D and 3D groups were 80% and 66% respectively. The pass rates for Bloom’s ‘understand’ cognitive process for the 2D and 3D groups were 35% and 52% respectively. The average scores for the 2D and 3D groups for Bloom’s ‘remember’ were 2.44 and 2.071, and for ‘understand’ 1.25 and 1.60, respectively. The standard deviations for the 2D and 3D groups for Bloom’s ‘remember’ were 1.032 and 0.775, and for Bloom’s ‘understand’ 1.263 and 0.867, respectively.
The scores for the Bloom’s splits at the pre-quiz stage are of passing interest in this experiment (independent of the post-quiz results) and the significant differences found for these figures were not especially surprising.
This experiment was not designed to measure and compare pre versus post learning outcomes of the participants. Rather, it was designed to find differences between the 2D and 3D groups comparative learning outcomes (i.e. the post-quiz results). In other words, the research was not trying to measure ‘by how much’ learning or understanding improves, but rather the relative difference in the final results between the 2D and 3D groups.
The pre-quiz was given to obtain an indicator of the general knowledge of the material that was to be delivered so that relative differences in outcomes could be normalised against the initial positions.
With the total number of pre-quiz questions being 8, of which each of the Bloom’s cognitive processes was represented by only 4 questions, there were not enough questions in each group to reliably test the true levels of Bloom’s cognitive processes of ‘remember’ and ‘understand’ prior to the lecture. With so few data points for the individual processes, small variations in responses produce large variations in final scores. Hence the 2D/3D group variations were not especially surprising.
The problem for the research design was to avoid impacting the outcomes with the measurement instrument itself. The post-quiz was taken within approximately 30 minutes of the pre-quiz, with only a single lecture delivered between those two measurement points. Providing more than 8 questions in the pre-quiz for a single 20 minute lecture would have increased the risk that the participants learnt from the pre-quiz questions themselves rather than from the lecture.
Furthermore, the concept of ‘remember’ and ‘understand’ for Bloom’s cognitive processes prior to instruction does not especially make sense in the context of the experiment. As discussed in Chapter 3 (instrument design), the development of the questions within the instrument was based upon the lecture. ‘Remember’ questions were extracted from the instructional content of the lecture whereas the ‘understand’ questions were derived from material not taught in the lecture. The pre-quiz questions were also specifically targeted at the four bridge types covered in the lecture to calibrate the extent of pre-existing content knowledge.
A participant being tested at each of these levels prior to instruction (where no certainty of prior topic learning experience can be established) can only be measured with respect to their pre-existing general knowledge of the topic, which may reflect either memory or understanding. The extent to which this analysis grouped the pre-quiz questions into ‘remember’ or ‘understand’ reflects only the researcher’s perfect knowledge of the lecture content – whether the topic of the question was subsequently directly taught in the lecture or not – not whether the participant was actually remembering or understanding at the pre-quiz stage.
The extent to which the split at the pre-quiz stage matters to the discussion is that if a participant already had an indicative level of ‘understanding’ prior to the lecture, that ‘understanding’ should improve when assessed after the lecture. If one group, for example, starts with a level of 60% and ends with 61%, this is possibly a worse outcome than the other group starting with 45% and ending with 58% (although there is also some discussion that could qualify even that conclusion).
===5.2.3 The Results of the Post-Quiz===
====5.2.3.1 Post-Quiz Total Scores====
An analysis of the results in the previous chapter for the total (i.e. combined Bloom’s) post-quiz scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 67% and 77% respectively (with a pass mark of 10 out of 20); the 3D group’s pass rate was therefore 10 percentage points higher.
*Average scores for the 2D and 3D groups were 10.98 and 11.36 respectively. A 3D participant scored on average 0.38 higher than a 2D participant.
*Median scores for the 2D and 3D groups were 11 and 12 respectively; the 3D participants scored higher at the second quartile than the 2D participants.
*Mode for the 2D group was lower than for the 3D group, 11 and 12 respectively, demonstrating that more 2D participants scored 11 and more 3D participants scored 12. A score of 11 was obtained by 20% of the 2D group and 21% of the 3D group, and a score of 12 by 11% and 29% respectively.
*The range of scores for the 2D group was wider than for the 3D group, 5-17 and 6-17 respectively.
*Standard deviation for the 2D group was slightly more than for the 3D group, 2.468 and 2.347 respectively; the 3D group’s total scores were therefore slightly closer to the mean (average score) than the 2D group’s.
*Skewness was positive for the 2D group and negative for the 3D group, 0.052 and -0.229 respectively. This demonstrates that the 3D groups’ scores were slightly higher than the 2D scores. This skewness difference is due to the mean, median and mode differences between the two groups’ scores.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.2 and 0.3 respectively. As mentioned above platykurtic distributions are flatter at the top of a distribution curve whereas leptokurtic distributions are higher and peaked around the mean score. The differences in value of kurtosis between the two groups account for the probability density value being higher for the 3D group in the Gaussian distribution graph in Figure 64.
'''Summary & Interpretation: Post-Quiz Total Scores'''
The above analysis finds that the 3D participants scored better overall than the 2D participants in the post-quiz. Although this difference was not statistically significant in the t-test results (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05), the real results indicate that there was a slight difference between the two groups’ results. Analysing the Gaussian distribution curve (Figure 64) shows that the 2D and 3D participants had a 15% and 16% likelihood respectively of scoring a 12 in their total post-quiz score. In general the overall results showed that the 3D group performed better by 1%; this can also be seen on the inverse distribution graph (Figure 63), where the two curves run almost parallel to one another with the 3D group performing approximately 1% better in their overall test results.
For question 22 in the Likert scale, when asked to rate their level of knowledge of the topic ‘after’ the lecture, the 2D and 3D participants’ low responses were 22% and 23% respectively and medium responses 73% and 74% respectively. At the medium level the self-assessment was consistent with the 1% difference in test results. At the low level the 3D group seemed more conservative in their responses, perceiving that their knowledge was less than the 2D group’s, although the real results showed the contrary. In either case a 1% difference is within the margin of error.
====5.2.3.2 Post-Quiz Remember Scores====
Analysis of the results in the previous chapter for the post-quiz ‘remember’ scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 85% and 93% respectively (with a pass mark of 5 out of 10); the 3D group’s pass rate was therefore 8 percentage points higher.
*Average scores for the 2D and 3D groups were 7 and 7.32 respectively. The 3D participants scored on average 0.32 higher than the 2D participants.
*Median and mode scores were 8 for both groups.
*The range of scores for both groups was the same, 3-8.
*Standard deviation for the 2D group was higher than the 3D group 1.8 and 1.6 respectively, with a 0.2 difference between the groups.
*Skewness was negative for both groups, with 2D and 3D skews of -0.6 and -0.9 respectively. As both values were reasonably close to 0, with a 0.3 difference between the two groups, the distributions of results for both groups were close to symmetrical.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.7 and 0.7 respectively.
'''Summary & Interpretation: Post-Quiz Remember Scores'''
The post-quiz scores mask a complexity that requires further consideration. Although the 2D group was normally distributed, the 3D group failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05). In order to compare the results of the 2D and 3D groups meaningfully, the researcher needed to investigate why the 3D group failed the normality test and what, if anything, this implies for the interpretation of the apparently “better” 3D pass rates.
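The D’Agostino-Pearson K2 test referred to above combines sample skewness and kurtosis into a single omnibus statistic and is available as scipy’s normaltest. A sketch with invented samples (not the study’s score data) showing how a clearly skewed sample is rejected:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented samples of a comparable size (n = 56), not the actual participant scores.
normal_like = rng.normal(loc=7, scale=1.6, size=56)
skewed = rng.exponential(scale=1.0, size=56)

# D'Agostino-Pearson combines sample skewness and kurtosis into one K^2 statistic;
# a small p-value rejects the hypothesis that the sample came from a normal distribution.
for name, sample in [("normal-like", normal_like), ("skewed", skewed)]:
    k2, p = stats.normaltest(sample)
    print(f"{name:12s} K2 = {k2:6.2f}  p = {p:.5f}")
```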
Analysis of the histogram and density traces in Figure 65 shows that both the 2D and 3D histograms display a bimodal distribution, with two peaks at 3 and 8. As can be seen on the density traces graph, the variance was greater for the 2D scores between 3 and 8, causing the curve to flatten prior to its peak.
Although the statistical analysis determined that the difference between the pass rates and means (by which the 3D group was higher than the 2D group) was not significant when taken as a whole, there is a clear visual difference between the graphs that deserves explanation. When considered within specific score ranks, the outcome slightly favours the 3D group because:
#2D group participants were 8% more likely to score 4 or below,
#3D group participants were 6% more likely to score 8 or above, and
#3D group participants were 2% more likely to score 9 or above.
This analysis can be easily seen in the frequency table below (Table 13. Frequency Table: Post-Quiz Remember).
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|colspan="4" align="center" |'''Post-Quiz Remember'''
|-
|align=center|'''Score'''
|align=center bgcolor="#DDADAF" |'''2D'''
'''(Cumulative)'''
|align=center bgcolor="lightblue" |'''3D'''
'''(Cumulative)'''
|align=center bgcolor="lightgrey"|'''Difference'''
'''3D vs. 2D'''
|-
|align=right |0
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |1
|align=right | 0%
|align=right | 0%
|align=right | 0%
|-
|align=right |2
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |3
|align=right | 4%
|align=right | 4%
|align=right | 0%
|-
|align=right |4
|align=right | 15%
|align=right | 7%
|align=right | -8%
|- bgcolor="lightgrey"
|align=right |5
|align=right | 25%
|align=right | 13%
|align=right | -12%
|-
|align=right |6
|align=right | 33%
|align=right | 27%
|align=right | -6%
|- bgcolor="lightgrey"
|align=right |7
|align=right | 47%
|align=right | 41%
|align=right | -6%
|-
|align=right |8
|align=right | 78%
|align=right | 80%
|align=right | 2%
|- bgcolor="lightgrey"
|align=right |9
|align=right | 98%
|align=right | 96%
|align=right | -2%
|-
|align=right |10
|align=right | 100%
|align=right | 100%
|align=right | 0
|}
<p align="center" >'''''Table 13. Frequency Table: Post-Quiz Remember (Rounded)'''''</p>
The frequency table shows a cumulative analysis of each group at each score. As can be seen in the table, the 3D cumulative percentages were in general lower than the 2D percentages at each score below 8. The implication is therefore that the relative performance of 3D versus 2D ‘remember’ outcomes is slightly better at the higher rankings (80% and above), but slightly worse at the lower pass mark scores.
While the difference in the means may not be statistically significant, the results suggest that the outcomes at particular bands are potentially significant. To put this into context; if the desired group learning outcome is to achieve a pass or better, both methods of delivery were similar, but if the desired outcome is to maximise the potential scores, the 3D delivery might be indicated.
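Cumulative tables such as Table 13 are straightforward to derive from raw per-score counts. A sketch using numpy with hypothetical counts (the study’s raw frequencies are not reproduced here); negative differences mean fewer 3D participants sit at or below that score, i.e. the 3D group is doing better in that band:

```python
import numpy as np

# Hypothetical per-score counts for scores 0..10 (illustrative; not the study's raw data).
counts_2d = np.array([0, 0, 0, 2, 6, 5, 4, 8, 17, 11, 2])
counts_3d = np.array([0, 0, 0, 2, 2, 3, 8, 8, 22, 9, 2])

def cumulative_pct(counts: np.ndarray) -> np.ndarray:
    """Cumulative percentage of participants at or below each score."""
    return 100.0 * counts.cumsum() / counts.sum()

diff = cumulative_pct(counts_3d) - cumulative_pct(counts_2d)
for score, d in enumerate(diff):
    # Negative values: fewer 3D participants at or below this score,
    # i.e. the 3D group's scores are shifted upward in that band.
    print(f"score {score:2d}: 3D - 2D cumulative = {d:+6.1f}%")
```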
In general, the overall performance of both groups was better than for the score obtained in Bloom’s cognitive process of ‘understand’ which we will discuss in the next section.
====5.2.3.3 Post-Quiz Understand Scores====
Analysis of the results in the previous chapter for the post-quiz ‘understand’ scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 35% and 36% respectively (with a pass mark of 5 out of 10); the 3D group’s pass rate was therefore 1 percentage point higher.
*Average scores for the 2D and 3D groups were 3.98 and 4.04 respectively; a 3D participant scored on average 0.06 higher than a 2D participant.
*The median and mode scores were 4 for both the 2D and 3D groups.
*The range of scores for the 2D group (0–8) was wider than that of the 3D group (1–8).
*Standard deviation for the 2D group was slightly higher than for the 3D group, 1.48 and 1.46 respectively. A 0.02 difference between the groups shows very little difference in spread.
*Skewness was positive for both groups: 0.068 for the 2D group and 0.332 for the 3D group. As both values were close to 0, with a difference of approximately 0.26 between the two groups, the distribution of the results for both groups was almost symmetrical.
*Kurtosis was positive (leptokurtic) for both groups: 0.558 for the 2D group and 0.010 for the 3D group, a difference of 0.55 between the two groups’ kurtosis values.
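The descriptive statistics listed above can be computed as follows. This is a minimal sketch using population (uncorrected) moments and an invented sample, since the thesis does not state which bias corrections its statistics package applied; the values produced here will therefore not exactly match the reported figures.

```python
import statistics

def describe(scores):
    """Descriptive statistics of the kind reported above:
    mean, median, mode, range, standard deviation, skewness
    and excess kurtosis (population formulas, no bias correction)."""
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population standard deviation
    # Standardised third and fourth central moments
    skew = sum((x - mean) ** 3 for x in scores) / (n * sd ** 3)
    excess_kurtosis = sum((x - mean) ** 4 for x in scores) / (n * sd ** 4) - 3
    return {
        "mean": mean,
        "median": statistics.median(scores),
        "mode": statistics.mode(scores),
        "range": (min(scores), max(scores)),
        "stdev": sd,
        "skewness": skew,
        "kurtosis": excess_kurtosis,
    }

# Hypothetical quiz scores, not the study data
summary = describe([1, 3, 4, 4, 4, 5, 5, 6, 7, 8])
```

A positive `kurtosis` value here corresponds to the "leptokurtic" label used above (more peaked than a normal distribution).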
'''Summary & Interpretation: Post-Quiz Understand Scores'''
From the above analysis both groups scored almost the same for Bloom’s post-quiz ‘understand’ results. This is clear from a study of the histogram and Gaussian distribution curve in Figure 66: both the 2D and 3D data points are almost identical.
Further, the frequency distribution comparison of the two groups confirms that the scored results at each rating band of the 2D and 3D groups exhibit no considerable difference.
Bloom’s cognitive process of ‘understand’ is a higher level cognitive process than ‘remember’. Given the pass results and the mean, median and mode scores, both groups scored ‘badly’ (35% – 36%) in Bloom’s cognitive process of ‘understand’. On the face of it, the results suggest that both groups did not show a ‘high’ level of understanding of the subject matter after training; however, it should be remembered that the mean, median and mode results are a reflection of the difficulty relationship between the questions testing understanding and the lecture itself. The decision was made during the design stage to include some ‘very high’ difficulty questions in the understanding question set to ensure a real test of the achieved level of understanding. Some additional light is shed on these results in the Likert scale and qualitative analysis that follows.
This research is primarily interested in the comparative difference of the 2 delivery methods, rather than the absolute scores, and for this purpose the results suggest that there is no significant or effective difference between the 2D and 3D group testing (quiz) results for the ‘understand’ cognitive process, within the confines of this experimental process.
===5.2.4 Likert Scale Analysis===
The above analysis of the quiz results showed that there was a positive result for Bloom’s cognitive process of ‘remember’, whereas for Bloom’s ‘understand’ there seemed to be fewer participants in both groups who understood the subject matter of ‘The Physics of Bridges’ to the same level that they remembered it. In order to understand this result we will turn to the Likert scales, where we asked the participants to assess the quality of the delivery method. Questions 23 and 24 specifically addressed these questions.
*Question 23 asked whether “the subject matter was clear and informative”. The 2D and 3D groups’ responses were positive 98% and 100% and neutral 2% and 0% respectively. With the exception of the 2% neutral response it would seem that the majority of people found the subject matter to be clear and informative. Of interest, the 2% neutral result was a single participant who actually performed better than the group’s average score for the post-quiz results in both cognitive processes of ‘remember’ and ‘understand’, with z-scores of 0.54 and 0.69 respectively. Given their actual results it seems that, within their group, this participant understood the material better than they remembered it.
*Question 24 asked whether the lecture was detailed enough to understand the subject matter. The 2D and 3D groups’ responses were positive 100% and 93% and neutral 0% and 7% respectively. Of interest were the neutral responses, which came from the 3D group. These were made up of 4 participants, all of whose post-quiz z-scores in both cognitive processes of ‘remember’ and ‘understand’ were below the group’s average, with the exception of one who scored better on their ‘understand’ post-quiz score than their ‘remember’ score.
From the above results of questions 23 and 24, the majority of participants perceived that the lecture material was clear, informative and detailed enough for them to understand the subject matter. The few in the 3D group who were only neutrally satisfied that the level of information detail was sufficient to understand the topic achieved post-quiz z-scores that were below average for the total group, so their self-assessment seemed to be correct.
Question 29 asked if the topic was appropriate to virtual world learning. This question was asked in order to gain an understanding of a participant’s view of the choice of topic delivered for instruction. The majority response for both groups was positive, with the 2D and 3D groups’ responses positive 84% and 79% respectively and neutral 13% and 18% respectively. Within the 2D and 3D groups the neutral scores accounted for 7 and 10 participants respectively. For the neutral participants in the 2D group the z-scores showed that 4 performed below average for the cognitive process of ‘remember’ and 2 for the cognitive process of ‘understand’. Within the 3D group the z-scores showed that 5 performed below average for the cognitive process of ‘remember’ and 7 for the cognitive process of ‘understand’. It seems from these results that, although the majority of the participants were positive about the choice of topic, a few were neutral about the appropriateness of the material to the environment, more so in the 3D group, in spite of the fact that the material was identical in both cases. Given their z-score results, the neutral responders in the 2D group still performed better for ‘understand’ than ‘remember’, while the neutral responders in the 3D group appeared to not ‘remember’ or ‘understand’ the topic well – suggesting their relative (to the group) self-assessment was consistent with their relative scored outcomes.
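The z-scores used throughout this discussion standardise each participant's quiz score against their own group's mean and standard deviation. A minimal sketch, using an invented score list rather than the study data:

```python
import statistics

def z_scores(scores):
    """Standardise each score relative to the group's mean and
    (population) standard deviation; positive means above average."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return [(x - mean) / sd for x in scores]

# Hypothetical group of post-quiz scores
group = [3, 4, 4, 5, 6, 8]
zs = z_scores(group)
# Participants with a positive z-score performed above the group average
above_average = [s for s, z in zip(group, zs) if z > 0]
```

Because z-scores are relative to each group, they allow the comparison made above between a participant's 'remember' and 'understand' performance even when the two question sets differed in difficulty.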
Question 28 asked participants whether the in-world learning method offered a better learning experience than their usual (real world) learning methods. The 2D and 3D groups’ responses were positive 74% and 73%, neutral 13% and 18%, and negative 3% and 3% respectively. Although the overall results were positive, there was more variance with respect to quiz scores in the responses to this question.
Question 26 asked participants if they experienced any technical difficulties. The majority of participants in both groups did not indicate that they had had any technical difficulties; the responses for the 2D and 3D groups were ‘No’ 91% and 93% and ‘Yes’ 9% and 7% respectively. For the participants that answered yes to this question the major problems were sound and picture loading delay (lag). All of these people commented that it was only for a short period and the problem was rectified quickly. Although only a small number of participants answered yes to this question, the open format questions showed slightly more experienced some technical issues (although apparently not perceived as sufficient to rank a “yes” in this question), which will be discussed in the next section.
This group of questions essentially assessed the participant’s perception of quality, appropriateness, purpose and “fit” to the medium of the experience. Necessarily the responses to these questions are likely to be coloured by the participant’s perception of the lecture delivery system experienced (i.e. 2D or 3D). Throughout this group of questions the responses were very strongly positive while the worst grade with a significant number of responders was neutral (excluding Q26). With the exception of the assessment of the clarity of the material, the Likert assessments slightly favoured the 2D delivery method.
The slight favouring of the 2D delivery could be either an absolute result, or a result coloured by raised expectations of one or other of the two delivery methods. We need to investigate, therefore, the qualitative analysis of the open questions to adequately interpret this slight bias in the results.
Question 26 was a check-question to allow explanation of the results in the other questions should those results have proven dramatically negative.
==5.3 Qualitative Analysis==
From the qualitative analysis of the post-survey responses many aspects emerged about the learning experience of participants, as well as the differences between the two groups in this study.
===5.3.1 Thematic Analysis Results===
As discussed in the previous chapter, the results of the post-survey open questions were grouped into themes and coded for qualitative analysis in order to provide further insight into the achievement results and the learning experience of participants. Four themes were found on analysis of the data, as follows:
*Virtual World Learning
*Virtual Learning Campus
*Lecture Delivery
*Survey Instrument
In this section we provide a thematic analysis of these themes that emerged from the post-survey.
====5.3.1.1 Virtual World Learning====
This theme was specifically related to the use of the virtual world platform as a learning tool rather than to the delivery method of the presentation.
Convenience was the main factor mentioned by both groups. The ideas identified included: doing it from home, in my own time, and not having to travel in order to learn. These sorts of comments are not specific to virtual world learning technology, as today many educational courses cater for students via online courses. However, there was a sense of presence that the participants felt from “being there with other people”, and seeing others learn seemed to make the experience more enjoyable to them than traditional or alternative learning methods. Quite a few commented on how the experience felt “personal like they were really sitting in a lecture room taking the course”, and that the atmosphere was relaxed and soothing, providing less pressure than traditional classroom methods of learning. These comments are interesting, partly because the lecture mirrored a real-world lecture in that it could not be “paused” by a participant and ran for a fixed time per slide, and a fixed time in total, so to some extent it was more rigid in delivery format than a real-world lecture, in which the lecture might be paused while a question is asked and answered.
Another theme that emerged was that this medium offered a new way of learning that was ‘on demand’, rather than a planned course for which one would have to prepare in advance. Similar to searching the web to find out about a specific topic, participants felt that this medium offered them a way to learn new material when they wanted, and to experience this material rather than just read it on a webpage. The lectures ran on a continuous loop over the experimental period – so this perception is reasonable, in spite of the fact that the lectures were not actually ‘on demand’.
The technology seemed to offer a learning medium that could reach people who traditionally would not formally learn, or who had never before used the virtual world for learning. It seemed to inspire people to want to learn more and do more learning exercises in and out of Second Life. For many participants this was a new experience: they had never thought about using online virtual worlds as a learning platform, having only used the medium as a game rather than for taking a course. After experiencing this study many were inspired to seek out more learning in Second Life or even in real life.
The overall impression from all the participants was that the virtual world learning experience was fun and enjoyable. Very few negative comments were made about the experience, other than observations that the medium might have the potential to not be taken seriously, or might make cheating possible. The experience seemed to open people’s minds to the opportunities for virtual world technology to be used seriously rather than just as a gaming environment. One participant’s comment sums up the general impression of this technology being used as a learning tool:
<blockquote >
I'm still not convinced that virtual learning can replace learning in real world but now I think it might be possible.
</blockquote >
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.2 Virtual Learning Campus====
This theme included comments made about the virtual learning campus, the setup and operations of the entire virtual learning environment in which the experiment was conducted.
The majority of comments were that the participants found it to be ‘user friendly’ and ‘easy to use’. The layout of the different rooms seemed to provide a fun way for them to learn. Only 2 people commented on having a problem with the signage: when they got to the post-survey room they missed the board that told them how to take the post-quiz.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.3 Lecture Delivery====
This theme is where the majority of comments from participants were made. These comments related directly to a participant’s learning experience of the research project. The range of comments was coded into sub-categories: format, information content, learning, facets of 3D learning, instruction, focus, navigation and technology constraints.
=====5.3.1.3.1 Format=====
This theme included comments on the layout and format of the slide presentation. The comments from both groups were mostly positive. Participants could offer comments in the positive, negative or general sections of the survey. In total there were 11 and 24 comments clearly identified as positive, and 3 and 1 clearly identified as negative, in this theme from the 2D and 3D groups respectively.
The positive comments praised the layout of the slides and the way the information was presented. A few more negative comments came from the 2D group: one participant wished they had the ability to interact with the pictures on the screen, another wanted annotation on the images (similar to the interaction question), and one had problems with the colour differentiation of the tension and compression markings (tension and compression were shown in red and green respectively, suggesting either colour blindness or graphics card faults). Only one person from the 3D group made a negative comment in this area, identifying a desire for more pictures on the slides (the slides in the 2D and 3D lectures were identical).
While the largest proportion of the responses to the general comments question was provided by the 3D group, a common suggestion received from both groups concerning the format was that they wished the presentation could be paused or controlled, such as by forwarding or rewinding. As a proportion of each group that actually provided a comment at all, this suggestion was marginally more frequent among the 2D participants.
With respect to the 3D group’s comments about presentation speed, it seemed that although they had been presented with a model and voice over that mirrored the images of the slides and the text therein, they still desired the opportunity to read the slides to view the information. The time per slide and the slides themselves were identical in both the 2D and 3D lectures, and set to allow sufficient time for reading the slide – in fact the voice over effectively read the slide to the participant. In the 3D case the addition of the 3D models in the same time window meant that participants had an additional vector of information to absorb in the same amount of time as the 2D participants. The researcher’s impression from the comments in this respect is that in the 2D case the motivator was the desire to review and contemplate the information, while in the 3D case it was more to do with their ability to absorb multiple information vectors simultaneously.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.2 Information Content=====
This theme included comments to do with information content in the presentation. There were 56 comments from the 2D group and 33 from the 3D group.
For the most part people found the presentation very interesting and informative, but in this area the 2D group seemed to be more satisfied than the 3D group. Within the 3D group a number of people desired more information, or perceived the information as too technical to appreciate without additional enquiry or time – yet the information in both cases was identical.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.3 Learning=====
This theme included comments to do with people obtaining new information. Comments from both groups here were very positive. All participants that commented in this group stated they enjoyed the experience of learning and gaining the new knowledge. Most seemed to enjoy the topic and the new knowledge on bridges that they took away with them, and/or considered that the material was well thought out and presented. Some commented that they enjoyed the opportunity of obtaining new knowledge in the virtual world/game space and were inspired to seek additional in-world learning.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.4 Facets of 3D Learning=====
The comments in this category were specific to the 3D lecture with the use of models. The participants in the 3D group were universally positive about the use of 3D models. Many seemed to believe that having a model of the presentation assisted them in the understanding of the subject matter. (Note, however, that the test scores did not reflect a significant advantage from the 3D models with respect to understanding, although there were indications of an advantage in remembering).
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.5 Instruction=====
The comments in this category had to do with the method by which the new knowledge was transferred to the participant. In this area a small but significant number of participants in both groups commented that they missed having a real person to whom they could put questions to clarify the information, more so in the 3D group, which seemed to want to find out more information about the topic than was presented to them. (Note, as mentioned, the information was identical in both cases.)
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.6 Focus=====
The comments in this category had to do with observations affecting attention and the temporal learning experience of a participant.
This theme emerged through the general comments throughout the survey. There seemed to be two broad sub-groups of comments in the focus theme: the presence of distractions during the learning experience, and the participant’s perception of the available time per slide for learning. Although both groups experienced the same general learning conditions and real-world times, there seemed to be opposing perceptions of the significance of sources of distraction and perceptions of time across the two groups during the presentation. We will break this category into these two sub-themes (distractions and time) to better understand the focus aspect of the participant groups.
'''Distractions'''
The sources of distractions seemed to come from either the outside world or the inside world.
:'''Inside world distractions'''
:Only 3 comments were made by the 2D group about distractions from the inside world experience: distracting avatars, a participant’s outfit getting in the way of their view, and a participant distracted by their curiosity about the technology setup used to deliver and manage the lectures.
:In the 3D group, by contrast, quite a number of people complained about inside world distractions, particularly being annoyed with other avatars disrupting their learning. As a group, the 3D participants were comparatively emotional/animated (with respect to the 2D group) in their response to these distractions, and in a number of cases complained that the other people were not taking education as seriously as them.
'''Outside world distractions'''
:A small number of the 2D group complained about outside world distractions, or commented upon the advantages of staying in touch with the outside world: being able to answer the phone, using Yahoo messaging, doing things at their desk, and people in real life talking to them.
:Only one member of the 3D group, by contrast, commented upon outside world distractions.
'''Time'''
The main theme that emerged from the 2D group was that a small number of participants commented that the presentation was a bit slow, and/or that their attention wandered, and/or that they “zoned out” during some slides. Contrast this with the 3D group, who tended to say that the presentation was fast – a reasonable number even complained that it went too fast. The 3D group commented that the material kept them engaged and the presentation held their attention. In both cases the real-world times were identical – so the observations are directly related to perception, and in the light of other comments made, the implication is that there was a difference in perceived ‘engagement’ that arose from the single variable of the presence of the 3D objects.
The 2D participants who observed that occasionally they ‘zoned out’ during some of the slides also commented that the voice over was too smooth/calm. Nobody in the 3D group observed this problem; conversely, a number commented on how the voice over was exactly right for the presentation and kept their attention. Interestingly, the voiceover was identical in each case – but the presence of the 3D objects appearing around participants may have presented an additional level of stress that was effectively countered by the voice over.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.7 Navigation=====
Traditionally a significant problem in virtual-world training experiments, learning the appropriate method of avatar navigation has typically been compounded by the use of first-time virtual world participants unfamiliar with the control of their avatar. This researcher considered this a flaw in previous studies, one that distorted the results with a temporary experience that would be overcome with only a small amount of in-world experience. The participants in this study, therefore, were intentionally recruited from users already present in Second Life rather than brought into the virtual world specifically for the purpose of the experiment.
Consequently the negative comments on navigation were fewer than in previous studies, and not generally of the same fundamental ‘how do I operate my avatar?’ nature present in a number of the studies considered in the literature review. In any case the campus and lecture environment was specifically designed to minimise the likelihood of these types of problems, and required only minimal knowledge of avatar controls (sufficient for someone with about 30 minutes of experience – based on the packaged avatar training on the Second Life orientation islands).
The comments in this category had to do with how participants’ avatars viewed the presentation. These comments were complaints from the 2D and 3D participants about some viewing aspect of the presentation.
Three (3) of the 2D group complained that the chairs blocked their view of the presentation. It was obvious from this comment that these people lacked the knowledge to use mouse view, used third person view instead, and did not understand how to control the third person roaming camera effectively.
The 3D group’s complaints provided the most insight into how they viewed the presentation. A small, but significant, number of the participants complained that the 3D models of the bridges ‘got in the way’ of their reading of the slides (a function of navigation), or that they could not both read the slides and look at the models (a function of time). Although avatars were not seated once the 3D presentation began, and were free to wander around the space with slides projected onto the walls around the models, some users clearly did not realise that this additional freedom allowed them to position their avatar for clear slide viewing at any time. Further, it seemed that, although presented with a 3D model and a voice over that covered the entire slide content, a number of the 3D group still attempted to use the traditional method of viewing the slides whilst looking at the models.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.8 Technology Constraints=====
This category contained comments by participants about the technology constraints that they experienced during the lecture delivery. Although this question was also asked in the Likert questions provided in the previous section, where the 2D and 3D groups responded ‘Yes’ 9% and 7% respectively, more participants identified technical problems in their open comments.
From the 2D and 3D groups’ comments, 20% and 18% respectively identified at least one technology constraint. Not all of these participants had answered yes in the Likert question; a further 11% in both groups commented upon having technology related problems. The technical difficulties were due to sound and lag/object rezzing, the same problems given by the participants in the Likert questions.
As discussed in the literature review, this technology is streamed in real time, therefore ‘lag’ is a common risk in using this technology and will vary with network connection speed (real lag) and individual computer problems (false lag – but possibly the single most common culprit). No one, however, commented that the lag affected their ability to learn. In most cases where it was reported, the lag caused only a slight delay in the slide show, with comments noting that they experienced ‘some’ lag. As each slide, audio track and object was independently synched, lag problems could not accumulate across the slides, and any synching problems were corrected with the next slide (or in some cases half way through a slide).
The sound constraints were only temporary in all cases. This problem was due to drop outs of the presentation voice-over. It was picked up early in the testing phase, where occasionally the audio would stop and a re-log of the application was required in order to get the audio back. As this was picked up in testing, signs were placed around the lecture screens instructing the participant to re-log if they experienced audio dropouts. In all cases where participants complained about the audio dropping, they also noted that a re-log solved their problem quickly. The impact of an immediate re-log on the learning would be, at most, the loss of half the content of one slide. As all slides were summarised at points during the presentation, the participant was unlikely to completely miss the associated material.
====5.3.1.4 Survey Instrument====
This category included comments that related to the pre or post survey instrument.
Six participants across both groups commented that the pictures in the diagrams of the post-quiz were too small. From their comments they had trouble distinguishing some of the bridges in the pictures.
As the display size is based upon a person’s monitor size, people with small monitors may have had problems distinguishing the details in the pictures. The survey viewed correctly on a 17 inch monitor at 96 dpi, but anyone with a smaller monitor than this, or unusual resolution settings, may (possibly) have had problems.
This problem was not realised until quite a number of participants had already completed the research. It was therefore decided that any change in the picture size in the survey would only corrupt the experiment conditions and might bias the results, so no modification was made. All participants that undertook this research therefore operated under the same picture constraints in the survey.
On review of the results, of the 6 participants that complained, 3 were from the 2D group and 3 from the 3D group. Their post-quiz scores for ‘remember’ and ‘understand’ were 9, 7; 9, 4; 9, 4 for the 2D group and 8, 4; 8, 5; 8, 4 for the 3D group respectively. All of these participants passed in both of Bloom’s cognitive process categories. Their z-scores for ‘remember’ were all above their groups’ averages, but for ‘understand’ these participants scored at or below average.
There were 9 ‘remember’ questions and 8 ‘understand’ questions in the survey that required the participant to use pictures in answering the question. Bloom’s cognitive process of ‘understand’ would have been more affected by the picture constraints: the questions in the ‘understand’ cognitive process were substantially more difficult, with material that was not presented during the lecture, therefore the participant had to use the picture to recognise and assimilate information in order to answer the question.
The researcher notes that this problem may have contributed to some of the low score results, especially within Bloom’s cognitive process of ‘understand’. Although from the comments only 6 out of 111 people complained about this problem, there is no way to know how much of a problem it presented; from the lack of comments we can only assume that this was not a constraint for most participants – or, at least, not one they realised they were experiencing.
===5.3.2 Qualitative Analysis of Thematic Results===
====5.3.2.1 Introduction====
The survey comment questions were not compulsory, but less than 4% of responses reflected nonsense or non-responses, with an average of 100 words per person, and with 3D participants providing approximately 12% more comment volume than the 2D participants.
Interpreting the collected thematic responses was aided by the consistency of the emotion and approval expressed by participants, by the surprising number of instant messages sent directly to the researcher in thanks for the experience, and by the range of both supportive comments and recommendations provided in the open comments. To that end the researcher offers the following generalised collation of the qualitative opinions expressed by participants.
The general lack of negative observations reflects the same proportion in the underlying data. Three positive and three negative observations were requested, as well as open/general comments. Overwhelmingly, the positive question was populated while the negative question was generally underpopulated, or populated with comments like ‘I have none’. The most frequent negative comments were an expressed desire to control the delivery speed, a desire to acquire additional information in some way, or the opportunity for distraction. In some cases these were also identified as positives. The lack of colour in the negative comments was contrasted by the diversity of positive comments. Different participants chose to comment on different positive aspects of the experience, and an individual participant tended to concentrate comments within a theme.
To aid in interpretation of the analysis while avoiding the implication of hard statistical interpretation, where some degree of researcher subjectivity and ‘translation’ is involved, the researcher has used the following terms with some degree of overlap at the margins:
*Few – 5% or less of comments
*A number – 5% to 15% of comments
*A significant number – 15% to 25% of comments
*Many – More than 25% of comments
*A majority – More than 50% of comments
*Most – More than 60% of comments
Outside of these terms the researcher has provided clear absolute percentage counts where the numbers are at the extremes.
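The banding above can be sketched as a simple lookup. The function below is illustrative only: the thesis notes that the bands overlap at the margins, so the boundary values chosen here are one possible reading.

```python
def frequency_term(pct):
    """Map a comment percentage to the researcher's descriptive term.

    Illustrative only: the bands overlap at the margins in the thesis,
    so these thresholds are one possible reading, checked in
    descending order of specificity.
    """
    if pct > 60:
        return "Most"
    if pct > 50:
        return "A majority"
    if pct > 25:
        return "Many"
    if pct > 15:
        return "A significant number"
    if pct > 5:
        return "A number"
    return "Few"
```

For example, a theme appearing in 20% of comments would be reported as “a significant number”, while one appearing in 3% would be “few”.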
====5.3.2.2 The Virtual Learning Experience: Both Groups====
The two words participants used most often to describe their experience were ‘fun’ and ‘interesting’. The frequency and strength of these positive comments surprised the researcher, representing over 60% of the participants.
The virtual world seemed to offer participants a fun way to learn, with the convenience of learning online in their own time. Further, at least as the experimental campus and lecture rooms were constructed in this experiment, it offered a sense of presence that gave participants the perception of an experience similar to learning in a real world environment. Seeing others in the environment while attending a lecture as their avatar in a simulated theatre gave participants more of a connection to the learning process than one might expect from a purely HTML page based traditional distance education course. To the majority of participants the experience felt personal and the atmosphere relaxed, and many found it more pleasurable than the traditional method of attending a lecture class in the real world.
The environment seemed to promote a favourable attitude to learning. Not only did the majority of participants say it was “fun”, but a number commented that they felt inspired to learn more about the topic, wanted to ask further questions, or sought more detail, and a significant number expressed surprise that, although they clearly had experience of the topic in real life, they had never really considered how exciting a bridge could be. Only one participant expressed an unfavourable attitude to this form of learning and/or the topic.
Based on the comments, the average participant was clearly immersed in this aspect of virtual learning, as reflected by the many comments expressing varying degrees of ownership over the experience – and even, in some cases, resentment when others or extraneous circumstances interfered with their learning.
To many this was a new experience in a virtual world, and although they initially saw the offer of ‘linden’ as an easy way to make fast money, by the end of the experience, instead of thanking the researcher for the money, they thanked the researcher for the learning experience. Some comments expressed surprise that the game they had known before was no longer ‘just’ a game to them. Participation had opened the possibility of a whole new world of learning, inside and outside of Second Life.
The virtual learning campus provided participants with a seamless way to learn. Many liked the staged approach reflected by the testing and learning process (necessary as part of the automated control regime for the experimental process), finding it a novel approach to the learning experience. Going from room to room to complete each stage in the learning process possibly made this more fun than an alternative virtual world learning approach utilising a single classroom in which all stages of the process might occur. Not knowing where the teleports would lead them in the next stage of their journey gave the environment an exploratory feel. Most participants found the environment very easy to use and welcoming.
The format and the information provided in the slide presentation received, for the most part, positive feedback. Requests for more control over the slide show – to pause, forward and rewind – came from both groups. Enabling user control of this kind was not an option in this experiment, as information delivery for both groups had to be placed under strict experimental conditions so that only one independent variable changed – the presence or absence of the 3D models.
Even so, if this or a similar lecture were not under experimental conditions, the researcher cannot help but question whether this addition would have lessened the participant’s overall experience. Sharing in the learning process within a set time frame, and the pressure of the quiz after completion, may also have added to the positive experience felt by participants. Allowing the user to walk away with additional material may have given participants the convenience to learn more than just the information presented. In addition, a live lecturer, as some participants would have liked to see, may also have satisfied their requirements for more controlled information.
Technology constraints certainly presented themselves in this experiment, with approximately 20% of participants from both groups commenting on technology issues to varying degrees. The major problems related to network latency (lag) and audio dropouts. In a streamed world such as Second Life, lag is a typical problem, especially when there are many avatars in a SIM. Audio, although not as bad or as frequent as visual lag, does occasionally present a problem in Second Life: the audio stream is occasionally lost and the only way to fix the problem is to re-log the application. Judging from participants’ comments, neither problem seemed to affect their learning experience, and for only 7-9% did they warrant rating as having an impact. In the experience of this researcher, the majority of lag class problems are in fact not network lag but recipient computer performance issues. The entire SIM and the various lecture rooms were monitored continually during the experiment; true (network) lag was not observed on the researcher’s computers, nor did the SIM performance statistics monitored during the period show any significant decrease in performance.
Approximately 5% of participants from both groups complained that some of the pictures in the survey instrument were too small, potentially obscuring details of the affected bridges displayed. This could have constrained a participant’s ability to answer the Bloom’s ‘understand’ questions more than the ‘remember’ questions, and may therefore have contributed to perceptions of difficulty in the ‘understand’ portion of the post-quiz.
====5.3.2.3 The Participants: Differences Between Groups====
Whilst the 3D participants were presented with 3D models to aid learning, a number still seemed to be reading the slide show presentation. This effectively provided the 3D participants with 4 channels of learning; slide show pictures, slide show text, audio and models, whereas the 2D participants only had 3 of these channels.
There were 24 slides, 20 of which were learning slides, delivered within a 20 minute lecture session for both groups. This meant a participant had approximately one minute per slide in which they were presented with something new. There were 11 3D models covering 4 bridge types, so a new model was presented approximately every 2 minutes. Combining the models with the slides in the same time frame as the 2D participants may have disadvantaged the 3D participants.
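The pacing arithmetic above can be checked directly. The figures come from the text itself; nothing else is assumed.

```python
# Delivery pacing for the 20-minute lecture, as described in the text
lecture_minutes = 20
learning_slides = 20   # 24 slides in total, 20 of them learning slides
models_3d = 11         # 11 models across 4 bridge types (3D group only)

minutes_per_slide = lecture_minutes / learning_slides  # 1.0 minute per slide
minutes_per_model = lecture_minutes / models_3d        # ~1.8, i.e. roughly every 2 minutes
```

The 3D group therefore absorbed a new slide every minute and a new model roughly every two minutes, within the same clock time as the 2D group.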
The information content delivered to both groups was the same – no more or less technical, and providing nothing new with the exception of the 3D models for the 3D group. Yet from the 3D group’s comments, some participants seemed to want more information or simpler explanations. Within the 2D group many commented that the material was easy to follow, not too technical and easy to comprehend – none commented that it was complex. Possibly the difference is not that the 3D group needed more information but rather that, with 4 information channels, too much information was provided in the time allocated. Alternatively, the difference might reflect a case of ‘not knowing what you don’t know’ in the 2D group, while the addition of accurately constructed 3D models raised additional questions in the minds of participants, or improved their general level of attentiveness.
The 3D group found the addition of 3D models to be a useful learning tool. From their comments it seemed that 3D models of the bridges were perceived to have helped them understand the subject matter better than they perceived they would with a lecture without the models. (Note, however that in this case the perception is not supported by the test results). Many participants perceived that the 3D models also made the entire lecture experience more engaging than whatever assumed alternative against which they were measuring the experience.
The focus of the 3D participants was more strongly inside the world than their outside world. Furthermore, the extent to which distractions within the world intruded on their focus brought about a more emotional response than the distractions noted by the 2D participants. The former tended to use repetition, descriptive adjectives and emphatic declamations concerning distractions, while the latter tended to merely note, or comment favourably about, the ability to be distracted. This seems to suggest that the 3D participants experienced a greater feeling of presence, and possibly immersion, in their virtual world learning experience.
To appreciate these comments, the reader is referred to the literature review where the difference between immersion and presence is discussed (see page 39). Immersion, or ‘system immersion’, is an objective measure: the extent to which a person is removed from their outside world to operate within the virtual world space. Presence, by contrast, is a subjective measure: the extent to which a person feels connected inside the virtual world – the feeling of ‘being there’ and their ‘willingness to suspend disbelief’ that they are a part of, and inside, the virtual world.
In the classification model presented by Benford (see Figure 9. Shared Space Technology According to Artificiality and Transportation), virtual reality environments are placed on a scale of artificiality and transportation. Transportation is the degree to which a participant is removed from their local space to operate in a remote space, which in Benford’s model is based purely on the physical aspects of the virtual environment.
In this study, the strong difference in the emotion and terms consistently used by participants in the 2D versus 3D lectures seemed to suggest that, given the same virtual reality technology (desktop CVE), greater transportation occurred for the 3D participants. The 3D participants became removed from their local world distractions and were transported into the virtual remote world. This in turn led to a higher degree of presence within the virtual environment. The 2D comments about distraction compare with the results obtained by Martinez, Martinez, & Warkentin (2007), reviewed in Chapter Two Literature Review: they found that when participants were presented with a 2D lecture in world, participants reported distractions or a ‘disconnect’ from the lecture (see p. 86).
The degree of presence in the environment is often linked with desktop virtual worlds based around social interaction. As discussed in the literature review, Schroeder defines presence in terms of presence, copresence and connected presence (see Figure 10), which can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. As also discussed in the literature review, the level of presence in a social virtual world is greater than in a game virtual world due to the social connective aspects that occur within it. Heeter likewise holds that an individual’s presence is increased when social relationships are formed within the environment. In this study, however, both groups were given the same social interactive aspects, yet it seems that the introduction of 3D models produced a higher level of presence for the 3D participants. The 3D participants clearly displayed more ‘ownership’ over their learning experience than the 2D group.
Of interest, this higher level of engagement by the 3D group carried over to the volume of survey responses. The 3D group provided more descriptive and richer comments than the 2D group. Rather than the short dot points often used by 2D participants, the 3D participants tended to use sentences in their open comments. The researcher was left with the subjective impression that the 3D participants, as a group, were motivated to give greater detail and consideration in their comments than was typical of the 2D group. Although not specifically measured, it is possible that the 3D group were still engaged with the experience even after they had left the lecture environment.
A further noticeable difference between the two groups was their relative concept of time. The 2D group made more comments that the slide show was a bit slow, whereas the 3D group made more comments that the lecture was too fast (the actual timing and content were identical). This differing perception of time is most likely due to a combination of the extra channel of information delivered to the 3D participants (the 3D models), which had to be absorbed in the same time span as the 2D participants, and the higher level of engagement the 3D participants expressed about their learning experience. One cannot rule out the effects of a possible unmeasured elevation of participant stress from the more “intense” learning experience arising from the addition of the extra information channel.
==5.4 Discussion of Results==
This research sought to find the difference in learning outcomes between participants presented with two different delivery methods: a 2D slide show, and the same 2D slide show augmented with 3D models and simulations.
For the quantitative analysis the level of learning outcomes was the difference in the measure of achievement scores between the 2D group and 3D group.
Did they learn more after being presented with a 2D slide show or a 3D simulation model? From the results of both groups there was a slight, not statistically significant, lean towards the 3D group on total post-quiz scores. When analysed within each of Bloom’s cognitive processes of ‘remember’ and ‘understand’, the 3D group performed slightly better than the 2D group (most notably at the upper score ranges) in the ‘remember’ dimension, but there was no appreciable difference in the ‘understand’ dimension. A subjective interpretation might be that, with respect to the ‘remember’ outcome, the 3D approach may assist ‘stronger’ students to do better than they otherwise would under the 2D approach, but has little impact on the ‘average’ student. The study measured the ‘instantaneous’ ‘remember’ outcome, not the ‘remember’ outcome over an extended period, which might reveal greater differences.
Regardless of any anecdotal differences that may have been found, and the foregoing comments, the statistical analysis of the post-quiz scores across both groups revealed no statistically significant difference between the two groups’ learning outcomes within the confines of this experimental model. Thus the hypothesis defined for the quantitative analysis of this experiment remains unconfirmed.
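The thesis does not reproduce the raw scores or name the specific test used here, so the sketch below is illustrative only: it applies Welch’s two-sample t statistic to hypothetical post-quiz score lists, merely to show the shape of the comparison described (a small, non-significant lean towards the 3D group).

```python
import statistics as st

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    se = (st.variance(a) / len(a) + st.variance(b) / len(b)) ** 0.5
    return (st.mean(a) - st.mean(b)) / se

# Hypothetical post-quiz totals (out of 20) standing in for the two groups;
# the real raw scores are not reproduced in this chapter.
scores_2d = [12, 14, 13, 15, 11, 14, 13, 12, 15, 13]
scores_3d = [13, 15, 14, 15, 12, 14, 13, 13, 16, 14]

t = welch_t(scores_3d, scores_2d)  # small positive t: a slight 3D lean
# |t| well below the conventional ~2 threshold, i.e. not significant
```

With data of this shape, the t statistic is positive (the 3D mean is slightly higher) but far from significance, mirroring the reported outcome.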
Learning outcomes for a student are traditionally measured by achievement scores. Although an important measure, this provides no insight into the learning experience of the student. A high achievement score under quantitative methods says nothing about success from a qualitative perspective: quantitative methods focus on outcomes, while qualitative methods focus on the journey that leads the student to their end results.
While both the 2D and 3D groups were strongly positive about the learning experience, the qualitative analysis of both groups’ open comments revealed noticeable differences in the two groups’ journey to their end results. The 3D group tended towards greater ‘ownership’ of their learning experience, and while the 2D group tended to merely observe the opportunity for distraction (in some cases as a benefit), the 3D group almost universally expressed resentment, or even anger, about the same distractions.
The experimental constraint of ‘same time’ may have adversely impacted the 3D group’s scored outcome due to the delivery of an additional information channel over the same time frame – even though at least 2 of the channels were effectively redundant. As the two groups performed the same, and if anything the 3D group did slightly better, such a conclusion is by no means certain. The effect may rather have been to induce greater involvement by raising the stress factor for the 3D group, forcing greater participation in order to ‘keep up’ with the information flow.
The presence of the 3D models was widely perceived by the participants to enhance their understanding of the subject matter – although the scoring suggests that they assisted with remembering rather than understanding.
The literature review of previous research found that virtual world learning does take longer than traditional methods (Arreguin, 2007; Joseph, 2007). In this lecture, 20 minutes were provided to both groups for a post-quiz of 20 questions. Although the 2D participants’ comments did not indicate a problem with the time allocated, the results of the post-quiz, particularly for Bloom’s ‘understand’, suggest that both groups may have needed more time in which to understand the material – particularly the 3D group, who were presented with an extra, interactively explorable channel of information by which to learn.
Of the Likert scale questions, 28 and 29 showed the most variation across participants. These questions were specific to a participant’s learning experience. Question 28 asked if they found the learning experience better than their usual methods of learning. The vast majority from both groups agreed.
When asked in the Likert scale if the information provided was enough to understand the topic the 2D group was slightly more satisfied than the 3D group. The open questions shed some light on this issue, with more 3D group participants expressing a desire for more time to assimilate what was provided and more opportunity for self driven information collection, questioning and investigation – rather than merely more information per se. This difference might also reflect the greater level of participation, immersion, presence or transportation evidenced in the 3D group.
==5.5 Conclusion==
In answering the research question – How effective is it to learn in a virtual world using a traditional 2D slide show method compared to that of a 3D interactive simulation? – the conclusions from this research are clear, and not necessarily as expected by the researcher at the commencement of the study:
#Transportation of a 2D real world lecture presentation into a virtual world situation is an acceptable use of the virtual world technology, producing no statistically different outcome for Bloom’s ‘remember’, ‘understand’ and combined cognitive processes at the mean, although there are some indicators that the ‘remember’ outcome might be enhanced at the upper and lower deciles of participant ability through augmentation of the 2D presentation with 3D representation and simulation.
#Adoption of 3D visual aids is not a pre-requisite for successful learning in a virtual space.
#The presence of 3D visual aids assisted participants’ perceptions of enjoyment, engagement, presence, immersion and/or transportation, and may therefore have a longer term effect on participation rates where participation in learning is purely voluntary.
Projecting these conclusions into a practical teaching scenario – where outcomes are the same and only instantaneous outcome measures are considered (the researcher did not examine long term outcomes) – and after taking account of the input costs of material preparation, it is clearly more cost effective to use the 2D presentation strategy for delivering virtual world courses. This conclusion holds where cost is measured in terms of the time required for input preparation regardless of sourcing (where the 3D models are acquired at no input hours and no financial cost, the cost measure would void the observation), and outcomes are measured in terms of test scores taken within a short period of the learning.
Where the outcome measure includes participant perception of the experience, the 3D augmented learning approach is indicated, but in this scenario, grading the relative ‘worth’ of the greater experiential outcome is more difficult and it is less clear how it can be factored absolutely into a cost benefit analysis.
==5.6 Opportunities for Further Research==
Experimental research, as the name suggests, applies scientific method and analysis to gain new insights so that other researchers can pick up from the experiment to reproduce, reform and critique it. In this section the researcher proposes some opportunities for further research based upon the analysis of the results of this study.
===5.6.1 Improving Instrument Reliability===
One limitation that is difficult to avoid was found when analysing instrument reliability using formal (statistical) reliability testing. Essentially, this experiment had too few questions within each of the two Bloom’s cognitive process test sets to provide a conclusive reliability measure of the instrument. Increasing the number of questions within each group would certainly provide more data points with which to measure achievement results and, as a consequence of how the reliability measure algorithm works, would improve instrument reliability. The first obvious problem faced with the pre-quiz and post-quiz design for this type of experiment is that, as the number of test questions (data points) is increased, there is a point at which the testing might materially affect the training experience and therefore the outcomes, as the participants would eventually start learning from the quiz questions.
If the number of questions were increased, the range of information presented to the participant would also have to increase. Increasing the range of information provided would require additional time to be allocated to the lecture, and possibly to each topic therein. There is a point at which the length of time required to complete the lecture and quiz/survey combined would affect the quality of the results, as voluntary participants might judge the exercise was taking too much time and rush the final testing/survey stages.
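The chapter does not name the reliability measure used, so the sketch below assumes a Cronbach’s alpha style internal-consistency coefficient (the standard choice for quiz instruments) and runs it over hypothetical per-question scores; the real item data are not reproduced here. With only a handful of items, such a coefficient is an unstable estimate, which is the limitation described above.

```python
import statistics as st

def cronbach_alpha(item_scores):
    """Cronbach's alpha from per-question score lists.

    item_scores: one inner list per question, aligned by participant.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))
    """
    k = len(item_scores)
    n = len(item_scores[0])
    totals = [sum(q[i] for q in item_scores) for i in range(n)]
    sum_item_var = sum(st.variance(q) for q in item_scores)
    return k / (k - 1) * (1 - sum_item_var / st.variance(totals))

# Four hypothetical binary-scored questions for six participants (1 = correct);
# such a small item count is exactly the reliability problem discussed above.
questions = [
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 1],
]
alpha = cronbach_alpha(questions)
```

Adding more questions that correlate with the total score raises the numerator term `variance(totals)` faster than the sum of item variances, which is why a longer test set would, mechanically, improve the reported reliability.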
===5.6.2 Course versus Lecture===
The experiment focussed on a single lecture. Measuring the affordances over a sequence of lectures, using a similar experimental model, would provide additional depth of analysis and would neutralise any initial ‘wow’ factor that might have influenced participation and attentiveness in this single event based experiment. It is possible that differences in outcomes would be more apparent between the two groups if a course were involved rather than a single lecture. Other factors might influence such an experiment design – such as motivation for attending the course in the first place.
===5.6.3 Introducing a Real and Robot Presenter to the Experience===
The 3D group displayed a higher level of presence in this research study. The contributing factor in this observed difference between the two groups was, prima facie, the 3D models. An opportunity for further research lies in introducing a presenter (even an automated robot presenter) into the lecture experience, to see whether the increased level of presence shown by the 3D group would occur for both groups given a live or virtually-live lecturer. As presence is generally shown to be increased by relationships with other people within a virtual world, the introduction of a lecturer may add further insight into why the 3D group displayed a higher level of presence given only the addition of 3D models.
===5.6.4 Testing Other Bloom’s Cognitive Processes===
The 3D group seemed to believe that the models contributed to their understanding of the subject matter. Testing higher levels of Bloom’s cognitive processes, such as Apply, Analyse, Evaluate and Create, may reveal differences between the two groups at these higher levels.
===5.6.5 Outcome Measurement Over Time===
In this experiment the post-quiz was given directly after the lecture. Re-testing participants over a number of periods would assess which group retained the information better for longer, and the extent to which the two approaches impacted understanding outcomes over time. The experiment would probably require a vastly greater number of initial participants so that each time-lagged testing group could be tested once, at a different interval, rather than re-tested, so that the testing itself did not colour the results. The researcher suspects that the greater level of post-lecture engagement demonstrated by 3D participants might result in both slower degradation of the ‘remember’ outcome and a post-lecture improvement in the ‘understand’ outcome over time.
===5.6.6 Comparison to Real-World Training===
Perhaps the most obvious inquiry that presents itself for further research is to include another experimental group. As the virtual world 2D lecture was effectively a real world lecture delivered in a virtual world, the addition of a real world participant group operating under the same constraints as the virtual world groups would provide an interesting control reference for virtual-real world comparison of outcomes. Providing the 2D presentation to real life participants may offer further insight into the differences of the virtual learning experience, in addition to providing a control group based around more traditional learning methods.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
e0dc814868f08786a70ffb525e6714fceaef6213
Real Learning in Virtual Worlds - Selected Appendices
0
285
317
2010-08-05T13:24:51Z
Bishopj
1
wikitext
text/x-wiki
<div class="nonumtoc">
=Appendices=
==Appendix A: Terminology==
{| width="15%"
|-
|'''Term'''
|'''Description'''
|}
;Virtual World:
:An artificial environment into which a person projects themself. In the context in which this term is mainly used in this paper (unless otherwise stated), it is an environment built from software programs.
;In World:
:Where the person operates within the artificial virtual world.
;Real World:
:Reality, where the person operates in their physical world.
;Avatar:
:The digital representation of the person in the virtual world.
;Teleport
:A method of transport in the virtual world that moves an avatar from one location to another without the user having to walk their avatar there.
;Presence:
:A subjective measure. The feeling of being in the virtual world that disconnects the person from the physical world around them.
;Immersion:
:An objective measure. The interface between the virtual world and the user that places the person in the virtual world.
;MMORPG:
:Massively Multiplayer Online Role Playing Game. Various shortened abbreviations are also used, e.g. RPG, MMO.
This term is often used to describe the latest generation of online virtual world technology. Many other terms are also used, such as MUVE, CVE etc.
;MUD:
:Multi User Dungeon. Early text based networked virtual worlds.
<p align="left">'''''Table 14 Terminology'''''</p>
==Appendix B: MMOG Analysis==
Bruce Woodcock (2008) is an independent writer and long time player of MMOGs who has dedicated his research to tracking subscription numbers of online MMOGs. His figures are obtained from publicly available material, e.g. company financial reports, company media releases, media publications and, in some cases, an educated guess. Although not precise, these figures allow a comparison of MMOGs that would otherwise not be available unless one undertook the same type of analysis as he has done over the years. If anything the figures are underreported, as they are based only upon user subscriptions and therefore do not include users with free access to the environments listed. These figures are current as at April 2008; for more information see http://www.mmogchart.com/.
Breakdown of the MMOGs listed in the chart:
{|border="1" width="40%" align=center
|-
|align=center |'''Name'''
|align=center |'''Current Active Subscriptions'''
|-
|align="right"|World of Warcraft
|align="right"|10,000,000
|-
|align="right"|RuneScape
|align="right"|1,200,000
|-
|align="right"|Lineage
|align="right"|1,056,177
|-
|align="right" |Lineage II
|align="right" |1,006,556
|-
|align="right" |Final Fantasy XI
|align="right" |500,000
|-
|align="right" |Dofus
|align="right" |452,000
|-
|align="right" |EVE Online
|align="right" |236,510
|-
|align="right" |EverQuest II
|align="right" |200,000
|-
|align="right" |EverQuest
|align="right" |175,000
|-
|align="right" |The Lord of the Rings Online
|align="right" |150,000
|-
|align="right" |City of Heroes / Villains
|align="right" |136,250
|-
|align="right" |Tibia
|align="right" |104,338
|-
|align="right" |Star Wars Galaxies
|align="right" |100,000
|-
|align="right" |Toontown Online
|align="right" |100,000
|-
|align="right" |Second Life
|align="right" |91,531
|-
|align="right" |Tabula Rasa
|align="right" |75,000
|-
|align="right" |Ultima Online
|align="right" |75,000
|-
|align="right" |Pirates of the Burning Sea
|align="right" |65,000
|-
|align="right" |Dark Age of Camelot
|align="right" |45,000
|-
|align="right" |Dungeons & Dragons Online
|align="right" |45,000
|-
|align="right" |Vanguard: Saga of Heroes
|align="right" |40,000
|-
|align="right" |Yohoho! Puzzle Pirates
|align="right" |34,000
|-
|align="right" |EverQuest Online Adventures
|align="right" |30,000
|-
|align="right" |The Matrix Online
|align="right" |30,000
|-
|align="right" |Era of Eidolon
|align="right" |27,000
|-
|align="right" |PlanetSide
|align="right" |20,000
|-
|align="right" |Asheron's Call
|align="right" |15,000
|-
|align="right" |Sphere
|align="right" |15,000
|-
|align="right" |Anarchy Online
|align="right" |12,000
|-
|align="right" |The Realm Online
|align="right" |12,000
|-
|align="right" |World War II Online
|align="right" |12,000
|-
|align="right" |Pirates of the Caribbean Online
|align="right" |10,000
|-
|align="right" |Neocron 2
|align="right" |6,000
|-
|align="right" |Horizons
|align="right" |5,000
|-
|align="right" |Mankind
|align="right" |5,000
|-
|align="right" |A Tale in the Desert
|align="right" |1,054
|}
==Appendix I: Second Life Demographics==
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Top 20 Countries by Active User Hours'''
|-bgcolor="wheat"
|align=center |'''Country'''
|align=center |'''Total Hours'''
|align=center |'''% of Total Hrs'''
|-
|align=right |United States
|align=right |14,451,180.28
|align=right |39.38%
|-
|align=right |Germany
|align=right | 3,505,103.93
|align=right | 9.55%
|-
|align=right | United Kingdom
|align=right | 2,424,987.88
|align=right | 6.61%
|-
|align=right | Japan
|align=right | 2,014,299.45
|align=right | 5.49%
|-
|align=right | France
|align=right | 1,972,875.00
|align=right | 5.38%
|-
|align=right |Netherlands
|align=right | 1,406,652.90
|align=right | 3.83%
|-
|align=right |Italy
|align=right | 1,397,571.12
|align=right | 3.81%
|-
|align=right |Brazil
|align=right | 1,361,741.72
|align=right | 3.71%
|-
|align=right |Canada
|align=right | 1,336,706.03
|align=right | 3.64%
|-
|align=right |Spain
|align=right | 1,083,716.70
|align=right | 2.95%
|-
|align=right |Australia
|align=right | 747,158.40
|align=right | 2.04%
|-
|align=right |Belgium
|align=right | 349,070.48
|align=right | 0.95%
|-
|align=right |Portugal
|align=right | 332,468.60
|align=right | 0.91%
|-
|align=right |Switzerland
|align=right | 277,448.60
|align=right | 0.76%
|-
|align=right |Poland
|align=right | 234,785.58
|align=right | 0.64%
|-
|align=right |Argentina
|align=right | 196,719.35
|align=right | 0.54%
|-
|align=right |Denmark
|align=right | 193,975.72
|align=right | 0.53%
|-
|align=right |Sweden
|align=right | 191,424.80
|align=right | 0.52%
|-
|align=right |Mexico
|align=right | 177,130.73
|align=right | 0.48%
|-
|align=right |Turkey
|align=right | 176,759.05
|align=right | 0.48%
|-
|align=right |Others
|align=right | 2,866,931.23
|align=right | 7.81%
|-
|align=center |'''Total'''
|align=right | '''36,698,707.57'''
|
|}
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Age Band'''
|-bgcolor="wheat"
|align=center |'''Age'''
|align=center |'''% of Total Hrs'''
|-
|align=right |13-17 (Teen Grid)
|align=right |0.32%
|-
|align=right |18-24
|align=right | 15.07%
|-
|align=right |25-34
|align=right | 34.51%
|-
|align=right |35-44
|align=right | 28.51%
|-
|align=right |45 plus
|align=right | 21.14%
|-
|align=right |Unknown
|align=right | 0.45%
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Gender'''
|-
|align=right |Male
|align=right | 58.72%
|-
|align=right |Female
|align=right | 41.28%
|}
<p align="center">'''''Source: (Linden Lab, 2008b)'''''</p>
==Appendix J: Pre-Quiz Score Results==
This section discusses the significance test results for the pre-quiz scores.
===J.1 Remember Scores===
Figure 68 provides the pre-quiz results for Bloom’s ‘remember’ cognitive process.
Figure 68. Results: Pre-Quiz Remember - Histogram & Bell Curve
The pre-quiz ‘remember’ scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = -0.417, sek = -1.105, K2 p = 0.26747; 3D: ses = -0.595, sek = -1.54, K2 p = 0.2675) and the difference in variance between the groups was not significant (F = 0.668, two-tailed p = 0.140, α = 0.05), therefore the independent t-test assuming equal variances was used to test for significance.
The results of an independent t-test found no significant difference (t = 1.665, df = 109, two-tailed p = 0.0987, α = 0.05) between the results of the 2D (x1 = 2.44, s1 = 1.032) and 3D (x2 = 2.071, s2 = 1.263) pre-quiz ‘remember’ scores.
When tested using a one-tailed test where µ1 – µ2 > 0.5, the results show a significant difference (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05); thus the 2D pre-quiz scores were significantly higher than the 3D scores for Bloom’s cognitive process of ‘remember’.
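The equal-variance t statistic above can be reproduced from the summary statistics alone. The sketch below is illustrative only: it assumes the group sizes reported in Appendix L (2D n = 55, 3D n = 56) and uses the rounded means and standard deviations quoted above, so its result differs slightly from the reported t = 1.665.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Independent-samples t statistic assuming equal variances."""
    df = n1 + n2 - 2
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se, df

# Rounded summary statistics for the pre-quiz 'remember' scores
t, df = pooled_t(2.44, 1.032, 55, 2.071, 1.263, 56)
print(f"t = {t:.3f}, df = {df}")  # close to the reported t = 1.665 with df = 109
```

The small discrepancy comes entirely from rounding in the published means and standard deviations, not from the test itself.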
===J.2 Understand Scores===
Figure 69 provides the pre-quiz results for Bloom’s understand cognitive process.
Figure 69. Results: Pre-Quiz Understand - Histogram & Bell Curve
The pre-quiz ‘understand’ scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.790, sek = -0.227, K2 p = 0.63248; 3D: ses = 1.072, sek = 0.0563, K2 p = 0.50798) and the difference in variance between the groups was not significant (F = 0.799, two-tailed p = 0.410, α = 0.05), therefore the independent t-test assuming equal variances was used to test for significance.
The results of an independent t-test found a significant difference (t = -2.257, df = 109, two-tailed p = 0.0260, α = 0.05) between the results of the 2D (x1 = 1.254, s1 = 0.775) and 3D (x2 = 1.607, s2 = 0.867) pre-quiz ‘understand’ scores. The 3D pre-quiz scores were significantly greater than the 2D pre-quiz scores for Bloom’s cognitive process of ‘understand’ (µ1 – µ2 < 0.5; t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
===J.3 Summary Pre-Quiz Remember and Understand===
Figure 70 provides an inverse cumulative normal distribution graph for Bloom’s cognitive processes ‘remember’ and ‘understand’ for the pre-quiz scores. This graph displays what percentage of participants scored under a nominated score.
Figure 70. Results: Pre-Quiz Rem & Und - Inverse Cumulative Normal Distribution Graph
===J.4 Total Scores===
A graph of the results for the total score was provided in the main document in Chapter 4, Results, Pre-Quiz Results.
The pre-quiz total scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.0218, sek = -1.087, K2 p = 0.49248; 3D: ses = -0.574, sek = -0.425, K2 p = 0.671739) and the difference in variance between the groups was not significant (F = 0.862, two-tailed p = 0.586, α = 0.05), therefore the independent t-test assuming equal variances was used to test for significance.
The results of an independent t-test found no significant difference (t = 0.0455, df = 109, two-tailed p = 0.964, α = 0.05) between the results of the 2D (x1 = 3.690, s1 = 1.372) and 3D (x2 = 3.679, s2 = 1.479) pre-quiz total scores.
==Appendix K: Post-Quiz Score Results==
A graph of the results for the post-quiz scores was provided in the main document in Chapter 4, Results; Post-Quiz Results, Hypothesis One and Two sections.
===K.1 Remember Scores===
The post-quiz ‘remember’ scores (H01) were tested using the non-parametric Mann-Whitney U test, as the results did not meet the assumption of normally distributed scores required for parametric testing (2D: ses = -1.94259, sek = -1.10294, K2 p = 0.06976; 3D: ses = -2.87371, sek = 1.02617, K2 p = 0.01161). The 3D scores failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05), therefore the scores from this group deviate significantly from a normal distribution.
The Mann-Whitney U test found no significant difference between the 2D and 3D post-quiz ‘remember’ scores, where the average ranked scores of 2D = 53.9364 and 3D = 58.0268 resulted in U = 1653.5, W = 113.5, two-tailed p = 0.493107, α = 0.05.
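The U statistic behind this test counts, over every cross-group pair of scores, how often one group's score exceeds the other's, with half credit for ties. A minimal sketch on invented quiz-style scores (not the study's data, which produced U = 1653.5):

```python
def u_statistic(a, b):
    """Mann-Whitney U for group a versus group b.

    Counts the pairs in which a's score beats b's, with 0.5 credit for ties.
    """
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Illustrative scores only
u_a = u_statistic([1, 2, 3], [2, 3, 4])
u_b = u_statistic([2, 3, 4], [1, 2, 3])
print(u_a, u_b)  # the two statistics always sum to len(a) * len(b)
```

Significance is then assessed by comparing U against its rank-based sampling distribution (or, for samples of this study's size, a normal approximation).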
===K.2 Understand Scores===
The post-quiz ‘understand’ scores (H02) were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.204408, sek = -0.8453, K2 p = and 3D: ses = 1.016, sek = 0.016, K2 p = ) and the difference in variance between the groups was not significant (F = 1.028, two-tailed p = 0.920, α = 0.05), therefore the independent t-test assuming equal variances was used to test for significance.
===K.3 Total Scores===
The post-quiz total scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.158427, sek = -0.230644, K2 p = 0.8865884; 3D: ses = -0.700083, sek = 0.404913, K2 p = 0.62133) and the difference in variance between the groups was not significant (F = 1.10638, two-tailed p = 0.70972, α = 0.05), therefore the independent t-test assuming equal variances was used to test for significance.
The results of an independent t-test found no significant difference (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05) between the results of the 2D (x1 = 10.9818, s1 = 2.46825) and 3D (x2 = 11.3571, s2 = 2.34659) post-quiz total scores.
==Appendix L: Instrument Reliability Results==
Table 15 provides the results of the instrument reliability tests performed on the achievement quiz results. The pre-quiz had 4 questions each for Bloom’s cognitive processes of ‘remember’ (rem) and ‘understand’ (und), a combined total of 8; the post-quiz had 10 of each, a combined total of 20. The 2D group consisted of 55 participants and the 3D group of 56.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="5" align="center" |'''Achievement Quiz Reliability (KR-20)'''
|-
|align=center|
|align=center bgcolor="#DDADAF" colspan=2 |'''2D'''
|align=center bgcolor="lightblue" colspan=2 |'''3D'''
|-bgcolor="lightgrey"
|align=center|
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|-
|align=right |'''Pre-Quiz KR20'''
|align=right | 0.14
|align=right | -0.46
|align=right | 0.48
|align=right | -0.01
|- bgcolor="lightgrey"
|align=right |'''Post-Quiz KR20'''
|align=right | 0.53
|align=right | -0.01
|align=right | 0.54
|align=right | 0.10
|}
<p align="center">'''''Table 15. Instrument Reliability: Achievement Quiz'''''</p>
Table 16 provides the results of the instrument reliability tests performed on the post survey Likert scale results for questions 23, 24, 28 and 29.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="3" align="center" |'''Survey Likert Scale Reliability (Cronbach's Alpha)'''
|-
|align=center|
|align=center bgcolor="#DDADAF" |'''2D'''
|align=center bgcolor="lightblue" |'''3D'''
|-bgcolor="lightgrey"
|align=right |'''Cronbach's Alpha:'''
|align=right |0.73
|align=right |0.72
|}
<p align="center">'''''Table 16. Instrument Reliability: Survey Likert Scales'''''</p>
Frary (2008) provides the following interpretations of these reliability (r) results:
*r = .90 or higher - High reliability. Suitable for making a decision about an examinee based on a single test score.
*r = .80 to .89 - Good reliability. Suitable for use in evaluating individual examinees if averaged with a small number of other scores of similar reliability.
*r = .60 to .79 - Low to moderate reliability. Suitable for evaluating individuals only if averaged with several other scores of similar reliability.
*r = .40 to .59 - Doubtful reliability. Should be used only with caution in the evaluation of individual examinees. May be satisfactory for determination of average score differences between groups.
'''Discussion'''
Instrument reliability tests the correlation of answers within a data set. The assumptions of the KR-20 test are that test items are of equal, or near equal, difficulty and intercorrelation (Lenke, Wellens, & Oswald, 1977). Consistent with these assumptions, the tests performed were split into Bloom’s cognitive processes of ‘remember’ and ‘understand’. Furthermore, as we were measuring the difference between the achievement results of two groups that had distinctly different treatment methods, the reliability tests were divided into 2D and 3D participant groups. These repeated divisions caused a problem for the application of the instrument reliability test, as in each division the total number of tested items is 10 or below. If the number of questions (or subjects) is too low within each group, then the results of the test, as Frary puts it, ‘should be taken with a grain of salt’. Frary (2008) provides further insight as to why:
“All reliability estimates are subject to considerable error when there are small numbers of examinees or test items. If there are fewer than, say, 25 examinees or 10 items, the reliability estimate must be "taken with a grain of salt." This phenomenon is especially noticeable when there are several scrambled forms of the test, each administered to a relatively small number of examinees. Then the KR20 coefficients may fluctuate considerably from one form to another.”
As we can see from the above results, there was considerable fluctuation in the reliability test results between the two groups. With the exception of the post-quiz ‘remember’ results, the figures varied considerably. These results appear consistent with those discussed in Chapter 5, Discussion and Conclusion: participants in both groups performed well for Bloom’s ‘remember’ but did not for ‘understand’.
As Frary asserts, however, the test reliability measures are inconclusive indicators under this research’s circumstances.
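For reference, the KR-20 coefficient reported in Table 15 can be sketched in a few lines. The response matrix below is invented purely for illustration (rows are examinees, columns are dichotomously scored items), and population variance is used for the total-score variance:

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item scores."""
    n = len(responses)      # number of examinees
    k = len(responses[0])   # number of items
    # Sum of item variances p*(1-p), where p is each item's proportion correct
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        pq_sum += p * (1 - p)
    # Population variance of the examinees' total scores
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq_sum / var)

# Invented 5-examinee, 4-item response matrix
scores = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]]
print(round(kr20(scores), 3))  # ≈ 0.696
```

Negative coefficients, like several in Table 15, arise when the summed item variances exceed the total-score variance — exactly the behaviour Frary warns of with few items and few examinees.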
==Appendix M: Qualitative Analysis: A Sample of Participants Comments==
===Virtual World Learning Experience===
*I found learning in world is a great way to find out about things you don’t normally think about finding out about
*You’re more likely to learn things in world than go to places to find out about things
*Things I usually don't take time to learn about, I can learn about them here
*I really felt as if I was sitting in a Room of Such listening to a lecturer
*It kind of felt personal.
*Kind of soothing but not putting me to sleep kind
*The lack of pressure that comes from a more traditional classroom atmosphere
*You can see if others are in the class with you
*Feel this way is better experience then the normal online way of taking classes
*I prefer learning alone and I would definitely prefer this type of learning to going to a classroom with other students.
*Seemed better than the typical classroom experience
*This is a fantastic experiment and I believe the potential to reach people with anything that will help them become better educated is a wonderful thing.
*Top idea to get people to learn about several topics
*I liked the idea; please invite me for more lessons
*By being part of this Survey Study, I have opened a door to seeking out further Studies, as well as Classes with SL
===Campus Experience===
*It was very easy to use
*It was very well laid out
*Easier to navigate through
*I liked the way different stages
*The environment was well set out
*Very user friendly
===Format===
*2D: Like liked the layout... it showed you a picture of the different types of bridges as well as giving you plenty of information on the subject then had a summary of all of it at the end
*2D: I wish that the pictures had been interactive so I could've clicked on the different sections of the bridges and gotten an individual description
*2D: Easy to follow slides
*2D: The presentation was actually enjoyable, however I believe that for this to be a truly effective learning tool the presentation speed must be made adjustable as people may find certain topics boring and just skip through them but may wish to spend longer periods of time on other material and wish to slow down to be more attentive.
*2D: the possibility to go back or control the slideshow
*3D: The mix of the audio and the bulleted points made it easier to follow for visual
*3D: Wonderfully laid out. The visuals were great! They conveyed the most important points very well.
*3D: lots of examples
*3D: Wish there was a way I could stop the presentation or lecture and go back to review what was just said.
===Information content===
*2D: Very informative and interesting
*2D: very easy to comprehend
*2D: It was not too technical
*2D: I never gave it much thought at to the Construction of Bridges, one droves on them, over them etc, and you certainly hear in recent years of the collapse of bridges etc, I found the topic informative although a lot to digest.
*3D: It was informative.
*3D: Need more infor need more infor need more infor
*3D: I have never stopped to think about bridges before. Now how am I going to drive over a bridge without thinking about what it is?
*3D: The theory of the subject was well thought out, even though to my knowledge the subject was well informative, it could have been explained in more of laymen terms for those who really don't understand the makeup of bridges.
*3D: I found myself getting lost a bit here and there with the terminology
*3D: Overall a bit too complicated for someone with no previous knowledge coming into the presentation, but still worthwhile.
*3D: I might have liked a little better explanation of how compression and tension work at the beginning so as I could understand the physics of it a little better.
===Learning===
*2D: I liked learning something new
*2D: I got to learn something I did not know.
*2D: Learned more about bridges
*2D: It was good to learn about the understanding of bridges
*3D: learn something about a subject I never knew something about
*3D: Suddenly, unforeseeably, I was studying the physics of bridges! I could never have guessed when I woke up today that I would learn this.
*3D: What a well thought out presentation, Now that I know something about bridges. I have something new to take to Real life with me
*3D: Combined my hobby with learning
===Facets of 3D Learning===
*3D: The way the bridges could actually be seen materialized and color coded was great.
*3D: It was visually appealing versus reading a book or listening to a live lecture.
*3D: It's a great learning key.
*3D: I liked the ability to see a 3D diagram of the topic.
*3D: The examples floating in space allowed for a better view of the material
*3D: The use of "real" object as opposed to drawings helped with any problems in understanding
*3D: The images were 3D making it a little easier to get an idea of what each bridge was.
*3D: The 3D rendered models illustrating the different types of bridges & how loads were carried were a great tool.
*3D: With the help of bridge models I was able to get a better understanding about what the lecture was talking about.
*3D: Just the fact that the examples where suspended in space, allowed me a better understanding from all angles.
*3D: While it may not quite stick on the first pass, I feel as if this method DEFINITELY provided a clear, direct delivery of the subject matter. I could see this type of presentation doing much more for someone with at least a rudimentary knowledge of the subject matter.
===Instruction===
*2D: It would have been fun to have an "instructor" to ask questions of. :)
*2D: lack of contact or clarification of issues
*3D: There was no place to pause the instructor, or ask further questions about the subject matter
*3D: A live guide would've been very helpful to clear up any confusion along the way, though it isn't necessary.
*3D: The inability to ask for clarification or further explanation.
*3D: No interactive question-answer
===Focus===
====In world distractions====
*2D: Distracting avatars
*2D: my club shine glitzier owners tag got in the way
*2D: I was distracted by my own curiosity of the technology
*3D: Disruptions from others in chat
*3D: Noise or excessive gestures of certain people.
*3D: Some others in the room were very disruptive
*3D: Interruptions from people who don't take the education seriously.
*3D: It would be idea to separate people in the education process as some people make noises during the presentation that distracts from the education.
====Outside world distractions====
*2D: Just the fact it’s the weekend and so many distractions in the house
*2D: Thought it was interesting I may watch it again later, if it’s alright, my daughter kept talking to me during it and I kept getting distracted but I did try and pay attention.
*2D: I could do other things at my desk and could answer the phone!
*2D: I guess it’s not good to be able to talk to others during a class where you're supposed to learn something [yahoo messaging]
*2D: Could do things at my desk
*2D: "real life" interruptions the telephone ringing
*3D: Interruptions from real life
===Time===
*2D: Being new, it held my attention for the whole time
*2D: It went a bit slow.
*2D: Speed of the presentation was a little slow
*2D: The narrator was a bit monotone which caused me to get bored a couple of times.
*2D: I lost focus for a little.
*2D: found myself zoning out a little bit.
*2D: voice got monotonous
*3D: It actually held my attention! Quite the accomplishment if I do say so myself!
*3D: It was fast.
*3D: The soothing voice of the narrator kept me engaged.
*3D: Easy to stay concentrated
*3D: The images kept mind from wondering.
*3D: It was exceptionally quick
*3D: Just a little fast for me a time or two
*3D: There were a few times it went a little fast
===Navigation===
*2D: I didn't see the words the best way cause of the chair.
*2D: Seating made vision difference which had to be adjusted more than once
*3D: Hard to put screen right
*3D: Models that were rotating sometimes blocked the text
*3D: I had to situate my view to read the board
*3D: Display was blocked many times.
*3D: Had to peek round the 3D bridges to read the text
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
1916f50b633d914556a986b7bcf3de50d707f15d
371
317
2010-08-05T13:24:51Z
Bishopj
1
wikitext
text/x-wiki
<div class="nonumtoc">
=Appendices=
==Appendix A: Terminology==
{| width="15%"
|-
|'''Term'''
|'''Description'''
|}
;Virtual World:
:An artificial environment that a person projects themself within. In the context this in which this term has been used mainly in this paper (unless otherwise stated) it is an environment that is technological built form software programs.
;In World:
:Artificial, where the person operates in the artificial virtual world.
;Real Word
:Reality, where the person operates in their physical world.
;Avatar:
:The digital representation of the person in the virtual world.
;Teleport
:A method of transport used in the virtual world that moves them from one location to another without having to walk to a location with their avatar.
;Presence:
:A subjective measure. The feeling of being in the virtual world that disconnects them from physical world around them.
;Immersion:
:An objective measure. The interface between the virtual world and the user that places the person in the virtual world.
;MMORPG:
:Massively Multiplayer Online Role Playing Game. The can used in various shortened abbreviations eg RPG, MMO
This term often is used to describe the latest generation of online virtual world technology. Many other terms are used such as MUVE, CVE etc.
;MUD:
:Multi User Dungeon. Early text based networked virtual worlds.
<P align=left >'''''Table 14 Terminology'''''</p >
==Appendix B: MMOG Analysis==
Bruce Woodcock (2008) is an independent writer and long time player of MMOGs that has dedicated his research to tracking subscriptions numbers of online MMOGs. These figures are obtained from source and public available material e.g. company financial reports, company media releases, media publications and in some cases an educated guess. These figures although not precise, allows us to do a comparison of MMOGs that would otherwise would not be available unless one was to undertaken the same type of analysis such as he has done over the years. If anything these figures would be underreported as they only are based upon user subscriptions and therefore do not include in the numbers of user that have free-access to their environments (included within the ones listed). These figures are current as at April 2008, for more information see http://www.mmogchart.com/.
Breakdown of MMOGs listed Chart.
{|border="1" width="40%" align=center
|-
|align=center |'''Name'''
|align=center |'''Current Active Subscriptions'''
|-
|align="right"|World of Warcraft
|align="right"|10,000,000
|-
|align="right"|RuneScape
|align="right"|1,200,000
|-
|align="right"|Lineage
|align="right"|1,056,177
|-
|align="right" |Lineage II
|align="right" |1,006,556
|-
|align="right" |Final Fantasy XI
|align="right" |500,000
|-
|align="right" |Dofus
|align="right" |452,000
|-
|align="right" |EVE Online
|align="right" |236,510
|-
|align="right" |EverQuest II
|align="right" |200,000
|-
|align="right" |EverQuest
|align="right" |175,000
|-
|align="right" |The Lord of the Rings Online
|align="right" |150,000
|-
|align="right" |City of Heroes / Villains
|align="right" |136,250
|-
|align="right" |Tibia
|align="right" |104,338
|-
|align="right" |Star Wars Galaxies
|align="right" |100,000
|-
|align="right" |Toontown Online
|align="right" |100,000
|-
|align="right" |Second Life
|align="right" |-91,531
|-
|align="right" |Tabula Rasa
|align="right" |75,000
|-
|align="right" |Ultima Online
|align="right" |75,000
|-
|align="right" |Pirates of the Burning Sea
|align="right" |65,000
|-
|align="right" |Dark Age of Camelot
|align="right" |45,000
|-
|align="right" |Dungeons & Dragons Online
|align="right" |45,000
|-
|align="right" |Vanguard: Saga of Heroes
|align="right" |40,000
|-
|align="right" |Yohoho! Puzzle Pirates
|align="right" |34,000
|-
|align="right" |EverQuest Online Adventures
|align="right" |30,000
|-
|align="right" |The Matrix Online
|align="right" |30,000
|-
|align="right" |Era of Eidolon
|align="right" |27,000
|-
|align="right" |PlanetSide
|align="right" |20,000
|-
|align="right" |Asheron's Call
|align="right" |15,000
|-
|align="right" |Sphere
|align="right" |15,000
|-
|align="right" |Anarchy Online
|align="right" |12,000
|-
|align="right" |The Realm Online
|align="right" |12,000
|-
|align="right" |World War II Online
|align="right" |12,000
|-
|align="right" |Pirates of the Caribbean Online
|align="right" |10,000
|-
|align="right" |Neocron 2
|align="right" |6,000
|-
|align="right" |Horizons
|align="right" |5,000
|-
|align="right" |Mankind
|align="right" |5,000
|-
|align="right" |A Tale in the Desert
|align="right" |1,054
|}
==Appendix I: Second Life Demographics==
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Top 20 Countries by Active User Hours'''
|-bgcolor="wheat"
|align=center |'''Country'''
|align=center |'''Total Hours'''
|align=center |'''% of Total Hrs'''
|-
|align=right |United States
|align=right |14,451,180.28
|align=right |39.38%
|-
|align=right |Germany
|align=right | 3,505,103.93
|align=right | 9.55%
|-
|align=right | United Kingdom
|align=right | 2,424,987.88
|align=right | 6.61%
|-
|align=right | Japan
|align=right | 2,014,299.45
|align=right | 5.49%
|-
|align=right | France
|align=right | 1,972,875.00
|align=right | 5.38%
|-
|align=right |Netherlands
|align=right | 1,406,652.90
|align=right | 3.83%
|-
|align=right |Italy
|align=right | 1,397,571.12
|align=right | 3.81%
|-
|align=right |Brazil
|align=right | 1,361,741.72
|align=right | 3.71%
|-
|align=right |Canada
|align=right | 1,336,706.03
|align=right | 3.64%
|-
|align=right |Spain
|align=right | 1,083,716.70
|align=right | 2.95%
|-
|align=right |Australia
|align=right | 747,158.40
|align=right | 2.04%
|-
|align=right |Belgium
|align=right | 349,070.48
|align=right | 0.95%
|-
|align=right |Portugal
|align=right | 332,468.60
|align=right | 0.91%
|-
|align=right |Switzerland
|align=right | 277,448.60
|align=right | 0.76%
|-
|align=right |Poland
|align=right | 234,785.58
|align=right | 0.64%
|-
|align=right |Argentina
|align=right | 196,719.35
|align=right | 0.54%
|-
|align=right |Denmark
|align=right | 193,975.72
|align=right | 0.53%
|-
|align=right |Sweden
|align=right | 191,424.80
|align=right | 0.52%
|-
|align=right |Mexico
|align=right | 177,130.73
|align=right | 0.48%
|-
|align=right |Turkey
|align=right | 176,759.05
|align=right | 0.48%
|-
|align=right |Others
|align=right | 2,866,931.23
|align=right | 7.81%
|-
|align=center |'''Total'''
|align=right | '''36,698,707.57'''
|
|}
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Age Band'''
|-bgcolor="wheat"
|align=center |'''Age'''
|align=center |'''% of Total Hrs'''
|-
|align=right |13-17 (Teen Grid)
|align=right |0.32%
|-
|align=right |18-24
|align=right | 15.07%
|-
|align=right |25-34
|align=right | 34.51%
|-
|align=right |35-44
|align=right | 28.51%
|-
|align=right |45 plus
|align=right | 21.14%
|-
|align=right |Unknown
|align=right | 0.45%
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Gender'''
|-
|align=right |Male
|align=right | 58.72
|-
|align=right |Female
|align=right | 41.28
|}
<p align="center" >'''''Source: (Linden Lab, 2008b)'''''</p >
==Appendix J: Pre-Quiz Score Results==
This section discusses the pre-quiz scores significance test results.
===J.1 Remember Scores===
Figure 68 provides the pre-quiz results for Bloom’s ‘remember’ cognitive process.
Figure 68. Results: Pre-Quiz Remember - Histogram & Bell Curve
The pre-quiz ‘remember’ scores were tested using the parametric individual t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = -0.417, sek = -1.105, K2 p = 0.26747 and 3D: ses = -0.595 and sek = -1.54, K2 p = 0.2675) and the variance between the groups was not significant (F = 0.668, 2 tailed p = 0.140, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = 1.665, df = 109, two-tailed p = 0.0987, α = 0.05) between the results of the 2D (x1 = 2.44, s1 = 1.032) and 3D (x2 = 2.071, s2 = 1.263) pre-quiz ‘remember’ scores.
When tested using a one-tail test where µ1 – µ2 > 0.5 the results show that there is a significant different (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05), thus the 2D pre-quiz scores were significantly higher than the 3D scores for the Bloom’s cognitive process of ‘remember’.
===J.2 Understand Scores===
Figure 69 provides the pre-quiz results for Bloom’s understand cognitive process.
Figure 69. Results: Pre-Quiz Understand - Histogram & Bell Curve
The pre-quiz ‘understand’ scores were tested using the parametric individual t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.790, sek = -0.227, K2 p = 0.63248 and 3D: ses = 1.072, sek = 0.0563, K2 p = 0.50798) and the variance between the groups was not significant (F = 0.799, 2 tailed p = 0.410, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found a significant difference (t = -2.257, df = 109, two-tailed p = 0.0260, α = 0.05) between the results of the 2D (x1 = 1.254, s1 = 0.775) and 3D (x2 = 1.607, s2 = 0.867) pre-quiz ‘understand’ scores. The 3D pre-quiz scores were significantly greater than the 2D pre-quiz scores for the Bloom’s cognitive process of ‘understand’ (µ1 – µ2 < 0.5; t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
===J.3 Summary Pre-Quiz Remember and Understand===
Figure 70 provides an inverse cumulative normal distribution graph for Bloom’s cognitive process ‘remember’ and ‘understand’ for the post-quiz scores. This graph displays what percentage of participants scored under a nominated score.
Figure 70. Results: Pre-Quiz Rem & Und - Inverse Cumulative Normal Distribution Graph
===J.4 Total Scores===
A graph of the results for the total score was provided in the main document in the Chapter, 4 Results, Pre- Quiz Results.
The pre-quiz total scores were tested using the parametric individual t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D, ses = 0.0218, sek = -1.087, K2 p = 0.49248 and 3D, ses = -0.574, sek = -0.425, K2 p = 0.671739) and the variance between the groups was not significant (F = 0.862, 2 tailed p = 0.586, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = 0.0455, df = 109, two-tailed p = 0.964, α = 0.05) between the results of the 2D (x1 = 3.690, s1 = 1.372) and 3D (x2 = 3.679, s2 = 1.479) pre-quiz total scores.
==Appendix K: Post-Quiz Score Results==
A graph of the results for the post-quiz score was provided in the main document in Chapter 4, Results, in the Post-Quiz Results, Hypothesis One and Two sections.
===K.1 Remember Scores===
The post-quiz ‘remember’ scores (H01) were tested using the non-parametric Mann-Whitney U test, as the results did not meet the normal-distribution assumption required for parametric testing (2D: ses = -1.94259, sek = -1.10294, K2 p = 0.06976; 3D: ses = -2.87371, sek = 1.02617, K2 p = 0.01161). The 3D scores failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05), therefore the scores from this group deviate significantly from a normal distribution.
The Mann-Whitney U test found no significant difference between the 2D and 3D post-quiz ‘remember’ scores (average ranked scores: 2D = 53.9364, 3D = 58.0268; U = 1653.5, W = 113.5, two-tailed p = 0.493107, α = 0.05).
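The mean ranks and U statistic reported above come straight from the rank transformation of the pooled scores. The following is a minimal sketch of that computation on hypothetical data; the `mann_whitney_summary` name is an assumption for illustration.

```python
import numpy as np
from scipy import stats

def mann_whitney_summary(g2d, g3d):
    """Mean ranks and U statistic for two groups (hypothetical data)."""
    g2d, g3d = np.asarray(g2d, float), np.asarray(g3d, float)
    combined = np.concatenate([g2d, g3d])
    ranks = stats.rankdata(combined)            # ties get average ranks
    r2d, r3d = ranks[:len(g2d)], ranks[len(g2d):]
    n1 = len(g2d)
    # U for the first group: its rank sum minus the minimum possible rank sum
    u1 = r2d.sum() - n1 * (n1 + 1) / 2
    _, p = stats.mannwhitneyu(g2d, g3d, alternative="two-sided")
    return {"mean_rank_2d": r2d.mean(), "mean_rank_3d": r3d.mean(),
            "U": u1, "p": p}
```

A larger U for one group indicates that its scores tend to rank higher in the pooled ordering, which is what the average-rank comparison in the text summarises.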
===K.2 Understand Scores===
The post-quiz ‘understand’ scores (H02) were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.204408, sek = -0.8453, K2 p = ; 3D: ses = 1.016, sek = 0.016, K2 p = ) and the variance between the groups did not differ significantly (F = 1.028, two-tailed p = 0.920, α = 0.05), therefore the independent t-test assuming equal variance was used to test for significance.
===K.3 Total Scores===
The post-quiz total scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.158427, sek = -0.230644, K2 p = 0.8865884; 3D: ses = -0.700083, sek = 0.404913, K2 p = 0.62133) and the variance between the groups did not differ significantly (F = 1.10638, two-tailed p = 0.70972, α = 0.05), therefore the independent t-test assuming equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05) between the results of the 2D (x1 = 10.9818, s1 = 2.46825) and 3D (x2 = 11.3571, s2 = 2.34659) post-quiz total scores.
==Appendix L: Instrument Reliability Results==
Table 15 provides the results of the instrument reliability tests performed on the achievement quiz results. For the pre-quiz there were 4 questions each for Bloom’s cognitive processes of ‘remember’ (rem) and ‘understand’ (und), for a combined total of 8; for the post-quiz there were 10 questions each, for a combined total of 20. The 2D group consisted of 55 participants and the 3D group of 56.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="5" align="center" |'''Achievement Quiz'''
|-
|align=center|
|align=center bgcolor="#DDADAF" colspan=2 |'''2D'''
|align=center bgcolor="lightblue" colspan=2 |'''3D'''
|-bgcolor="lightgrey"
|align=center|
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|-
|align=right |'''Pre-Quiz KR20'''
|align=right | 0.14
|align=right | -0.46
|align=right | 0.48
|align=right | -0.01
|- bgcolor="lightgrey"
|align=right |'''Post-Quiz KR20'''
|align=right | 0.53
|align=right | -0.01
|align=right | 0.54
|align=right | 0.10
|}
<p align=center >'''''Table 15. Instrument Reliability: Achievement Quiz'''''</p>
Table 16 provides the results of the instrument reliability tests performed on the post survey Likert scale results for questions 23, 24, 28 and 29.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="3" align="center" |'''Survey Likert Scales'''
|-
|align=center|
|align=center bgcolor="#DDADAF" |'''2D'''
|align=center bgcolor="lightblue" |'''3D'''
|-bgcolor="lightgrey"
|align=right |'''Cronbach's Alpha:'''
|align=right |0.73
|align=right |0.72
|}
<p align="center" >'''''Table 16. Instrument Reliability: Survey Likert Scales'''''</p>
Frary (2008) provides the following definitions for interpreting these reliability (r) results:
*r = .90 or higher - High reliability. Suitable for making a decision about an examinee based on a single test score.
*r = .80 to .89 - Good reliability. Suitable for use in evaluating individual examinees if averaged with a small number of other scores of similar reliability.
*r = .60 to .79 - Low to moderate reliability. Suitable for evaluating individuals only if averaged with several other scores of similar reliability.
*r = .40 to .59 - Doubtful reliability. Should be used only with caution in the evaluation of individual examinees. May be satisfactory for determination of average score differences between groups.
'''Discussion'''
Instrument reliability tests the correlation of answers within a data set. The assumptions for the KR-20 test are that test items are of equal, or near equal, difficulty and intercorrelation (Lenke, Wellens, & Oswald, 1977). Consistent with these assumptions, the tests performed were split into the Bloom’s cognitive processes of ‘remember’ and ‘understand’. Furthermore, as we were measuring the difference between the achievement results of two groups that received distinctly different treatment methods, the reliability tests were divided into 2D and 3D participant groups. These repeated divisions caused a problem for the application of the instrument reliability test, as in each division the total number of tested items is 10 or fewer. If the number of questions (or subjects) within each group is too low, then the results of the test, as Frary puts it, ‘should be taken with a grain of salt’. Frary (2008) provides further insight as to why:
“All reliability estimates are subject to considerable error when there are small numbers of examinees or test items. If there are fewer than, say, 25 examinees or 10 items, the reliability estimate must be "taken with a grain of salt." This phenomenon is especially noticeable when there are several scrambled forms of the test, each administered to a relatively small number of examinees. Then the KR20 coefficients may fluctuate considerably from one form to another.”
As the above results show, there was considerable fluctuation in the reliability test results between the two groups. With the exception of the post-quiz ‘remember’ results, the figures varied considerably. These results appear consistent with those discussed in Chapter 5, Discussion and Conclusion: participants in both groups performed well on ‘remember’ but not on Bloom’s ‘understand’.
As Frary asserts, however, the reliability measures are inconclusive indicators under this research’s circumstances.
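The KR-20 values in Table 15 and the Cronbach’s alpha values in Table 16 can be computed directly from a participants-by-items response matrix. The sketch below is a generic illustration using the population-variance convention (under which KR-20 is exactly Cronbach’s alpha applied to 0/1 items); the function names and data are assumptions, not the study’s actual computation.

```python
import numpy as np

def kr20(items):
    """KR-20 for a (subjects x items) 0/1 response matrix (hypothetical data)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    p = items.mean(axis=0)                  # proportion correct per item
    total_var = items.sum(axis=1).var()     # population variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def cronbach_alpha(items):
    """Cronbach's alpha generalises KR-20 to non-dichotomous (e.g. Likert) items."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0).sum()      # sum of per-item population variances
    total_var = items.sum(axis=1).var()
    return (k / (k - 1)) * (1 - item_var / total_var)
```

With few items and few subjects, as the discussion above notes, both estimates fluctuate considerably from sample to sample.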
==Appendix M: Qualitative Analysis: A Sample of Participants Comments==
===Virtual World Learning Experience===
*I found learning in world is a great way to find out about things you don’t normally think about finding out about
*You’re more likely to learn things in world than go to places to find out about things
*Things I usually don't take time to learn about, I can learn about them here
*I really felt as if I was sitting in a Room of Such listening to a lecturer
*It kind of felt personal.
*Kind of soothing but not putting me to sleep kind
*The lack of pressure that comes from a more traditional classroom atmosphere
*You can see if others are in the class with you
*Feel this way is better experience then the normal online way of taking classes
*I prefer learning alone and I would definitely prefer this type of learning to going to a classroom with other students.
*Seemed better than the typical classroom experience
*This is a fantastic experiment and I believe the potential to reach people with anything that will help them become better educated is a wonderful thing.
*Top idea to get people to learn about several topics
*I liked the idea; please invite me for more lessons
*By being part of this Survey Study, I have opened a door to seeking out further Studies, as well as Classes with SL
===Campus Experience===
*It was very easy to use
*It was very well laid out
*Easier to navigate through
*I liked the way different stages
*The environment was well set out
*Very user friendly
===Format===
*2D: Like liked the layout... it showed you a picture of the different types of bridges as well as giving you plenty of information on the subject then had a summary of all of it at the end
*2D: I wish that the pictures had been interactive so I could've clicked on the different sections of the bridges and gotten an individual description
*2D: Easy to follow slides
*2D: The presentation was actually enjoyable, however I believe that for this to be a truly effective learning tool the presentation speed must be made adjustable as people may find certain topics boring and just skip through them but may wish to spend longer periods of time on other material and wish to slow down to be more attentive.
*2D: the possibility to go back or control the slideshow
*3D: The mix of the audio and the bulleted points made it easier to follow for visual
*3D: Wonderfully laid out. The visuals were great! They conveyed the most important points very well.
*3D: lots of examples
*3D: Wish there was a way I could stop the presentation or lecture and go back to review what was just said.
===Information content===
*2D: Very informative and interesting
*2D: very easy to comprehend
*2D: It was not too technical
*2D: I never gave it much thought at to the Construction of Bridges, one droves on them, over them etc, and you certainly hear in recent years of the collapse of bridges etc, I found the topic informative although a lot to digest.
*3D: It was informative.
*3D: Need more infor need more infor need more infor
*3D: I have never stopped to think about bridges before. Now how am I going to drive over a bridge without thinking about what it is?
*3D: The theory of the subject was well thought out, even though to my knowledge the subject was well informative, it could have been explained in more of laymen terms for those who really don't understand the makeup of bridges.
*3D: I found myself getting lost a bit here and there with the terminology
*3D: Overall a bit too complicated for someone with no previous knowledge coming into the presentation, but still worthwhile.
*3D: I might have liked a little better explanation of how compression and tension work at the beginning so as I could understand the physics of it a little better.
===Learning===
*2D: I liked learning something new
*2D: I got to learn something I did not know.
*2D: Learned more about bridges
*2D: It was good to learn about the understanding of bridges
*3D: learn something about a subject I never knew something about
*3D: Suddenly, unforeseeably, I was studying the physics of bridges! I could never have guessed when I woke up today that I would learn this.
*3D: What a well thought out presentation, Now that I know something about bridges. I have something new to take to Real life with me
*3D: Combined my hobby with learning
===Facets of 3D Learning===
*3D: The way the bridges could actually be seen materialized and color coded was great.
*3D: It was visually appealing versus reading a book or listening to a live lecture.
*3D: It's a great learning key.
*3D: I liked the ability to see a 3D diagram of the topic.
*3D: The examples floating in space allowed for a better view of the material
*3D: The use of "real" object as opposed to drawings helped with any problems in understanding
*3D: The images were 3D making it a little easier to get an idea of what each bridge was.
*3D: The 3D rendered models illustrating the different types of bridges & how loads were carried were a great tool.
*3D: With the help of bridge models I was able to get a better understanding about what the lecture was talking about.
*3D: Just the fact that the examples where suspended in space, allowed me a better understanding from all angles.
*3D: While it may not quite stick on the first pass, I feel as if this method DEFINITELY provided a clear, direct delivery of the subject matter. I could see this type of presentation doing much more for someone with at least a rudimentary knowledge of the subject matter.
===Instruction===
*2D: It would have been fun to have an "instructor" to ask questions of. :)
*2D: lack of contact or clarification of issues
*3D: There was no place to pause the instructor, or ask further questions about the subject matter
*3D: A live guide would've been very helpful to clear up any confusion along the way, though it isn't necessary.
*3D: The inability to ask for clarification or further explanation.
*3D: No interactive question-answer
===Focus===
====In world distractions====
*2D: Distracting avatars
*2D: my club shine glitzier owners tag got in the way
*2D: I was distracted by my own curiosity of the technology
*3D: Disruptions from others in chat
*3D: Noise or excessive gestures of certain people.
*3D: Some others in the room were very disruptive
*3D: Interruptions from people who don't take the education seriously.
*3D: It would be idea to separate people in the education process as some people make noises during the presentation that distracts from the education.
====Outside world distractions====
*2D: Just the fact it’s the weekend and so many distractions in the house
*2D: Thought it was interesting I may watch it again later, if it’s alright, my daughter kept talking to me during it and I kept getting distracted but I did try and pay attention.
*2D: I could do other things at my desk and could answer the phone!
*2D: I guess it’s not good to be able to talk to others during a class where you're supposed to learn something [yahoo messaging]
*2D: Could do things at my desk
*2D: "real life" interruptions the telephone ringing
*3D: Interruptions from real life
===Time===
*2D: Being new, it held my attention for the whole time
*2D: It went a bit slow.
*2D: Speed of the presentation was a little slow
*2D: The narrator was a bit monotone which caused me to get bored a couple of times.
*2D: I lost focus for a little.
*2D: found myself zoning out a little bit.
*2D: voice got monotonous
*3D: It actually held my attention! Quite the accomplishment if I do say so myself!
*3D: It was fast.
*3D: The soothing voice of the narrator kept me engaged.
*3D: Easy to stay concentrated
*3D: The images kept mind from wondering.
*3D: It was exceptionally quick
*3D: Just a little fast for me a time or two
*3D: There were a few times it went a little fast
===Navigation===
*2D: I didn't see the words the best way cause of the chair.
*2D: Seating made vision difference which had to be adjusted more than once
*3D: Hard to put screen right
*3D: Models that were rotating sometimes blocked the text
*3D: I had to situate my view to read the board
*3D: Display was blocked many times.
*3D: Had to peek round the 3D bridges to read the text
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
1916f50b633d914556a986b7bcf3de50d707f15d
VirtualWorldLearningReferences
0
284
315
2010-08-05T14:05:12Z
Bishopj
1
wikitext
text/x-wiki
=Introduction=
The RiskWiki book and thesis "Real Learning in Virtual Worlds" by Dianne Bishop (2008) references an extensive list of works, which is reproduced in its entirety here. The reference list also provides an outstanding bibliography on virtual worlds and the virtual-world learning space. Students of these two areas are encouraged to explore the work of the authors listed below.
=References and Bibliography=
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
Anderson Research Group (n.d.). The Revised Bloom’s Taxonomy. Accessed: Jun, 2008 Retrieved from: www.andersonresearchgroup.com/reports/TPP2.ppt
Annetta, L. A., Murray, M. R., Laird, S. G., Bohr, S. C., & Park, J. C. (2006). Serious Games: Incorporating Video Games in the Classroom. EDUCAUSE Quarterly 29(3), Accessed: Jun, 2008 Retrieved from: http://connect.educause.edu/Library/EDUCAUSE+Quarterly/SeriousGamesIncorporating/39986
Arreguin, C. (2007). Reports from the Field: Second Life Community Convention 2007 Education Track Summary. Best Practices from the Second Life Community Convention Education Track 2007, Accessed: Jun, 2008 Retrieved from: http://www.holymeatballs.org/pdfs/VirtualWorldsforLearningRoadmap_012008.pdf
Axon, S. (2008). Massively's Visual History of MMORPGs, Part I. Massively, Accessed: Jun, 2008 Retrieved from: http://www.massively.com/2008/03/31/massivelys-visual-history-of-mmorpgs-part-i/
Bailenson, J. N., Yee, N., Blascovich, J., Beall, A. C., Lundblad, N., & Jin, M. (2007). The use of immersive virtual reality in the learning sciences: Digital transformations of teachers, students, and social context. The Journal of the Learning Sciences.
Bainbridge, W. S. (2007). The Scientific Research Potential of Virtual Worlds. Science, 317(5837), 472 - 476.
Bartle, R. (1990). Interactive Multi-User Computer Games. Accessed: Jun, 2008 Retrieved from: http://www.mud.co.uk/richard/imucg0.htm
Bartle, R. (2003). Designing Virtual Worlds. Indianapolis, USA: New Riders.
Beedle, J. B., & Wright, V. H. (2007). Perspectives from Multiplayer Video Gamers. In D. Gibson (Ed.), Games and Simulations in Online Learning: Research & Development Frameworks. Hershey PA, USA: Idea Group Inc
Bell, L. (2006). Dobbit Do program at Second Life Library. Second Life Library 2.0, Retrieved from: http://secondlifelibrary.blogspot.com/2006/06/dobbit-do-program-at-second-life.html
Bellman, K., & Landauer, C. (2000). Playing In The Mud: Virtual Worlds Are Real Places. Applied Artificial Intelligence, 14(1), 93-123.
Benford, S., Greenhalgh, C., Reynard, G., Brown, C., & Koleva, B. (1998). Understanding and constructing shared spaces with mixed-reality boundaries. ACM Transactions on Computer-Human Interaction 5(3), 185-223 Accessed: Jun, 2008 Retrieved from: http://www.crg.cs.nott.ac.uk/research/publications/papers/TOCHI98.pdf
Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving Seamlessly between Reality and Virtuality. IEEE Computer Graphics and Applications, 21(3), 6-8.
Biocca, F., & Delaney, B. (1995). Immersive virtual reality technology. In Communication in the age of virtual reality (pp. 57-124): Lawrence Erlbaum Associates, Inc.Accessed: May, 2008 Retrieved from: http://www.mindlab.org/images/d/DOC713.pdf
Blizzard Entertainment Inc (2008). World of Warcraft Surpasses 11 million Subscribers Worldwide. Retrieved from: http://www.blizzard.com/us/press/081028.html
Bloom, B. S., Englehart, M. D., Furst, M., Hill, E. J., & Krathwohl, D. R. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook 1: Cognitive Domain. New York: David McKay Company, Inc.
Bowery, J. (2001). Spasim (1974) The First First-Person-Shooter 3D Multiplayer Networked Game. Accessed: April, 2008 Retrieved from: http://www.geocities.com/jim_bowery/spasim.html
Briggs, J. C. (1996). The Promise of Virtual Reality. The Futurist 30(5), Accessed: May, 2008 Retrieved from: http://project.cyberpunk.ru/idb/virtualreality_promise.html
Brookhaven National Laboratory (n.d.). The First Video Game. Accessed: Jun, 2008 Retrieved from: http://www.bnl.gov/bnlweb/history/higinbotham.asp; also see article http://gamersquarter.com/tennisfortwo/
Brooks, F. P., Jr. (1999). What's real about virtual reality? Computer Graphics and Applications, IEEE, 19(6), 16-27.
Brown, J. D. (1997). Skewness and Kurtosis. Shiken: JALT Testing & Evaluation SIG Newsletter 1(1), 1-20 Accessed: Jan, 2009 Retrieved from: http://jalt.org/test/bro_1.htm
Budge, L. D., Strini, R. A., Dehncke, R. W., & Hunt, J. A. (1998). Synthetic Theater of War (STOW) 97 Overview (98S-SIW-086). Paper presented at the Spring Simulation Interoperability Workshop, Orlando, FL.Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/index.php?tg=articles&idx=More&topics=46&article=199
Bulkley, K. (2007). Today Second Life, tomorrow the world. Interview: Philip Rosedale. The Guardian, Accessed: Jun, 2008 Retrieved from: http://www.guardian.co.uk/technology/2007/may/17/media.newmedia2
Burdea, G. C., & Coiffet, P. (2003). Virtual Reality Technology (2nd ed.): Wiley-IEEE Press.
Burns, R. B. (2000). Introduction to Research Methods (4th ed.). Frenchs Forest, NSW, Australia: Longman.
Bye, C. (2008). Legends of the Industry: An Interview with Randy Farmer and Chip Morningstar. March 25th, 2008, Accessed: Jun, 2008 Retrieved from: http://www.tentonhammer.com/node/29292
Carless, S. (2006). Australian Defence Force Licenses Virtual Battlespace. Serious Games Source April 18, Accessed: Jun, 2008 Retrieved from: http://www.seriousgamessource.com/item.php?story=8955
Carlson, W. (2003). Section 17: Virtual Reality and Artificial Environments. In A Critical History of Computer Graphics and Animation: The Ohio State University.Accessed: May, 2008 Retrieved from: http://design.osu.edu/carlson/history/lessons.html
Carroll, L. (1865). Alice's Adventures in Wonderland. London: Macmillan.
Carroll, L. (1871). Through the Looking-Glass. London: Macmillan.
Castronova, E. (2001). Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier. CESifo Working Paper Series No. 618, Accessed: May, 2008 Retrieved from: http://ssrn.com/paper=294828
Cavazza, F. (2007). Virtual Universes Landscape. Accessed: May, 2008 Retrieved from: http://www.fredcavazza.net/2007/10/04/virtual-universes-landscape/
Chesher, C. (1994). Colonizing Virtual Reality. Construction of the Discourse of Virtual Reality, 1984-1992. Cultronix (1), Retrieved from: http://cultronix.eserver.org/chesher/
Churches, A. (2008). Bloom's Taxonomy Blooms Digitally. Educators' eZine, Accessed: Jun, 2008 Retrieved from: http://www.techlearning.com/showArticle.php?articleID=196605124; or wiki http://edorigami.wikispaces.com/
Clark, R. E. (1983). Reconsidering Research on Learning from Media. Review of Educational Research 53(4),
Clark, R. E. (1994). Media Will Never Influence Learning. Educational Technology Research and Development, 42(2), 21-29.
Clark, S., & Maher, M. L. (2006). Collaborative Learning in A 3D Virtual Place: Investigating the Role of Place in a Virtual Learning Environment. Advanced Technology for Learning 3(4), Accessed: Jun, 2008 Retrieved from: http://web.arch.usyd.edu.au/~mary/Pubs/2006pdf/ATL_MLM_SC.pdf
Clarke, R. (2000). Robert Gagné's Nine Steps of Instruction. ISD - Development, Accessed: Jun, 2008 Retrieved from: http://www.nwlink.com/~donclark/hrd/learning/development.htm
Coleridge, S. T. (1817). Biographia Literaria (2nd ed.): Sara Coleridge.
Colley, S. (n.d.). Stories from the Maze War 30 Year Retrospective. Accessed: Jun, 2008 Retrieved from: http://www.digibarn.com/history/04-VCF7-MazeWar/stories/colley.html
Combs, N. (2004). A virtual world by any other name? , Accessed: 1 April 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2004/06/a_virtual_world.html
CompuServe. (2007). MUD1. Accessed: Oct, 2007 Retrieved from: http://www.british-legends.com/
Computer History Museum. (n.d.). Spacewar! Accessed: Mar, 2008 Retrieved from: http://www.computerhistory.org/pdp-1/play_spacewar.html; Also see: http://www.wheels.org/spacewar/index.html
Corbit, M. (2002). Building Virtual Worlds for Informal Science Learning (SciCentr and SciFair) in the Active Worlds Educational Universe (AWEDU). Presence: Teleoperators & Virtual Environments, 11(1), 55-67.
Corry, M. (1996). Gagne's Theory of Instruction. Dr. Donald Cunningham Spring, 540 Accessed: Jun, 2008 Retrieved from: http://home.gwu.edu/~mcorry/corry1.htm
Cosby, L. N. (1999). SIMNET: An Insider's Perspective. SISO News 2(1g), Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/webletter/siso/Iss_39/art_202.htm
Dabbagh, N. (2006). The Instructional Design Knowledge Base. Instructional Technology Program, Accessed: Jun, 2008 Retrieved from: http://classweb.gmu.edu/ndabbagh/Resources/IDKB/models_theories.htm
Dalgarno, B. J. (2004). Characteristics of 3D Environments and Potential Contributions to Spatial Learning. University of Wollongong
Damer, B. (2004). First experimental post for March guest column. Accessed: Jun, 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2007/03/march_topics.html
Damer, B. (2007). Meeting in the ether. Interactions, 14(5), 16-18.
Dave, R. H. (1967). Psychomotor Domain. Paper presented at the International Conference of Educational Testing.
Dave, R. H. (1970). Psychomotor levels In R. J. Armstrong (Ed.), Developing and Writing Behavioural Objectives. Tucson AZ: Educational Innovators Press
Dede, C. (1995). The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds. Educational Technology, Research and Development, 35(5), 46-52.
Dede, C. (2004). Enabling Distributed Learning Communities Via Emerging Technologies -- Part Two. T H E Journal, 32(3), 16-26.
Denzin, N. (1978). Sociological Methods: A Sourcebook.
Department of the Army (2008). America's Army: The Making Of. Accessed: Jun, 2008 Retrieved from: http://www.americasarmy.com/intel/makingof.php
Deuchar, S., & Nodder, C. (2003). The Impact of Avatars and 3D Virtual World Creation on Learning. Paper presented at the Proceedings of the 16th Annual NACCQ, Palmerston North New Zealand. Retrieved from: www.naccq.ac.nz
Dickey, M. D. (1999). 3D Virtual Worlds and Learning: An analysis of the impact of design affordances and limitations in Active Worlds, Blaxxun Interactive, and Onlive! Traveler; and a study of the implementation of Active Worlds for formal and informal education. Dissertation; The Ohio State University, from http://mchel.com/Research.htm
Dickey, M. D. (2003). Teaching in 3D: Pedagogical Affordances and Constraints of 3D Virtual Worlds for Synchronous Distance Learning. Distance Education, 24(1), 105-122.
Dickey, M. D. (2005). Three-dimensional virtual worlds and distance learning: Two case studies of Active Worlds as a medium for distance education. British Journal of Educational Technology, 36(2), 439.
DONCIO, OPNAV N79, CNET, Naval Postgraduate School, Marine Corps Training and Education Command, & Marine Corps Distance Learning Center (2008). Learning in a Virtual World, Accessed: Jun, 2008 Retrieved from: http://wiki.nasa.gov/cm/wiki/?id=2731
Edutech Wiki. (2009). The Media Debate. Accessed: Jan, 2009 Retrieved from: http://edutechwiki.unige.ch/en/The_media_debate
Electronic Arts (2007). Ultima Online: Kingdom Reborn FAQ. Accessed: May, 2008 Retrieved from: http://www.uo.com/uokr/UOKR/uokr_faq.shtml
Farmer, F. R. (1992). Social Dimensions of Habitat's Citizenry. In C. E. Loeffler & T. Anderson (Eds.), The Virtual Reality Casebook. New York: Van Nostrand Reinhold
Fielding, N. G., & Fielding, J. L. (1986). Linking Data: Qualitative and Quantitative Methods in Social Research.
Fife-Schaw, C. (2007). How do I test the normality of a variable’s distribution? , Accessed: Jan, 2009 Retrieved from: http://www.psy.surrey.ac.uk/cfs/p8.htm
Foley, P., & Gifford, T. (2002). An Introduction to SEDRIS. Paper presented at the SEDRIS Technology Conference. Retrieved from: http://www.sedris.org/stc/2002/tu/intro/sld001.htm
Frary, R. B. (2008). Testing Memo 8: Reliability of Test Scores. Virginia Polytechnic Institute and State University (Jan, 2009), Retrieved from: http://www.testscoring.vt.edu/memo08.html
Friedl, M. (2002). Chapter One: Learning and Inspiration. In C. R. M. Inc (Ed.), Online Game Interactivity Theory. Hingham, Massachusetts
Gabrisch, C., & Burgess, G. (2005). The COA-Sim JSAF Environment in Support of Joint Military Training and Exercises. Paper presented at the SimTecT. Accessed: Jun, 2008 Retrieved from: http://www.siaa.asn.au/library_simtect_2005.html
Gagne, R. M. (1985). The Conditions of Learning and the Theory of Instruction (4th ed.). New York: Holt, Rinehart, and Winston.
Garson, G. D. (2000). The role of information technology in quality education. In Social dimensions of information technology: issues for the new millennium (pp. 177-197): IGI Publishing
Gartner (2007). Media Relations. Accessed: Oct, 2007 Retrieved from: http://www.gartner.com/it/page.jsp?id=503861
Gehorsam, R. (2003). The coming revolution in massively multiuser persistent worlds. Computer, 36(4), 93-95.
Gibson, W. (1984). Neuromancer. Canada: Ace Books.
Gikas, J., & Van Eck, R. (2004). Integrating video games in the classroom: Where to begin? Paper presented at the National Learning Infrastructure Initiative 2004 Annual Meeting, San Diego, CA. Accessed: May, 2008 Retrieved from: http://www.educause.edu/ir/library/pdf/NLI0431a.pdf
Goldberg, M. (2002). The History of Computer Gaming Part 5 - PLATO Ain't Just Greek. Classic Gaming, Accessed: April, 2008 Retrieved from: http://classicgaming.gamespy.com/View.php?view=Articles.Detail&id=324
Gonzalez, D. (2007). Second Life for Digital Entertainment Technology Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Graphpad (2009). How useful are normality tests? , Accessed: Jan, 2009 Retrieved from: http://www.graphpad.com/library/BiostatsSpecial/article_197.htm
Grau, O. (1999). Into the Belly of the Image: Historical Aspects of Virtual Reality. Leonardo, 32(5), 365-371.
Grøstad, O. F. (2007). Define: virtual world. Accessed: Apr, 2008 Retrieved from: http://worldtheory.blogspot.com/2007/06/define-virtual-world.html
Hardy, D. R., Allen, E. C., Adams, K. P., Peters, C. B., Peterson, L. J., Cannon, M. A., et al. (2001). Advanced Distributed Simulation: Decade in Review and Future Challenges. Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=A434191&Location=U2&doc=GetTRDoc.pdf
Harrow, A. J. (1972). A taxonomy of the psychomotor domain. New York: David McKay Company, Inc.
Harvard's Berkman Center for Internet and Society. (2007). Cyber One: Law in the Court of Public Opinion. Accessed: 20/10/2007 Retrieved from: http://sleducation.wikispaces.com/educationaluses#distance
Heeter, C. (1992). Being there: The subjective experience of presence. Presence: Teleoperators & Virtual Environments, 1(2), 262– 271.
Heeter, C. (2003). Reflections on Real Presence. Presence: Teleoperators & Virtual Environments 12(4), 335-345 Accessed: Jun, 2008 Retrieved from: http://commtechlab.msu.edu/publications/files/presence2003.pdf
Heilig, M. (1955). The Cinema of the Future, reprinted. In R. Packer & K. Jordan (Eds.), Multimedia: From Wagner to Virtual Reality (expanded edition), 2002 (pp. 239-251). New York/London: W. W. Norton and Company
Holmberg, J. (2003). Ideals of Immersion in Early Cinema. Cinémas 14(1), 129-147 Retrieved from: http://www.erudit.org/revue/cine/2003/v14/n1/008961ar.pdf
Howard, R. E. (1932). The Phoenix on the Sword. In Weird Tales (Vol. December). Chicago: Popular Fiction Publishing Co
Hu, S.-Y., & Liao, G.-M. (2004). Network and System Support for Games: Scalable Peer-to-Peer Networked Virtual Environment. Paper presented at the 3rd ACM SIGCOMM workshop on Network and system support for games, Portland, Oregon, USA Accessed: Jun, 2008 Retrieved from: http://www.phys.sinica.edu.tw/~statphys/publications/2004_full_text/S_Y_Hu_Proc_ACM_SIGCOMM_2004_on_NetGame_p129-133(2004).pdf
Jacoby, J., & Matell, M. S. (1971). Three-Point Likert Scales Are Good Enough. Journal of Marketing Research, 8(4), 495-500.
Jamison, J. (2007). Two Years of Introducing Educators to Second Life in 60 Minutes, or: Tips for Dinosaur Wrangling. Paper presented at the Second Life Best Practices in Education: Teaching, Learning, and Research 2007 International Conference
Jennings, S. (2007). Virtually a World. Accessed: 1 April 2008 Retrieved from: http://brokentoys.org/2007/06/15/virtually-a-world/
Jones, G., & Hicks, J. (2004). 3D Online Learning Environments for Emergency Preparedness and Homeland Security Training. Paper presented at the World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education, Washington, D.C. Retrieved from: http://courseweb.unt.edu/gjones/pdf/Jones_elearn04.pdf
Joseph, B. (2007). Global Kids, Inc.’s Best Practices in Using Virtual Worlds For Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention, Boston.
Kearsley, G. (2008). Conditions of Learning (R. Gagne). Explorations in Learning & Instruction: The Theory Into Practice Database Accessed: Jun, 2008 Retrieved from: http://tip.psychology.org/gagne.html
Keegan, M. (1997). A Classification of MUDs. The Journal of Virtual Environments 2(2), Accessed: Mar, 2008 Retrieved from: http://www.brandeis.edu/pubs/jove/HTML/v2/keegan.html
Kelly, K. (1995). Singular Visionary. Wired June(3.06), Retrieved from: http://www.wired.com/wired/archive/3.06/vinge.html
King, B. (2003). Educators Turn to Games for Help. Wired, Accessed: Jun, 2008 Retrieved from: http://www.wired.com/gaming/gamingreviews/news/2003/08/59855
Kingdom of Drakkar. (1992-Current). Further Reading. Accessed: May, 2008 Retrieved from: Official: http://www.kingdomofdrakkar.com/; Historical: http://www.kingdomofdrakkar.com/forums/viewtopic.php?f=38&t=6197
Kish, S. (2007). Second Life: Virtual Worlds and the Enterprise. Accessed: May, 2008 Retrieved from: http://www.susankish.com/susan_kish/vw_secondlife.html
Klein, H. K., & Myers, M. D. (1999). A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly, 23(1), 67-93.
Klich, R. (2007). Multimedia Theatre in the Virtual Age. University of New South Wales, Sydney. Retrieved from: http://www.library.unsw.edu.au/~thesis/adt-NUN/uploads/approved/adt-NUN20080304.114128/public/02whole.pdf
Kofi, B. A., Svihla, V., Gawel, D., & Bransford, D. J. (2007). Learning about Adaptive Expertise in a Multi-User Virtual Environment. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention, Boston.
Koster, R. (2002). Online World Timeline. Accessed: Jun, 2008 Retrieved from: http://www.raphkoster.com/gaming/mudtimeline.shtml
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development 42(2), 7-9
Krathwohl, D. R. (2002). A Revision of Bloom's Taxonomy: An Overview - Benjamin S. Bloom, University of Chicago. Theory Into Practice 41(4), 212-218 Accessed: Jun, 2008 Retrieved from: http://findarticles.com/p/articles/mi_m0NQM/is_4_41/ai_94872707
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of Educational Objectives. The Classification of Educational Goals, Handbook II: Affective Domain. New York: David McKay Company, Inc.
Kribble, M. (2007). Getting a Second Life: Virtual Harvard. Law Library E-Newsletter January, Accessed: 20/10/2007 Retrieved from: http://www.nsulaw.nova.edu/library_tech/library/publications/bookdocket/2007/Jan2007.pdf
Squire, K., Barnett, M., Grant, J. M., & Higginbotham, T. (2004). Electromagnetism Supercharged! Learning Physics with Digital Simulation Games, Proceedings of the 6th International Conference on Learning Sciences (pp. 513-520). Santa Monica, California: International Society of the Learning Sciences.
KZERO Research (2007). There.com vs Second Life: demographics. Accessed: Jun, 2008 Retrieved from: http://www.kzero.co.uk/blog/?p=961
Lang, T., Maclntyre, B., & Zugaza, I. J. (2008). Massively Multiplayer Online Worlds as a Platform for Augmented Reality Experiences. Paper presented at the Virtual Reality Conference, 2008. VR '08. IEEE.
Laurel, B. (1991). Computers as theatre. New York: Addison-Wesley.
Lee, A. (1991). Integrating Positivist And Interpretive Approaches To Organizational Research. Organization Science, 2(4), 342.
Lee, S.-Y., Kim, I.-J., Ahn, S. C., Lim, M.-T., & Kim, H.-G. (2005). Intelligent 3D Video Avatar for Immersive Telecommunication. In S. Zhang & R. Jarvis (Eds.), AI 2005 (pp. 726-735). Berlin Heidelberg: Springer-Verlag. Accessed: Jun, 2008 Retrieved from: http://www.imrc.kist.re.kr/~kij/LNCS_2005.pdf
Lenke, J. M., Wellens, B., & Oswald, J. (1977). Differences Between Kuder-Richardson Formula 20 and Formula 21 Reliability Coefficients for Short Tests with Different Item Variabilities. Paper presented at the Annual Meeting of the American Educational Research Association, New York, USA. Accessed: Jan, 2009 Retrieved from: http://eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED141411&ERICExtSearch_SearchType_0=no&accno=ED141411
Lenoir, T. (2003). Programming Theatres of War: Gamemakers as Soldiers. In R. Latham (Ed.), Bombs and Bandwidth: The Emerging Relationship Between IT and Security (pp. 175-198). New York: The New Press. Accessed: Jun, 2008 Retrieved from: http://www.stanford.edu/dept/HPS/TimLenoir/Publications/Lenoir_TheatresOfWar.pdf
Leonard, B. (Director), S. King, B. Leonard & G. Everett (Writer), G. Everett (Producer), (1992). The Lawnmower Man [Motion Picture]: New Line Cinema.
Levine, A. (2007). Avatars and Appearance: What’s your ‘dress code’? NMC Teachers Buzz. NMC Campus Observer, Retrieved from: http://sl.nmc.org/2007/07/19/dress-code/
Lewis, D. (2001). Objectivism vs. Constructivism: The Origins of this Debate and the Implications for Instructional Designers. EME 6613 Development of Technology-Based Instruction, Accessed: Jun, 2008 Retrieved from: http://www.coedu.usf.edu/agents/dlewis/publications/Objectivism_vs_Constructivism.htm
Linden, C., & Linden, P. (2008). Discussion on Education in Second Life, What’s Going On and How To Get Involved. “Inside the Lab” Podcast, a Discussion on Education in Second Life", Accessed: Jun, 2008 Retrieved from: http://blog.secondlife.com/2008/06/02/inside-the-lab-podcast-a-discussion-on-education-in-second-life/
Linden Lab (2008a). Economic Statistics: Graphs. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/whatis/economy-graphs.php
Linden Lab (2008b). Second Life: Economic Statistics. Accessed: Dec, 2008 Retrieved from: http://secondlife.com/whatis/economy_stats.php
Linden Lab (2008c). Second Life: System Requirements. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/support/sysreqs.php
Lisberger, S. (Director), S. Lisberger & B. MacBird (Writer), D. Kushner (Producer), (1982). Tron [Motion Picture]: Buena Vista Pictures.
Lord of the Rings Online (2007). Online Virtual World. Accessed: Jun, 2008 Retrieved from: http://www.lotro.com/
Lowood, H. E. (2008). Virtual Reality. Encyclopaedia Britannica Online, Accessed: Jun, 2008 Retrieved from: http://search.eb.com/eb/article-9001382
Macmillan, I. (Director), (Writer), (Producer), (2008). The Worlds of Fantasy: The Epic Imagination. England: Blast! Films Production for BBC.
Mania, K., & Chalmers, A. (2001). The Effects of Levels of Immersion on Memory and Presence in Virtual Environments: A Reality Centered Approach. CyberPsychology & Behavior, 4(2), 247-264.
Markowitz, M. (2000). Spacewar: The first computer video game. Really! , Retrieved from: http://www3.sympatico.ca/maury/games/space/spacewar.html
Martinez, L. M., Martinez, P., & Warkentin, G. (2007). A First Experience on Implementing a Lecture on Second Life. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention, Boston.
Mazuryk, T., & Gervautz, M. (1996). Virtual Reality History, Applications, Technology and Future. Retrieved from: http://www.cg.tuwien.ac.at/research/publications/1996/mazuryk-1996-VRH
McLellan, H. (2004). Virtual realities. In D. H. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (2nd ed., pp. 745-784). Mahwah, NJ: Lawrence Erlbaum Associates. Accessed: Jun, 2008 Retrieved from: http://www.aect.org/edtech/17.pdf
Mergel, B. (1998). Instructional Design & Learning Theory. Accessed: Jun, 2008 Retrieved from: http://www.usask.ca/education/coursework/802papers/mergel/mergel.pdf
Meridian 59. (1996-2000 & 2002-Current). Further Reading. Retrieved from: Official Site: http://meridian59.neardeathstudios.com/; General: http://en.wikipedia.org/wiki/Meridian_59; http://www.massively.com/photos/massivelys-visual-history-of-mmorpgs-part-i/727035/
MetaMersion. You're in the Game. Retrieved from: http://www.metamersion.com/index.html
Milgram, P., & Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. E77-D(12), Accessed: May, 2008 Retrieved from: http://vered.rose.utoronto.ca/people/paul_dir/IEICE94/ieice.html
Miller, D. C., & Thorpe, J. A. (1995). SIMNET: The Advent Of Simulator Networking. Proceedings of the IEEE, 83(8), 1114-1123.
Monash University. (2008). Preparing Educational Objectives. Accessed: Jun, 2008 Retrieved from: http://www.calt.monash.edu.au/staff-teaching/support/objectives.html
Moriarty, D. (2008). StatCat (version 3.6). Accessed: Jan, 2009 Retrieved from: http://www.csupomona.edu/~djmoriarty/b211/index.html#statcat
Morningstar, C., & Farmer, R. (1990). The Lessons of Lucasfilm's Habitat. Paper presented at the First International Conference on Cyberspace, University of Texas at Austin. Accessed: Oct, 2007 Retrieved from: http://www.fudco.com/chip/lessons.html
Mulligan, J. (2000). History of Online Games Part III. Imaginary Realities (April), Retrieved from: http://www.tharsis-gate.org/articles/imaginary/HISTOR~1.HTM
Mulligan, J. (2002). Talkin’ ‘bout My… Generation. Biting The Hand 17(22-JAN), Accessed: Jun, 2008 Retrieved from: http://www.skotos.net/articles/BTH_17.shtml
Nash, S. S. (2007). Behaviorism vs. Constructivism as Applied to Online Learning. XplanaZine, Accessed: Jun, 2008 Retrieved from: http://www.xplanazine.com/2007/09/behaviorism-vs-constructivism-as-applied-to-online-learning
Neuman, W. L. (2006). Social research methods (6th ed.). Boston: Pearson Education, Inc.
NeverWinter Nights (AOL). (1991-1997). Further Reading. Accessed: May, 2008 Retrieved from: http://en.wikipedia.org/wiki/Neverwinter_Nights_(AOL_game); http://www.bladekeep.com/nwn/index2.htm
NIST (2006). e-Handbook of Statistical Methods: Levene Test for Equality of Variances. Accessed: Jan, 2009 Retrieved from: http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm
O'Donnell, D. (2003). The Annotated NetHack File. Retrieved from: http://www.spod-central.org/~psmith/nh/anhftime.html
Olsen, W. (2004). Triangulation in Social Research: Qualitative and Quantitative Methods Can Really Be Mixed. In Developments in Sociology: Causeway Press Retrieved from: http://www.ccsr.ac.uk/staff/triangulation.pdf
Onwuegbuzie, A. J. (2002). Why can't we all get along? Towards a framework for unifying research paradigms. Education, 122(3), 518-530.
Orlikowski, W. J., & Baroudi, J. J. (1991). Studying Information Technology in Organizations: Research Approaches and Assumptions. Information Systems Research, 2(1), 1-28.
Oxford Dictionary (2nd ed.). (1989). Oxford University Press.
Oxford Dictionary (Vol. 3). (1997). Oxford University Press.
Packer, R., & Jordan, K. (2002). Multimedia: From Wagner to Virtual Reality (Expanded ed.). New York: W. W. Norton and Company.
Patel, K., Bailenson, J., Jung, S., Diankov, R., & Bajcsy, R. (2006). The effects of fully immersive virtual reality on the learning of physical tasks. Paper presented at the International Workshop on Presence, Cleveland, Ohio, USA. Accessed: Jun, 2008 Retrieved from: http://www.cs.washington.edu/homes/kayur/papers/ispr06.pdf
Pearson, J. L. (2002). Shamanism and the Ancient Mind: A Cognitive Approach to Archaeology: AltaMira Press.
Pellett, D. Open letter to "Classic Gaming .com": Re: your web page, titled "The History of Computer Gaming". The Game of Dungeons (dnd): Gary Whisenhunt, Ray Wood, Dirk Pellett, and Flint Pellett's DND, Retrieved from: http://www.armory.com/~dlp/dnd1.html
Petrich, L. (n.d.). Real-Time-3D Game-Engine Taxonomy. Accessed: Jun, 2008 Retrieved from: http://homepage.mac.com/lpetrich/www/games/GET.html
Pimentel, K. K., & Teixeira, K. K. (1994). Virtual reality. New York, USA: Windcrest Books.
Purbrick, J., & Greenhalgh, C. (2002). An extensible event-based infrastructure for networked virtual worlds. Paper presented at the Virtual Reality, 2002. Proceedings. IEEE.
Ray, J. (2008). Backwards Compatible - How We Got Connected. ABC Good Games Stories, Accessed: April 2008 Retrieved from: http://www.abc.net.au/tv/goodgame/stories/s2171457.htm
Reynolds, R. (2008). VW Taxonomy Q1 ‘08. Accessed: 1 April 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2008/03/vw-taxonomy-q1.html.
Rheingold, H. (1992). Virtual reality. London: Mandarin Paperback.
Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. New York: Harper Collins.
Richardson, H. (2005). Postmodernism: A Hobbit’s View of Information Systems Research Methodology. Paper presented at the 4th International Critical Management Studies Conference, University of Cambridge, Cambridge, UK. Accessed: 10/9/2007 Retrieved from: http://www.mngt.waikato.ac.nz/ejrot/cmsconference/2005/
Robson, S. (2008). US Army to Invest $50M in Combat Training Games. Stars and Stripes (Nov 2008), Retrieved from: http://www.stripes.com/article.asp?section=104&article=59009
Rolland, J., & Hua, H. (2005). Head-Mounted Display Systems. Encyclopedia of Optical Engineering, 1 - 14.
Rosenblum, L. J. (1995). Alice: rapid prototyping for virtual reality. Computer Graphics and Applications, IEEE, 15(3), 8-11.
Russell, T. L. (2001). No Significant Difference Phenomenon (5 ed.). North Carolina State University: IDECC.
Schmidt, M., Kinzer, C., & Greenbaum, I. (2007). Exploring Virtual Education: First Hand Account of 3 Second Life Classes. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention, Boston.
Schroeder, R. (1997). Networked Worlds: Social Aspects of Multi-User Virtual Reality Technology. Sociological Research Online, 2(4).
Schroeder, R. (2006). Being There Together and the Future of Connected Presence. Presence: Teleoperators & Virtual Environments, 15(4), 438-454.
Schuemie, M. J., Straaten, P. V. D., Krijn, M., & Mast, C. A. P. G. V. D. (2001). Research on Presence in Virtual Reality: A Survey. CyberPsychology & Behavior 4(2), Accessed: May, 2008 Retrieved from: http://graphics.tudelft.nl/~vrphobia/surveypub.pdf
Shadow of Yserbius. (1992-1996). Further Reading. Accessed: May, 2008 Retrieved from: http://www.syntax2000.co.uk/issues/; http://www.oldgames.nu/PC/Shadow_of_Yserbius/2085/; http://en.wikipedia.org/wiki/The_Shadow_of_Yserbius;
Sheridan, T. B. (1992). Musings on telepresence and virtual presence. Presence: Teleoperators & Virtual Environments, 1(1), 120-126.
Sheth, R. (2003). Avatar Technology: Giving a Face to the e-Learning Interface. [The eLearning Guild]. The eLearning Developers' Journal, August.
Siegle, D. (2008). The Principles and Methods of Educational Research. Accessed: Jan, 2009 Retrieved from: http://www.gifted.uconn.edu/Siegle/research/Instrument%20Reliability%20and%20Validity/Reliability.htm
Simpson, E. J. (1972). The classification of educational objectives in the psychomotor domain. The Psychomotor Domain (Vol. 3). Washington, DC: Gryphon House.
SimTeach. (2008). Universities, Colleges & Schools in Second Life. Accessed: Jun, 2008 Retrieved from: http://www.simteach.com/wiki/index.php?title=Institutions_and_Organizations_in_SL
Slater III, W. F. (2002). Internet History and Growth. Chicago Chapter of the Internet Society, Accessed: April, 2008 Retrieved from: http://www.isoc.org/internet/history/
Slater, M. (1999). Measuring Presence: A Response to the Witmer and Singer Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 8(5), 560-565.
Slater, M., & Usoh, M. (1993). Presence in immersive virtual environments. Paper presented at the Virtual Reality Annual International Symposium, 1993., 1993 IEEE.
Slater, M., & Usoh, M. (1994). Representation Systems, Perceptual Position and Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 2(3), 221-233.
Slater, M., & Wilbur, S. (1997). A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 6(6).
Slator, B. M., Borchert, O., Brandt, L., Chaput, H., Erickson, K., Groesbeck, G., et al. (2007). From Dungeons to Classrooms: The Evolution of MUDs as Learning Environments. In The Evolution of Teaching and Learning Paradigms (pp. 119-160): Springer-Verlag
Small, D., & Small, S. (1984). PLATO RISING. Online learning for Atarians 3(3), 36-87 Retrieved from: http://www.atarimagazines.com/v3n3/platorising.html
Smith, A. (1999). COLLABORATION: A Global Survey of Institutions and Programs in Virtual World Cyberspace. Retrieved from: http://www.ccon.org/vlearn/collab.htm
STATGRAPHICS Centurion (2009). Analysis Software. Retrieved from: http://www.statgraphics.com/
Stephenson, N. (1992). Snow Crash. New York: Bantam Spectra Book.
Steuer, J. (1992). Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communications, 42(4), 73-93.
Sun Microsystems (2008). Current Reality and Future Vision Open Virtual Worlds (White Paper). January, Accessed: 13 March 2008 Retrieved from: http://www.sun.com/service/applicationserversubscriptions/OpenVirtualWorld.pdf
Sutherland, I. (1965). The Ultimate Display. Paper presented at the International Federation of Information Processing. Retrieved from: http://www.cs.utah.edu/classes/cs6360/Readings/UltimateDisplay.pdf
Sutherland, I. (1968). A Head-Mounted Three-Dimensional Display. Paper presented at the Proceedings of the AFIPS Fall Joint Computer Conference, Washington, D.C.
Terdiman, D. (2007). Tech titans seek virtual-world interoperability. CNET News.com, Accessed: Jun, 2008 Retrieved from: http://news.cnet.com/Tech-titans-seek-virtual-world-interoperability/2100-1043_3-6213148.html
The New Media Consortium, & EDUCAUSE (2007). The Horizon Report. Accessed: Nov, 2007 Retrieved from: http://www.nmc.org/pdf/2007_Horizon_Report.pdf
Tiernan, T. R. (1996). Synthetic Theater of War (STOW) Engineering Demonstration-1A (ED-1A) Analysis Report (ADA315093). Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA315093
Tolkien, J. R. R. (1937). The Hobbit. United Kingdom: Allen and Unwin.
Tolkien, J. R. R. (1954, 1955). The Lord of the Rings. United Kingdom: Allen and Unwin.
Ultima Online. (1997-Current). Further Reading. Accessed: Jun, 2008 Retrieved from: http://www.uoherald.com/news/;
Unger, J. M. (1979). Kanamajiri Editing and the Plato Computer-Based Education System. The Journal of the Association of Teachers of Japanese, 14(2), 141-156.
University of Washington (2008). Instructional Design Approaches. Accessed: Jun, 2008 Retrieved from: http://depts.washington.edu/eproject/Instructional%20Design%20Approaches.htm
US Joint Forces Command. (2008). Joint Semi-Automated Forces (JSAF). Accessed: Jun, 2008 Retrieved from: http://www.jfcom.mil/about/fact_jsaf.html
Van Dam, A., Forsberg, A. S., Laidlaw, D. H., LaViola, J. J. J., & Simpson, R. M. (2000). Immersive VR for scientific visualization: a progress report. Computer Graphics and Applications, IEEE, 20(6), 26-52.
VCampus Corporation. (2008). cyber1.org. Accessed: April, 2008 Retrieved from: http://www.cyber1.org/
Vinge, V. (1981). True Names. Binary Star Number 5: Dell. Reprinted in True Names and Other Dangers, Vernor Vinge, Baen Books, 1987.
Vivekananda Centre (2008). Hinduism for Schools. Retrieved from: http://www.vivekananda.btinternet.co.uk/secondaryschoolspage1.htm
Wachowski, A., & Wachowski, L. (Director), (Writer), J. Silver (Producer), (1999). The Matrix [Motion Picture]: Warner Bros, Village Roadshow Pictures.
Wagner, R. (1849). The Artwork of the Future (Das Kunstwerk der Zukunft), Accessed: April, 2008 Retrieved from: http://users.belgacom.net/wagnerlibrary/prose/wagartfut.htm
Walker, J. (1990). Through the Looking Glass. In B. Laurel (Ed.), The Art of Human-Computer Interface Design: Addison-Wesley.
Walsham, G. (1995). The Emergence of Interpretivism in IS Research. Information Systems Research, 6(4), 376-394.
Wang, C.-S., & Tzeng, Y.-R. (2007). Framework for Bloom's Knowledge Placement in Computer Games. Paper presented at the Digital Game and Intelligent Toy Enhanced Learning, 2007. DIGITEL '07. The First IEEE International Workshop.
Weber, R. (2004). The Rhetoric of Positivism Versus Interpretivism: A Personal View. MIS Quarterly, 28(1), iii-xii.
West Virginia University. (2008). The Looking Glass Project. Accessed: April, 2008 Retrieved from: http://clc.as.wvu.edu:8080/clc/projects/alice/document_view?month:int=5&year:int=2008
Wikipedia. (2008a). The Manhole. Accessed: Jun, 2008 Retrieved from: http://en.wikipedia.org/wiki/The_Manhole
Wikipedia. (2008b). PLATO (computer system). Retrieved from: http://en.wikipedia.org/wiki/PLATO
Wikipedia Doom. (2008). Doom Engine. Accessed: Jun, 2008 Retrieved from: http://doom.wikia.com/wiki/Vanilla_Doom#Fan_community_variants
Wikipedia Ultima (2008). Ultima Online: Third Dawn. Accessed: May, 2008 Retrieved from: http://ultima.wikia.com/wiki/Ultima_Online:_Third_Dawn
Wilson, N. (2007). The Problem with Virtual Worlds. Accessed: 1 April 2008 Retrieved from: http://metaversed.com/23-oct-2007/problem-virtual-words
Witmer, B. G., & Singer, M. J. (1998). Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 7(3), 225-240.
Woodcock, B. S. (2008, May, 2008). An Analysis of MMOG Subscription Growth. MMOGCHART.COM Retrieved from: http://www.mmogchart.com
Woolley, D. R. (1994). PLATO: The Emergence of On-Line Community. Computer-Mediated Communication Magazine, 1(3), 5.
Yee, N. (2006). The Demographics, Motivations, and Derived Experiences of Users of Massively Multi-User Online Graphical Environments. Presence: Teleoperators & Virtual Environments, 15(3), 309-329.
Youngblut, C. (1998). Education Uses of Virtual Reality Technology (pp. 131). Alexandria, VA: Institute for Defence Analyses.
Yount, W. R. (2006). Research Design & Statistical Analysis in Christian Ministry, Accessed: Dec, 2008 Retrieved from: http://www.napce.org/yount.html
Zakon, R. H. (2006). Hobbes' Internet Timeline v8.2. Accessed: April, 2008 Retrieved from: http://www.zakon.org/robert/internet/timeline/
Zyda, M. (2005). From Visual Simulation to Virtual Reality to Games. Computer, 38(9), 25-32.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
=Introduction=
The RiskWiki book and thesis "Real Learning in Virtual Worlds" by Dianne Bishop (2008) references an extensive list of works, reproduced here in its entirety. Beyond its role as a reference list, it is an outstanding bibliography of virtual worlds and the virtual-world learning space. Students of these areas are encouraged to explore the work of the authors listed below.
=References and Bibliography=
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
Anderson Research Group (n.d.). The Revised Bloom’s Taxonomy. Accessed: Jun, 2008 Retrieved from: www.andersonresearchgroup.com/reports/TPP2.ppt
Annetta, L. A., Murray, M. R., Laird, S. G., Bohr, S. C., & Park, J. C. (2006). Serious Games: Incorporating Video Games in the Classroom. EDUCAUSE Quarterly 29(3), Accessed: Jun, 2008 Retrieved from: http://connect.educause.edu/Library/EDUCAUSE+Quarterly/SeriousGamesIncorporating/39986
Arreguin, C. (2007). Reports from the Field: Second Life Community Convention 2007 Education Track Summary. Best Practices from the Second Life Community Convention Education Track 2007, Accessed: Jun, 2008 Retrieved from: http://www.holymeatballs.org/pdfs/VirtualWorldsforLearningRoadmap_012008.pdf
Axon, S. (2008). Massively's Visual History of MMORPGs, Part I. Massively, Accessed: Jun, 2008 Retrieved from: http://www.massively.com/2008/03/31/massivelys-visual-history-of-mmorpgs-part-i/
Bailenson, J. N., Yee, N., Blascovich, J., Beall, A. C., Lundblad, N., & Jin, M. (2007). The use of immersive virtual reality in the learning sciences: Digital transformations of teachers, students, and social context. The Journal of the Learning Sciences.
Bainbridge, W. S. (2007). The Scientific Research Potential of Virtual Worlds. Science, 317(5837), 472 - 476.
Bartle, R. (1990). Interactive Multi-User Computer Games. Accessed: Jun, 2008 Retrieved from: http://www.mud.co.uk/richard/imucg0.htm
Bartle, R. (2003). Designing Virtual Worlds. Indianapolis, USA: New Riders.
Beedle, J. B., & Wright, V. H. (2007). Perspectives from Multiplayer Video Gamers. In D. Gibson (Ed.), Games and Simulations in Online Learning: Research & Development Frameworks. Hershey PA, USA: Idea Group Inc
Bell, L. (2006). Dobbit Do program at Second Life Library. Second Life Library 2.0, Retrieved from: http://secondlifelibrary.blogspot.com/2006/06/dobbit-do-program-at-second-life.html
Bellman, K., & Landauer, C. (2000). Playing In The Mud: Virtual Worlds Are Real Places. Applied Artificial Intelligence, 14(1), 93-123.
Benford, S., Greenhalgh, C., Reynard, G., Brown, C., & Koleva, B. (1998). Understanding and constructing shared spaces with mixed-reality boundaries. ACM Transactions on Computer-Human Interaction 5(3), 185-223 Accessed: Jun, 2008 Retrieved from: http://www.crg.cs.nott.ac.uk/research/publications/papers/TOCHI98.pdf
Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving Seamlessly between Reality and Virtuality. IEEE Computer Graphics and Applications, 21(3), 6-8.
Biocca, F., & Delaney, B. (1995). Immersive virtual reality technology. In Communication in the age of virtual reality (pp. 57-124): Lawrence Erlbaum Associates, Inc. Accessed: May, 2008 Retrieved from: http://www.mindlab.org/images/d/DOC713.pdf
Blizzard Entertainment Inc (2008). World of Warcraft Surpasses 11 million Subscribers Worldwide. Retrieved from: http://www.blizzard.com/us/press/081028.html
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook 1: Cognitive Domain. New York: David McKay Company, Inc.
Bowery, J. (2001). Spasim (1974) The First First-Person-Shooter 3D Multiplayer Networked Game. Accessed: April, 2008 Retrieved from: http://www.geocities.com/jim_bowery/spasim.html
Briggs, J. C. (1996). The Promise of Virtual Reality. The Futurist 30(5), Accessed: May, 2008 Retrieved from: http://project.cyberpunk.ru/idb/virtualreality_promise.html
Brookhaven National Laboratory (n.d.). The First Video Game. Accessed: Jun, 2008 Retrieved from: http://www.bnl.gov/bnlweb/history/higinbotham.asp; also see article http://gamersquarter.com/tennisfortwo/
Brooks, F. P., Jr. (1999). What's real about virtual reality? Computer Graphics and Applications, IEEE, 19(6), 16-27.
Brown, J. D. (1997). Skewness and Kurtosis. Shiken: JALT Testing & Evaluation SIG Newsletter 1(1), 1-20 Accessed: Jan, 2009 Retrieved from: http://jalt.org/test/bro_1.htm
Budge, L. D., Strini, R. A., Dehncke, R. W., & Hunt, J. A. (1998). Synthetic Theater of War (STOW) 97 Overview (98S-SIW-086). Paper presented at the Spring Simulation Interoperability Workshop, Orlando, FL. Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/index.php?tg=articles&idx=More&topics=46&article=199
Bulkley, K. (2007). Today Second Life, tomorrow the world. Interview: Philip Rosedale. The Guardian, Accessed: Jun, 2008 Retrieved from: http://www.guardian.co.uk/technology/2007/may/17/media.newmedia2
Burdea, G. C., & Coiffet, P. (2003). Virtual Reality Technology (2nd ed.): Wiley-IEEE Press.
Burns, R. B. (2000). Introduction to Research Methods (4th ed.). Frenchs Forest, NSW, Australia: Longman.
Bye, C. (2008). Legends of the Industry: An Interview with Randy Farmer and Chip Morningstar. March 25th, 2008, Accessed: Jun, 2008 Retrieved from: http://www.tentonhammer.com/node/29292
Carless, S. (2006). Australian Defence Force Licenses Virtual Battlespace. Serious Games Source April 18, Accessed: Jun, 2008 Retrieved from: http://www.seriousgamessource.com/item.php?story=8955
Carlson, W. (2003). Section 17: Virtual Reality and Artificial Environments. In A Critical History of Computer Graphics and Animation: The Ohio State University. Accessed: May, 2008 Retrieved from: http://design.osu.edu/carlson/history/lessons.html
Carroll, L. (1865). Alice's Adventures in Wonderland. London: Macmillan.
Carroll, L. (1871). Through the Looking-Glass. London: Macmillan.
Castronova, E. (2001). Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier. CESifo Working Paper Series No. 618, Accessed: May, 2008 Retrieved from: http://ssrn.com/paper=294828
Cavazza, F. (2007). Virtual Universes Landscape. Accessed: May, 2008 Retrieved from: http://www.fredcavazza.net/2007/10/04/virtual-universes-landscape/
Chesher, C. (1994). Colonizing Virtual Reality. Construction of the Discourse of Virtual Reality, 1984-1992. Cultronix (1), Retrieved from: http://cultronix.eserver.org/chesher/
Churches, A. (2008). Bloom's Taxonomy Blooms Digitally. Educators' eZine, Accessed: Jun, 2008 Retrieved from: http://www.techlearning.com/showArticle.php?articleID=196605124; or wiki http://edorigami.wikispaces.com/
Clark, R. E. (1983). Reconsidering Research on Learning from Media. Review of Educational Research 53(4).
Clark, R. E. (1994). Media Will Never Influence Learning. Educational Technology Research and Development, 42(2), 21-29.
Clark, S., & Maher, M. L. (2006). Collaborative Learning in A 3D Virtual Place: Investigating the Role of Place in a Virtual Learning Environment. Advanced Technology for Learning 3(4), Accessed: Jun, 2008 Retrieved from: http://web.arch.usyd.edu.au/~mary/Pubs/2006pdf/ATL_MLM_SC.pdf
Clarke, R. (2000). Robert Gagné's Nine Steps of Instruction. ISD - Development, Accessed: Jun, 2008 Retrieved from: http://www.nwlink.com/~donclark/hrd/learning/development.htm
Coleridge, S. T. (1817). Biographia Literaria (2nd ed.): Sara Coleridge.
Colley, S. (n.d.). Stories from the Maze War 30 Year Retrospective. Accessed: Jun, 2008 Retrieved from: http://www.digibarn.com/history/04-VCF7-MazeWar/stories/colley.html
Combs, N. (2004). A virtual world by any other name? , Accessed: 1 April 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2004/06/a_virtual_world.html
CompuServe. (2007). MUD1. Accessed: Oct, 2007 Retrieved from: http://www.british-legends.com/
Computer History Museum. (n.d.). Spacewar! Accessed: Mar, 2008 Retrieved from: http://www.computerhistory.org/pdp-1/play_spacewar.html; Also see: http://www.wheels.org/spacewar/index.html
Corbit, M. (2002). Building Virtual Worlds for Informal Science Learning (SciCentr and SciFair) in the Active Worlds Educational Universe (AWEDU). Presence: Teleoperators & Virtual Environments, 11(1), 55-67.
Corry, M. (1996). Gagne's Theory of Instruction. Dr. Donald Cunningham Spring, 540 Accessed: Jun, 2008 Retrieved from: http://home.gwu.edu/~mcorry/corry1.htm
Cosby, L. N. (1999). SIMNET: An Insider's Perspective. SISO News 2(1g), Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/webletter/siso/Iss_39/art_202.htm
Dabbagh, N. (2006). The Instructional Design Knowledge Base. Instructional Technology Program, Accessed: Jun, 2008 Retrieved from: http://classweb.gmu.edu/ndabbagh/Resources/IDKB/models_theories.htm
Dalgarno, B. J. (2004). Characteristics of 3D Environments and Potential Contributions to Spatial Learning. University of Wollongong
Damer, B. (2004). First experimental post for March guest column. Accessed: Jun, 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2007/03/march_topics.html
Damer, B. (2007). Meeting in the ether. Interactions, 14(5), 16-18.
Dave, R. H. (1967). Psychomotor Domain. Paper presented at the International Conference of Educational Testing.
Dave, R. H. (1970). Psychomotor levels In R. J. Armstrong (Ed.), Developing and Writing Behavioural Objectives. Tucson AZ: Educational Innovators Press
Dede, C. (1995). The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds. Educational Technology, Research and Development, Vol. 35(No. 5), pp. 46-52.
Dede, C. (2004). Enabling Distributed Learning Communities Via Emerging Technologies -- Part Two. T H E Journal, 32(3), 16-26.
Denzin, N. (1978). Sociological Methods: A Sourcebook.
Department of the Army (2008). America's Army: The Making Of. Accessed: Jun, 2008 Retrieved from: http://www.americasarmy.com/intel/makingof.php
Deuchar, S., & Nodder, C. (2003). The Impact of Avatars and 3D Virtual World Creation on Learning. Paper presented at the Proceedings of the 16th Annual NACCQ, Palmerston North New Zealand. Retrieved from: www.naccq.ac.nz
Dickey, M. D. (1999). 3D Virtual Worlds and Learning: An analysis of the impact of design affordances and limitations in Active Worlds, Blaxxun Interactive, and Onlive! Traveler; and a study of the implementation of Active Worlds for formal and informal education. Dissertation; The Ohio State University, from http://mchel.com/Research.htm
Dickey, M. D. (2003). Teaching in 3D: Pedagogical Affordances and Constraints of 3D Virtual Worlds for Synchronous Distance Learning. Distance Education, 24(1), 105-122.
Dickey, M. D. (2005). Three-dimensional virtual worlds and distance learning: Two case studies of Active Worlds as a medium for distance education. British Journal of Educational Technology, 36(2), 439.
DONCIO, OPNAV N79, CNET, Naval Postgraduate School, Marine Corps Training and Education Command, & Marine Corps Distance Learning Center (2008). Learning in a Virtual World, Accessed: Jun, 2008 Retrieved from: http://wiki.nasa.gov/cm/wiki/?id=2731
Edutech Wiki. (2009). The Media Debate. Accessed: Jan, 2009 Retrieved from: http://edutechwiki.unige.ch/en/The_media_debate
Electronic Arts (2007). Ultima Online: Kingdom Reborn FAQ. Accessed: May, 2008 Retrieved from: http://www.uo.com/uokr/UOKR/uokr_faq.shtml
Farmer, F. R. (1992). Social Dimensions of Habitat's Citizenry. In C. E. Loeffler & T. Anderson (Eds.), The Virtual Reality Casebook. New York: Van Nostrand Reinhold
Fielding, N. G., & Fielding, J. L. (1986). Linking Data: Qualitative and Quantitative Methods in Social Research.
Fife-Schaw, C. (2007). How do I test the normality of a variable’s distribution? , Accessed: Jan, 2009 Retrieved from: http://www.psy.surrey.ac.uk/cfs/p8.htm
Foley, P., & Gifford, T. (2002). An Introduction to SEDRIS. Paper presented at the SEDRIS Technology Conference. Retrieved from: http://www.sedris.org/stc/2002/tu/intro/sld001.htm
Frary, R. B. (2008). Testing Memo 8: Reliability of Test Scores. Virginia Polytechnic Institute and State University (Jan, 2009), Retrieved from: http://www.testscoring.vt.edu/memo08.html
Friedl, M. (2002). Chapter One: Learning and Inspiration. In C. R. M. Inc (Ed.), Online Game Interactivity Theory. Hingham, Massachusetts
Gabrisch, C., & Burgess, G. (2005). The COA-Sim JSAF Environment in Support of Joint Military Training and Exercises. Paper presented at the SimTecT. Accessed: Jun, 2008 Retrieved from: http://www.siaa.asn.au/library_simtect_2005.html
Gagne, R. M. (1985). The Conditions of Learning and the Theory of Instruction (4 ed.). New York: Holt, Rinehart, and Winston.
Garson, G. D. (2000). The role of information technology in quality education. In Social dimensions of information technology: issues for the new millennium (pp. 177-197): IGI Publishing
Gartner (2007). Media Relations. Accessed: Oct, 2007 Retrieved from: http://www.gartner.com/it/page.jsp?id=503861
Gehorsam, R. (2003). The coming revolution in massively multiuser persistent worlds. Computer, 36(4), 93-95.
Gibson, W. (1984). Neuromancer. Canada: Ace Books.
Gikas, J., & Van Eck, R. (2004). Integrating video games in the classroom: Where to begin? Paper presented at the National Learning Infrastructure Initiative 2004 Annual Meeting, San Diego, CA. Accessed: May, 2008 Retrieved from: http://www.educause.edu/ir/library/pdf/NLI0431a.pdf
Goldberg, M. (2002). The History of Computer Gaming Part 5 - PLATO Ain't Just Greek. Classic Gaming, Accessed: April, 2008 Retrieved from: http://classicgaming.gamespy.com/View.php?view=Articles.Detail&id=324
Gonzalez, D. (2007). Second Life for Digital Entertainment Technology Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Graphpad (2009). How useful are normality tests? , Accessed: Jan, 2009 Retrieved from: http://www.graphpad.com/library/BiostatsSpecial/article_197.htm
Grau, O. (1999). Into the Belly of the Image: Historical Aspects of Virtual Reality. Leonardo, 32(5), 365-371.
Grøstad, O. F. (2007). Define: virtual world. Accessed: Apr, 2008 Retrieved from: http://worldtheory.blogspot.com/2007/06/define-virtual-world.html
Hardy, D. R., Allen, E. C., Adams, K. P., Peters, C. B., Peterson, L. J., Cannon, M. A., et al. (2001). Advanced Distributed Simulation: Decade in Review and Future Challenges. Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=A434191&Location=U2&doc=GetTRDoc.pdf
Harrow, A. J. (1972). A taxonomy of the psychomotor domain. New York: David McKay Company, Inc.
Harvard's Berkman Center for Internet and Society. (2007). Cyber One: Law in the Court of Public Opinion. Accessed: 20/10/2007 Retrieved from: http://sleducation.wikispaces.com/educationaluses#distance
Heeter, C. (1992). Being there: The subjective experience of presence. Presence: Teleoperators & Virtual Environments, 1(2), 262– 271.
Heeter, C. (2003). Reflections on Real Presence. Presence: Teleoperators & Virtual Environments 12(4), 335-345 Accessed: Jun, 2008 Retrieved from: http://commtechlab.msu.edu/publications/files/presence2003.pdf
Heilig, M. (1955). The Cinema of the Future, reprinted. In R. Packer & K. Jordan (Eds.), Multimedia: From Wagner to Virtual Reality (expanded edition), 2002 (pp. 239-251). New York/London: W. W. Norton and Company
Holmberg, J. (2003). Ideals of Immersion in Early Cinema. Cinémas 14(1), 129-147 Retrieved from: http://www.erudit.org/revue/cine/2003/v14/n1/008961ar.pdf
Howard, R. E. (1932). The Phoenix on the Sword. In Weird Tales (Vol. December). Chicago: Popular Fiction Publishing Co
Hu, S.-Y., & Liao, G.-M. (2004). Network and System Support for Games: Scalable Peer-to-Peer Networked Virtual Environment. Paper presented at the 3rd ACM SIGCOMM workshop on Network and system support for games, Portland, Oregon, USA Accessed: Jun, 2008 Retrieved from: http://www.phys.sinica.edu.tw/~statphys/publications/2004_full_text/S_Y_Hu_Proc_ACM_SIGCOMM_2004_on_NetGame_p129-133(2004).pdf
Jacoby, J., & Matell, M. S. (1971). Three-Point Likert Scales Are Good Enough. Journal of Marketing Research, 8(4), 495-500.
Jamison, J. (2007). Two Years of Introducing Educators to Second Life in 60 Minutes, or: Tips for Dinosaur Wrangling. Paper presented at the Second Life Best Practices in Education: Teaching, Learning, and Research 2007 International Conference
Jennings, S. (2007). Virtually a World. Accessed: 1 April 2008 Retrieved from: http://brokentoys.org/2007/06/15/virtually-a-world/
Jones, G., & Hicks, J. (2004). 3D Online Learning Environments for Emergency Preparedness and Homeland Security Training. Paper presented at the World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education, Washington, D.C. Retrieved from: http://courseweb.unt.edu/gjones/pdf/Jones_elearn04.pdf
Joseph, B. (2007). Global Kids, Inc.’s Best Practices in Using Virtual Worlds For Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Kearsley, G. (2008). Conditions of Learning (R. Gagne). Explorations in Learning & Instruction: The Theory Into Practice Database Accessed: Jun, 2008 Retrieved from: http://tip.psychology.org/gagne.html
Keegan, M. (1997). A Classification of MUDs. The Journal of Virtual Environments 2(2), Accessed: Mar, 2008 Retrieved from: http://www.brandeis.edu/pubs/jove/HTML/v2/keegan.html
Kelly, K. (1995). Singular Visionary. Wired June(3.06), Retrieved from: http://www.wired.com/wired/archive/3.06/vinge.html
King, B. (2003). Educators Turn to Games for Help. Wired, Accessed: Jun, 2008 Retrieved from: http://www.wired.com/gaming/gamingreviews/news/2003/08/59855
Kingdom of Drakkar. (1992-Current). Further Reading. Accessed: May, 2008 Retrieved from: Official: http://www.kingdomofdrakkar.com/; Historical: http://www.kingdomofdrakkar.com/forums/viewtopic.php?f=38&t=6197
Kish, S. (2007). Second Life: Virtual Worlds and the Enterprise. Accessed: May, 2008 Retrieved from: http://www.susankish.com/susan_kish/vw_secondlife.html
Klein, H. K., & Myers, M. D. (1999). A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly, 23(1), 67-93.
Klich, R. (2007). Multimedia Theatre in the Virtual Age. University of New South Wales, Sydney, from http://www.library.unsw.edu.au/~thesis/adt-NUN/uploads/approved/adt-NUN20080304.114128/public/02whole.pdf
Kofi, B. A., Svihla, V., Gawel, D., & Bransford, D. J. (2007). Learning about Adaptive Expertise in a Multi-User Virtual Environment Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Koster, R. (2002). Online World Timeline. Accessed: Jun, 2008 Retrieved from: http://www.raphkoster.com/gaming/mudtimeline.shtml
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development 42(2), 7-9
Krathwohl, D. R. (2002). A Revision of Bloom's Taxonomy: An Overview - Benjamin S. Bloom, University of Chicago. Theory Into Practice 41(4), 212-218 Accessed: Jun, 2008 Retrieved from: http://findarticles.com/p/articles/mi_m0NQM/is_4_41/ai_94872707
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of Educational Objectives. The Classification of Educational Goals, Handbook II: Affective Domain. New York: David McKay Company, Inc.
Kribble, M. (2007). Getting a Second Life: Virtual Harvard. Law Library E-Newsletter January, Accessed: 20/10/2007 Retrieved from: http://www.nsulaw.nova.edu/library_tech/library/publications/bookdocket/2007/Jan2007.pdf
Kurt, S., Mike, B., Jamillah, M. G., & Thomas, H. (2004). Electromagnetism supercharged! Learning Physics with Digital Simulation Games, Proceedings of the 6th international conference on Learning sciences (pp. 513-520). Santa Monica, California: International Society of the Learning Sciences.
KZERO Research (2007). There.com vs Second Life: demographics. Accessed: Jun, 2008 Retrieved from: http://www.kzero.co.uk/blog/?p=961
Lang, T., Maclntyre, B., & Zugaza, I. J. (2008). Massively Multiplayer Online Worlds as a Platform for Augmented Reality Experiences. Paper presented at the Virtual Reality Conference, 2008. VR '08. IEEE.
Laurel, B. (1991). Computers as theatre. New York: Addison-Wesley.
Lee, A. (1991). Integrating Positivist And Interpretive Approaches To Organizational Research. Organization Science, 2(4), 342.
Lee, S.-Y., Kim, I.-J., Ahn, S. C., Lim, M.-T., & Kim, H.-G. (2005). Intelligent 3D Video Avatar for Immersive Telecommunication. In S. Zhang & R. Jarvis (Eds.), AI 2005 (pp. 726-735). Berlin Heidelberg: Springer-Verlag.Accessed: Jun, 2008 Retrieved from: http://www.imrc.kist.re.kr/~kij/LNCS_2005.pdf
Lenke, J. M., Wellens, B., & Oswald, J. (1977, Jan, 2009). Differences Between Kuder-Richardson Formula 20 and Formula 21 Reliability Coefficients for Short Tests with Different Item Variabilities. Paper presented at the Annual Meeting of the American Educational Research Association, New York, USA. Retrieved from: http://eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED141411&ERICExtSearch_SearchType_0=no&accno=ED141411
Lenoir, T. (2003). Programming Theatres of War: Gamemakers as Soldiers. In R. Latham (Ed.), In Bombs and Bandwidth: The Emerging Relationship Between IT and Security (pp. 175-198). New York: The New Press.Accessed: Jun, 2008 Retrieved from: http://www.stanford.edu/dept/HPS/TimLenoir/Publications/Lenoir_TheatresOfWar.pdf
Leonard, B. (Director), S. King, B. Leonard & G. Everett (Writer), G. Everett (Producer), (1992). The Lawnmower Man [Motion Picture]: New Line Cinema.
Levine, A. (2007). Avatars and Appearance: What’s your ‘dress code’? NMC Teachers Buzz. NMC Campus Observer, Retrieved from: http://sl.nmc.org/2007/07/19/dress-code/
Lewis, D. (2001). Objectivism vs. Constructivism: The Origins of this Debate and the Implications for Instructional Designers. EME 6613 Development of Technology-Based Instruction, Accessed: Jun, 2008 Retrieved from: http://www.coedu.usf.edu/agents/dlewis/publications/Objectivism_vs_Constructivism.htm
Linden, C., & Linden, P. (2008). Discussion on Education in Second Life, What’s Going On and How To Get Involved. “Inside the Lab” Podcast, a Discussion on Education in Second Life", Accessed: Jun, 2008 Retrieved from: http://blog.secondlife.com/2008/06/02/inside-the-lab-podcast-a-discussion-on-education-in-second-life/
Linden Lab (2008a). Economic Statistics: Graphs. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/whatis/economy-graphs.php
Linden Lab (2008b). Second Life: Economic Statistics. Accessed: Dec, 2008 Retrieved from: http://secondlife.com/whatis/economy_stats.php
Linden Lab (2008c). Second Life: System Requirements. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/support/sysreqs.php
Lisberger, S. (Director), S. Lisberger & B. MacBird (Writer), D. Kushner (Producer), (1982). Tron [Motion Picture]: Buena Vista Pictures.
Lord of the Rings Online (2007). Online Virtual World. Accessed: Jun, 2008 Retrieved from: http://www.lotro.com/
Lowood, H. E. (2008). Virtual Reality. Encyclopaedia Britannica Online, Accessed: Jun, 2008 Retrieved from: http://search.eb.com/eb/article-9001382
Macmillan, I. (Director), (Writer), (Producer), (2008). The Worlds of Fantasy: The Epic Imagination. England: Blast! Films Production for BBC.
Mania, K., & Chalmers, A. (2001). The Effects of Levels of Immersion on Memory and Presence in Virtual Environments: A Reality Centered Approach. CyberPsychology & Behavior, 4(2), 247-264.
Markowitz, M. (2000). Spacewar: The first computer video game. Really! , Retrieved from: http://www3.sympatico.ca/maury/games/space/spacewar.html
Martinez, L. M., Martinez, P., & Warkentin, G. (2007). A First Experience on Implementing a Lecture on Second Life Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Mazuryk, T., & Gervautz, M. (1996). Virtual Reality History, Applications, Technology and Future. Retrieved from: http://www.cg.tuwien.ac.at/research/publications/1996/mazuryk-1996-VRH
McLellan, H. (2004). Virtual realities. In D. H. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (2nd ed., pp. 745-784). Mahwah, NJ: Lawrence Erlbaum Associates.Accessed: Jun, 2008 Retrieved from: http://www.aect.org/edtech/17.pdf
Mergel, B. (1998). Instructional Design & Learning Theory. Accessed: Jun, 2008 Retrieved from: http://www.usask.ca/education/coursework/802papers/mergel/mergel.pdf
Meridian 59. (1996-2000 & 2002-Current). Further Reading. Retrieved from: Official Site: http://meridian59.neardeathstudios.com/; General: http://en.wikipedia.org/wiki/Meridian_59; http://www.massively.com/photos/massivelys-visual-history-of-mmorpgs-part-i/727035/
MetaMersion. You're in the Game. Retrieved from: http://www.metamersion.com/index.html
Milgram, P., & Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. E77-D(12), Accessed: May, 2008 Retrieved from: http://vered.rose.utoronto.ca/people/paul_dir/IEICE94/ieice.html
Miller, D. C., & Thorpe, J. A. (1995). SIMNET: The Advent Of Simulator Networking. Proceedings of the IEEE, 83(8), 1114-1123.
Monash University. (2008). Preparing Educational Objectives. Accessed: Jun, 2008 Retrieved from: http://www.calt.monash.edu.au/staff-teaching/support/objectives.html
Moriarty, D. (2008). StatCat (version 3.6). Accessed: Jan, 2009 Retrieved from: http://www.csupomona.edu/~djmoriarty/b211/index.html#statcat
Morningstar, C., & Farmer, R. (1990). The Lessons of Lucasfilm's Habitat. Paper presented at the The First International Conference on Cyberspace, University of Texas at Austin.Accessed: Oct, 2007 Retrieved from: http://www.fudco.com/chip/lessons.html
Mulligan, J. (2000). History of Online Games Part III. Imaginary Realities (April), Retrieved from: http://www.tharsis-gate.org/articles/imaginary/HISTOR~1.HTM
Mulligan, J. (2002). Talkin’ ‘bout My… Generation. Biting The Hand 17(22-JAN), Accessed: Jun, 2008 Retrieved from: http://www.skotos.net/articles/BTH_17.shtml
Nash, S. S. (2007). Behaviorism vs. Constructivism as Applied to Online Learning. XplanaZine, Accessed: Jun, 2008 Retrieved from: http://www.xplanazine.com/2007/09/behaviorism-vs-constructivism-as-applied-to-online-learning
Neuman, W. L. (2006). Social research methods (6th ed.). Boston Pearson Education, Inc
NeverWinter Nights (AOL). (1991-1997). Further Reading. Accessed: May, 2008 Retrieved from: http://en.wikipedia.org/wiki/Neverwinter_Nights_(AOL_game); http://www.bladekeep.com/nwn/index2.htm,
NIST (2006). e-Handbook of Statistical Methods: Levene Test for Equality of Variances. Accessed: Jan, 2009 Retrieved from: http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm
O'Donnell, D. (2003). The Annotated NetHack File. Retrieved from: http://www.spod-central.org/~psmith/nh/anhftime.html
Olsen, W. (2004). Triangulation in Social Research: Qualitative and Quantitative Methods Can Really Be Mixed. In Developments in Sociology: Causeway Press Retrieved from: http://www.ccsr.ac.uk/staff/triangulation.pdf
Onwuegbuzie, A. J. (2002). Why can't we all get along? Towards a framework for unifying research paradigms. Education, 122(3), 518-530.
Orlikowski, W. J., & Baroudi, J. J. (1991). Studying Information Technology in Organizations: Research Approaches and Assumptions. Information Systems Research, 2(1), 1-28.
Oxford Dictionary (Ed.) (1989) (2nd ed.). Oxford University Press.
Oxford Dictionary (Ed.) (1997) (Vols. 3). Oxford University Press.
Packer, R., & Jordan, K. (2002). Multimedia: From Wagner to Virtual Reality (Expanded ed.). New York: W. W. Norton and Company.
Patel, K., Bailenson, J., Jung, S., Diankov, R., & Bajcsy, R. (2006). The effects of fully immersive virtual reality on the learning of physical tasks. Paper presented at the International Workshop on Presence, Cleveland, Ohio, USA.Accessed: Jun, 2008 Retrieved from: http://www.cs.washington.edu/homes/kayur/papers/ispr06.pdf
Pearson, J. L. (2002). Shamanism and the Ancient Mind: A Cognitive Approach to Archaeology: AltaMira Press.
Pellett, D. Open letter to "Classic Gaming .com": Re: your web page, titled "The History of Computer Gaming". The Game of Dungeons (dnd): Gary Whisenhunt, Ray Wood, Dirk Pellett, and Flint Pellett's DND, Retrieved from: http://www.armory.com/~dlp/dnd1.html
Petrich, L. (n.d.). Real-Time-3D Game-Engine Taxonomy. Accessed: Jun, 2008 Retrieved from: http://homepage.mac.com/lpetrich/www/games/GET.html
Pimentel, K. K., & Teixeira, K. K. (1994). Virtual reality. New York, USA: Windcrest Books.
Purbrick, J., & Greenhalgh, C. (2002). An extensible event-based infrastructure for networked virtual worlds. Paper presented at the Virtual Reality, 2002. Proceedings. IEEE.
Ray, J. (2008). Backwards Compatible - How We Got Connected. ABC Good Games Stories, Accessed: April 2008 Retrieved from: http://www.abc.net.au/tv/goodgame/stories/s2171457.htm
Reynolds, R. (2008). VW Taxonomy Q1 ‘08. Accessed: 1 April 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2008/03/vw-taxonomy-q1.html.
Rheingold, H. (1992). Virtual reality. London: Mandarin Paperback.
Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier. New York: Harper Collins.
Richardson, H. (2005). Postmodernism: A Hobbit’s View of Information Systems Research Methodology. Paper presented at the 4th International Critical Management Studies Conference, University of Cambridge, Cambridge, UK.Accessed: 10/9/2007 Retrieved from: http://www.mngt.waikato.ac.nz/ejrot/cmsconference/2005/
Robson, S. (2008). US Army to Invest $50M in Combat Training Games. Stars and Stripes (Nov 2008), Retrieved from: http://www.stripes.com/article.asp?section=104&article=59009
Rolland, J., & Hua, H. (2005). Head-Mounted Display Systems. Encyclopedia of Optical Engineering, 1 - 14.
Rosenblum, L. J. (1995). Alice: rapid prototyping for virtual reality. Computer Graphics and Applications, IEEE, 15(3), 8-11.
Russell, T. L. (2001). No Significant Difference Phenomenon (5 ed.). North Carolina State University: IDECC.
Schmidt, M., Kinzer, C., & Greenbaum, I. (2007). Exploring Virtual Education: First Hand Account of 3 Second Life Classes Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Schroeder, R. (1997). Networked Worlds: Social Aspects of Multi-User Virtual Reality Technology. In S. R. Online (Ed.) (Vol. 2).
Schroeder, R. (2006). Being There Together and the Future of Connected Presence. Presence: Teleoperators & Virtual Environments, 15(4), 438-454.
Schuemie, M. J., Straaten, P. V. D., Krijn, M., & Mast, C. A. P. G. V. D. (2001). Research on Presence in Virtual Reality: A Survey. CyberPsychology & Behavior 4(2), Accessed: May, 2008 Retrieved from: http://graphics.tudelft.nl/~vrphobia/surveypub.pdf
Shadow of Yserbius. (1992-1996). Further Reading. Accessed: May, 2008 Retrieved from: http://www.syntax2000.co.uk/issues/; http://www.oldgames.nu/PC/Shadow_of_Yserbius/2085/; http://en.wikipedia.org/wiki/The_Shadow_of_Yserbius;
Sheridan, T. B. (1992). Musings on telepresence and virtual presence. Presence: Teleoperators & Virtual Environments, 1(1), 120-126.
Sheth, R. (2003). Avatar Technology: Giving a Face to the e-Learning Interface. [The eLearning Guild]. The eLearning Developers' Journal, August.
Siegle, D. (2008). The Principles and Methods of Educational Research. Accessed: Jan, 2009 Retrieved from: http://www.gifted.uconn.edu/Siegle/research/Instrument%20Reliability%20and%20Validity/Reliability.htm
Simpson, E. J. (1972). The classification of educational objectives in the psychomotor domain. The Psychomotor Domain (Vol. 3). Washington, DC: Gryphon House.
SimTeach. (2008). Universities, Colleges & Schools in Second Life. Accessed: Jun, 2008 Retrieved from: http://www.simteach.com/wiki/index.php?title=Institutions_and_Organizations_in_SL
Slater III, W. F. (2002). Internet History and Growth. Chicago Chapter of the Internet Society, Accessed: April, 2008 Retrieved from: http://www.isoc.org/internet/history/
Slater, M. (1999). Measuring Presence: A Response to the Witmer and Singer Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 8(5), 560-565.
Slater, M., & Usoh, M. (1993). Presence in immersive virtual environments. Paper presented at the Virtual Reality Annual International Symposium, 1993., 1993 IEEE.
Slater, M., & Usoh, M. (1994). Representation Systems, Perceptual Position and Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 2(3), 221–233.
Slater, M., & Wilbur, S. (1997). A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 6(6).
Slator, B. M., Borchert, O., Brandt, L., Chaput, H., Erickson, K., Groesbeck, G., et al. (2007). From Dungeons to Classrooms: The Evolution of MUDs as Learning Environments. In The Evolution of Teaching and Learning Paradigms (pp. 119-160): Springer-Verlag
Small, D., & Small, S. (1984). PLATO RISING. Online learning for Atarians 3(3), 36-87 Retrieved from: http://www.atarimagazines.com/v3n3/platorising.html
Smith, A. (1999). COLLABORATION: A Global Survey of Institutions and Programs in Virtual World Cyberspace. Retrieved from: http://www.ccon.org/vlearn/collab.htm
STATGRAPHICS Centurion (2009). Analysis Software. Retrieved from: http://www.statgraphics.com/
Stephenson, N. (1992). Snow Crash. New York: Bantam Spectra Book.
Steuer, J. (1992). Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communications, 42(4), 73-93.
Sun Microsystems (2008). Current Reality and Future Vision Open Virtual Worlds (White Paper). January, Accessed: 13 March 2008 Retrieved from: http://www.sun.com/service/applicationserversubscriptions/OpenVirtualWorld.pdf
Sutherland, I. (1965). The Ultimate Display. Paper presented at the International Federation of Information Processing. Retrieved from: http://www.cs.utah.edu/classes/cs6360/Readings/UltimateDisplay.pdf
Sutherland, I. (1968). A Head-Mounted Three-Dimensional Display. Paper presented at the Proceedings of the AFIPS Fall Joint Computer Conference, Washington, D.C.
Terdiman, D. (2007). Tech titans seek virtual-world interoperability. CNET News.com, Accessed: Jun, 2008 Retrieved from: http://news.cnet.com/Tech-titans-seek-virtual-world-interoperability/2100-1043_3-6213148.html
The New Media Consortium, & EDUCAUSE (2007). The Horizon Report. Accessed: Nov, 2007 Retrieved from: http://www.nmc.org/pdf/2007_Horizon_Report.pdf
Tiernan, T. R. (1996). Synthetic Theater of War (STOW) Engineering Demonstration-1A (ED-1A) Analysis Report (ADA315093). Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA315093
Tolkien, J. R. R. (1937). The Hobbit. United Kingdom: Allen and Unwin.
Tolkien, J. R. R. (1954, 1955). The Lord of the Rings. United Kingdom: Allen and Unwin.
Ultima Online. (1997-Current). Further Reading. Accessed: Jun, 2008 Retrieved from: http://www.uoherald.com/news/;
Unger, J. M. (1979). Kanamajiri Editing and the Plato Computer-Based Education System. The Journal of the Association of Teachers of Japanese, 14(2), 141-156.
University of Washington (2008). Instructional Design Approaches. Accessed: Jun, 2008 Retrieved from: http://depts.washington.edu/eproject/Instructional%20Design%20Approaches.htm
US Joint Forces Command. (2008). Joint Semi-Automated Forces (JSAF). Accessed: Jun, 2008 Retrieved from: http://www.jfcom.mil/about/fact_jsaf.html
Van Dam, A., Forsberg, A. S., Laidlaw, D. H., LaViola, J. J. J., & Simpson, R. M. (2000). Immersive VR for scientific visualization: a progress report. Computer Graphics and Applications, IEEE, 20(6), 26-52.
VCampus Corporation. (2008). cyber1.org. Accessed: April, 2008 Retrieved from: http://www.cyber1.org/
Vinge, V. (1981). True Names: Binary Star Number 5, Dell Reprinted in True Names and Other Dangers, Vernor Vinge, Baen Books, 1987.
Vivekananda Centre (2008). Hinduism for Schools. Retrieved from: http://www.vivekananda.btinternet.co.uk/secondaryschoolspage1.htm
Wachowski, A., & Wachowski, L. (Director), (Writer), J. Silver (Producer), (1999). The Matrix [Motion Picture]: Warner Bros, Village Roadshow Pictures,.
Wagner, R. (1849). The Artwork of the Future (Das Kunstwerk der Zukunft), Accessed: April, 2008 Retrieved from: http://users.belgacom.net/wagnerlibrary/prose/wagartfut.htm
Walker, J. (1990). Through the Looking Glass. In L. Brenda (Ed.), The Art of Human-Computer Interface Design: Addison-Wesley
Walsham, G. (1995). The Emergence of Interpretivism in IS Research. Information Systems Research, 6(4), 376-394.
Wang, C.-S., & Tzeng, Y.-R. (2007). Framework for Bloom's Knowledge Placement in Computer Games. Paper presented at the Digital Game and Intelligent Toy Enhanced Learning, 2007. DIGITEL '07. The First IEEE International Workshop.
Weber, R. (2004). The Rhetoric of Positivism Versus Interpretivism: A Personal View. MIS Quarterly, 28(1), iii-xii.
West Virginia University. (2008). The Looking Glass Project. Accessed: April, 2008 Retrieved from: http://clc.as.wvu.edu:8080/clc/projects/alice/document_view?month:int=5&year:int=2008
Wikipedia. (2008a). The Manhole. Accessed: Jun, 2008 Retrieved from: http://en.wikipedia.org/wiki/The_Manhole
Wikipedia. (2008b). PLATO (computer system). Retrieved from: http://en.wikipedia.org/wiki/PLATO
Wikipedia Doom. (2008). Doom Engine. Accessed: Jun, 2008 Retrieved from: http://doom.wikia.com/wiki/Vanilla_Doom#Fan_community_variants
Wikipedia Ultima (2008). Ultima Online: Third Dawn. Accessed: May, 2008 Retrieved from: http://ultima.wikia.com/wiki/Ultima_Online:_Third_Dawn
Wilson, N. (2007). The Problem with Virtual Worlds. Accessed: 1 April 2008 Retrieved from: http://metaversed.com/23-oct-2007/problem-virtual-words
Witmer, B. G., & Singer, M. J. (1998). Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 7(3), 225-240.
Woodcock, B. S. (2008, May, 2008). An Analysis of MMOG Subscription Growth. MMOGCHART.COM Retrieved from: http://www.mmogchart.com
Woolley, D. R. (1994). PLATO: The Emergence of On-Line Community. Computer-Mediated Communication Magazine, 1(3), 5.
Yee, N. (2006). The Demographics, Motivations, and Derived Experiences of Users of Massively Multi-User Online Graphical Environments. Presence: Teleoperators & Virtual Environments, 15(3), 309-329.
Youngblut, C. (1998). Education Uses of Virtual Reality Technology (pp. 131). Alexandria, VA: Institute for Defence Analyses.
Yount, W. R. (2006). Research Design & Statistical Analysis in Christian Ministry, Accessed: Dec, 2008 Retrieved from: http://www.napce.org/yount.html
Zakon, R. H. (2006). Hobbes' Internet Timeline v8.2. Accessed: April, 2008 Retrieved from: http://www.zakon.org/robert/internet/timeline/
Zyda, M. (2005). From Visual Simulation to Virtual Reality to Games. Computer, 38(9), 25-32.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
ab2b67bad99f0d003b33f38e83cdb66c45970a6a
RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)
0
292
335
2010-08-05T14:27:42Z
Bishopj
1
wikitext
text/x-wiki
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targeted 'questions' about an audit area. Questions focus the review, while assertions define the criteria for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses the compliance of the system process with a strategic and/or tactical purpose, the compliance of practice with procedure, and the awareness of, and readiness for, the potential (risks and opportunities) in the system itself.
The avoidance of checklists keeps the auditor adaptable by forcing a permanent 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to the transactional compliance level, and from 'hard' financial processes to 'soft' subjective processes.
Very large organisations present some particular challenges for the systems based audit, including coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview of the technical aspects of the RIAM SBA; in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit contexts.
==What Are Assertions?==
The preceding diagram summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "Scope of work" within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes" identical to those used in our Control Implementation Service, and are classifiable according to the "Type" - preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They are formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily we mean that our review has tested the truth of a set of assertions and we have found that they have been sustained. Thus testing the assertions is the purpose of the audit.
Assertions are the focus of the RIAM analytic method and underlie its structure. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: They allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?", is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of that same word. If we say, "effective in this context means accurate and timely" then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus by combining a focus question, with the assertions that define a "yes" answer, we as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown - because we have stated clearly our context specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
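The relationship described above — a yes/no focus question whose answer is defined by a set of assertions, each sustained or suppressed by testing — can be sketched as a small data structure. This is a hypothetical illustration only; the class and assertion names are assumptions, not part of RIAM itself:

```python
from dataclasses import dataclass, field

@dataclass
class Assertion:
    """A statement of 'fact' about a system, sustained or suppressed by testing."""
    text: str
    sustained: bool = False

@dataclass
class FocusQuestion:
    """A yes/no question whose answer is defined entirely by its assertions."""
    question: str
    assertions: list = field(default_factory=list)

    def answer(self) -> str:
        # "Yes" only if every defining assertion was sustained by the review.
        return "Yes" if all(a.sustained for a in self.assertions) else "No"

q = FocusQuestion(
    "Is system XYZ operating effectively?",
    [Assertion("Grant data is accurately calculated", sustained=True),
     Assertion("Grant data is attributed to the proper period", sustained=True)],
)
print(q.answer())  # prints "Yes" while every assertion is sustained
```

Suppressing any single assertion flips the answer to "No", which is exactly the precision the method claims: the question has no meaning apart from its assertions.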
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A review's assertions are agreed with the auditee management before a review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit, management identifies a number of questions about the target system they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The approach that meets the audit objectives, diagrammed above, has four phases, summarised here. A more detailed discussion of these phases, mapped into the context of both small team and large multi-location team audits, is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions, the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and Test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee after and during each task;
<br>
<br>
The control system's ability to support the assertions and therefore the key controls identified are analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although the steps presented here suggest a linear sequence, the correct approach involves regular, on-going reporting to management during the conduct of the review. Interim reports, either formal or informal, should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feed these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level, and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation.
* Application of Risk Analysis, not just at the Planning stage, but also the Threat Analysis stage when assessing Systems Design, and the Reporting Stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error is likely to occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and when combined with assertions, raises the certainty that our systems opinion is correct.
* Use of a variety of report and presentation styles to best communicate information; and
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
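The audit risk decomposition in the framework above is conventionally treated multiplicatively: audit risk is the product of inherent risk, control risk and detection risk, each expressed as a probability. A minimal numeric sketch follows; the figures are illustrative assumptions, not RIAM benchmarks:

```python
def audit_risk(inherent: float, control: float, detection: float) -> float:
    """Audit risk as the product of the three component risks (each in [0, 1])."""
    return inherent * control * detection

# Illustrative figures only: a 60% chance an error occurs, a 30% chance the
# control system fails to prevent/detect/correct it, and a 20% chance the
# audit procedures also fail to identify it.
ar = audit_risk(0.6, 0.3, 0.2)
print(f"Audit risk: {ar:.3f}")  # 0.036
```

The multiplicative form makes the trade-off explicit: where inherent and control risk are assessed as high, detection risk must be driven down with more extensive procedures to hold overall audit risk at an acceptable level.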
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure;
This is a brief outline of the Threat Testing process :
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
* The risk of the threat occurring multiplied by the risk of system failure (Control Risk) is the probability of the assertion not being sustained in operation.
The sum of all such threat related probabilities is the total risk of assertion failure in the system.
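The arithmetic of the threat testing steps above can be sketched as follows. The threat names and probabilities here are purely illustrative assumptions for a grants system, not figures from the method:

```python
# Each threat maps to (probability the threat occurs,
#                      probability the control system fails to prevent it).
threats = {
    "acquittal lodged for a non-existent grant": (0.05, 0.10),
    "payment calculated incorrectly":            (0.20, 0.05),
    "grant approved without proper delegation":  (0.02, 0.50),
}

# Per-threat probability of the assertion not being sustained:
# occurrence risk multiplied by control (system failure) risk.
per_threat = {name: occ * fail for name, (occ, fail) in threats.items()}

# Total risk of assertion failure is the sum over all threats.
total = sum(per_threat.values())
print(f"Total risk of assertion failure: {total:.3f}")  # 0.025
```

Summing across threats gives a single per-assertion figure that can be ranked against other assertions when tuning the testing program to the real exposures.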
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self-contained units including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >PHASE</th><th>ACTION</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is received when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
These models are evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The Working Paper's documentation of the system culminates in the Assertion Matrix and Control Strength & Weaknesses Chart. The Systems Analysis of section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately, the product of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with: "Observation", "Risks and Implications", "Recommendations", and "Management Comment" sub sections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation,
** Arguments in favour of the breach and audit's comment on that argument, and
** Linking of findings to a clearly stated premise for the finding's importance: the Assertions affected; and
* Inclusion of, and focus on, Action Plans.
Although the Report structure is one of the aspects of RIAM specifically tailored to the client, most adopt a close variation of one standard structure. RIAM includes five distinct report structures to help clients identify their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews the vast majority of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
f16b463043523a28c430d6952430f5ca3868579b
351
335
2010-08-05T14:27:42Z
Bishopj
1
wikitext
text/x-wiki
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targetted 'questions' about an audit area. Questions focus the review, while assertions define the criterion for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses compliance of the system process with a strategic and/or tactical purpose, compliance of practice with procedure and the awareness and readiness presented by potential (risks and opportunities) in the system itself.
The avoidance of checklists makes the auditor adaptable by permanently adopting a 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to transactional compliance level, and from 'hard' financial processes to 'soft' subjective process.
Very large organisations present some particular challenges for the systems based audit, including coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview to the technical aspects of the RIAM SBA, in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit context.
==What Are Assertions?==
The figure on the preceding diagramme summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "Scope of work" within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes" identical to those used in our Control Implementation Service, and are classifiable according to the "Type" - preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily we mean that our review has tested the truth of a set of assertions and we have found that they have been sustained. Thus testing the assertions is the purpose of the audit.
Assertions are the focus and underlay the structure of the RIAM analytic method. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: They allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?", is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of theat same word. If we say, "effective in this context means accurate and timely" then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus by combining a focus question, with the assertions that define a "yes" answer, we as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown - because we have stated clearly our context specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A reviews assertions are agreed with the auditee management before a review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit management identifies a number of questions about the target system they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The structure of the approach that meets the audit objectives, diagrammed above, has four phases, which we summarise here. A more detailed discussion of these phases, mapped into the context of both small-team and large multi-location team audits, is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions, the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build a functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee after and during each task;
<br>
<br>
The control system's ability to support the assertions, and therefore the key controls identified, is analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although the steps presented here suggest a linear sequence, the correct approach involves regular, ongoing reporting to management during the conduct of the review. Interim reports, either formal or informal, should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feeding these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation.
* Application of Risk Analysis, not just at the Planning stage, but also at the Threat Analysis stage when assessing Systems Design, and at the Reporting stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error is likely to occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and when combined with assertions, raises the certainty that our systems opinion is correct.
* Use of a variety of report and presentation styles to best communicate information; and
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
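The three component risks listed above combine multiplicatively in the standard audit risk model, so the overall audit risk can be computed as a simple product. The figures below are illustrative only, not drawn from any RIAM engagement:

```python
# Illustrative figures only. The multiplicative audit risk model combines
# the three component risks: inherent, control and detection risk.
inherent_risk = 0.60   # risk that an error occurs
control_risk = 0.30    # risk that the control system misses the error
detection_risk = 0.10  # risk that audit procedures miss the error

audit_risk = inherent_risk * control_risk * detection_risk
print(f"Audit risk: {audit_risk:.3f}")  # Audit risk: 0.018
```

The product makes the trade-off explicit: where inherent and control risk are high, the auditor must drive detection risk down with more extensive procedures to keep overall audit risk acceptable.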
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure;
This is a brief outline of the Threat Testing process:
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
* The risk of the threat occurring, multiplied by the risk of system failure (Control Risk), is the probability of the assertion not being sustained in operation.
The sum of all such threat related probabilities is the total risk of assertion failure in the system.
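As a worked illustration of this arithmetic, the sketch below (with hypothetical threats and made-up probabilities) sums the threat-related probabilities for a single assertion:

```python
# Hypothetical threats and made-up probabilities for one assertion.
# Each threat contributes P(threat occurs) * P(control system fails).
threats = [
    # (description, P(occurrence), P(system failure))
    ("Acquittal processed without delegate approval", 0.10, 0.20),
    ("Grant amount keyed incorrectly",                0.05, 0.15),
    ("Payment made outside legislative limits",       0.02, 0.40),
]

# Total risk of assertion failure = sum of the threat-related probabilities.
assertion_failure_risk = sum(p_occur * p_fail for _, p_occur, p_fail in threats)
print(f"{assertion_failure_risk:.4f}")  # 0.0355
```

Listing each threat's contribution separately also shows where treatment effort pays off: reducing either the occurrence probability or the system-failure probability of the largest contributor lowers the total directly.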
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self-contained units including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >PHASE</th><th>ACTION</th><th>WHO</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td></td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td></td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td></td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td></td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td></td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td></td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td></td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td></td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td></td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td></td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td></td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td></td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is received when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
These models are then evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The working papers' documentation of the system culminates in the Assertion Matrix and the Control Strengths & Weaknesses Chart. The systems analysis of section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
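Whichever technique is chosen, the four items above can be captured as a simple record per document. The sketch below is illustrative only; the document, parties and control names are invented:

```python
# Illustrative only: one record per document in the system under review,
# covering the four items documentation should include (origin, processing,
# disposition, internal controls). All names are invented examples.
grant_acquittal = {
    "document": "Grant acquittal form",
    "origin": "Submitted by the grantee to the Grants Unit",
    "processing": [
        "Checked against the grant agreement",
        "Approved by the delegate",
    ],
    "disposition": "Filed with the grant record; copy to Finance",
    "controls": ["Delegate sign-off", "Batch control totals"],
}

for key in ("origin", "processing", "disposition", "controls"):
    print(key, "->", grant_acquittal[key])
```

A table of such records, one row per document, gives the reviewer a quick completeness check: any document with an empty cell has a documentation gap.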
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately, the product of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with: "Observation", "Risks and Implications", "Recommendations", and "Management Comment" sub sections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation,
** Arguments in favour of the breach and audit's comment on that argument;
* Inclusion of and focus on Action Plans; and
* Linking of findings to a clearly stated premise for the finding's importance: the assertions affected.
Although the report structure is one of the aspects of RIAM specifically tailored to the client, most clients adopt a close variation of one standard structure. RIAM includes five distinct report structures to assist clients in identifying their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews, the vast majority of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
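The standard finding structure above lends itself to a simple record type. The sketch below is hypothetical; the field names mirror the report headings, not any actual RIAM schema, and the example content is invented:

```python
from dataclasses import dataclass

# Hypothetical sketch: field names mirror the report headings only.
@dataclass
class Finding:
    observation: str                # observed facts, legislation, context
    assertions_affected: list[str]  # assertions suppressed or supported
    risks: list[str]                # principal risks and exposures
    recommendations: list[str]      # numbered, clear, specific actions
    management_comment: str = ""    # management's response and action taken

finding = Finding(
    observation="Acquittals were accepted without delegate sign-off.",
    assertions_affected=["Properly authorised"],
    risks=["Ineligible grant expenditure may go undetected."],
    recommendations=["1. Require delegate sign-off before acceptance."],
)
print(finding.assertions_affected[0])  # Properly authorised
```

Making `assertions_affected` a mandatory field enforces the method's discipline: a finding that cannot be tied back to an assertion has no place in the report.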
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targeted 'questions' about an audit area. Questions focus the review, while assertions define the criteria for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses the compliance of the system process with a strategic and/or tactical purpose, the compliance of practice with procedure, and the awareness of, and readiness for, the potential (risks and opportunities) in the system itself.
The avoidance of checklists keeps the auditor adaptable, because it forces the auditor to permanently adopt a 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to the transactional compliance level, and from 'hard' financial processes to 'soft' subjective processes.
Very large organisations present some particular challenges for the systems based audit, including the coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview of the technical aspects of the RIAM SBA; in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit contexts.
==What Are Assertions?==
The preceding diagram summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "Scope of work", within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes", identical to those used in our Control Implementation Service, and are classifiable according to their "Type": preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They are formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for the financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily, we mean that our review has tested the truth of a set of assertions and found that they have been sustained. Thus, testing the assertions is the purpose of the audit.
Assertions are the focus of, and underlie the structure of, the RIAM analytic method. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: They allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?", is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of theat same word. If we say, "effective in this context means accurate and timely" then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus by combining a focus question, with the assertions that define a "yes" answer, we as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown - because we have stated clearly our context specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A reviews assertions are agreed with the auditee management before a review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit management identifies a number of questions about the target system they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The structure of the approach, diagrammed above, that meets the audit objectives has four phases. Here we summarise those phases. A more detailed discussion of these phases mapped into the context of both small team and large multi-location, team audits is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions , the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and Test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee after and during each task;
<br>
<br>
The control system's ability to support the assertions, and therefore the key controls identified, is analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, and indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although presented here as a linear sequence of steps, the correct approach involves regular, on-going reporting to management during the conduct of the review. Interim reports, either formal or informal, should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model<br>
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feed these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level, and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation;
* Application of Risk Analysis, not just at the Planning stage, but also at the Threat Analysis stage when assessing Systems Design, and at the Reporting Stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error will occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and, when combined with assertions, raises the certainty that our systems opinion is correct;
* Use of a variety of report and presentation styles to best communicate information;
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer;
* Analysis of control systems performance in meeting objectives; and
* Clear discussion and specific recommendations to provide improvements.
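The audit risk relationship described above is multiplicative: Audit Risk is the product of Inherent Risk, Control Risk and Detection Risk. As a minimal sketch of that arithmetic (the function name and all numeric values are illustrative, not part of RIAM):

```python
# Audit Risk model: the risk that the audit gives a wrong opinion is the
# product of Inherent Risk (IR), Control Risk (CR) and Detection Risk (DR).
# All numeric values below are hypothetical, for illustration only.

def audit_risk(inherent: float, control: float, detection: float) -> float:
    """Return the combined audit risk as a probability in [0, 1]."""
    for p in (inherent, control, detection):
        if not 0.0 <= p <= 1.0:
            raise ValueError("each risk component must be a probability")
    return inherent * control * detection

# Example: an error is quite likely to occur (IR = 0.6), the control system
# stops most errors (CR = 0.3), and audit procedures are strong (DR = 0.1).
print(round(audit_risk(0.6, 0.3, 0.1), 3))  # 0.018
```

The multiplicative form makes the trade-off explicit: where inherent and control risk are assessed as high, detection risk must be driven down with more extensive audit procedures to hold overall audit risk at an acceptable level.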
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure.
This is a brief outline of the Threat Testing process:
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
* The risk of the threat occurring multiplied by the risk of system failure (Control Risk) is the probability of the assertion not being sustained in operation.
The sum of all such threat-related probabilities is the total risk of assertion failure in the system.
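The probability arithmetic in the outline above can be sketched in a few lines. The threat descriptions and all probabilities below are invented purely for illustration:

```python
# Threat Testing sketch: for each threat against an assertion, the chance the
# assertion fails is P(threat occurs) * P(control system fails to stop it);
# summing over all threats gives the total risk of assertion failure, as in
# the outline above. Threat names and probabilities are hypothetical.

threats = [
    # (threat description, P(occurrence), P(control failure given threat))
    ("duplicate grant acquittal submitted",  0.10, 0.20),
    ("payment attributed to wrong period",   0.05, 0.30),
    ("grant approved without authorisation", 0.02, 0.50),
]

def assertion_failure_risk(threats):
    """Sum of P(occurrence) * P(control failure) over all threats."""
    return sum(p_occur * p_fail for _desc, p_occur, p_fail in threats)

print(f"total risk of assertion failure: {assertion_failure_risk(threats):.3f}")
```

Laying the figures out this way also shows where further audit effort pays off: the threats contributing most to the total are the ones whose controls warrant the closest testing.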
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self-contained units, including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >ITEM</th><th>CONTENTS</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is received when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
These models are then evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The Working Papers' documentation of the system culminates in the Assertion Matrix and the Control Strength & Weaknesses Chart. The systems analysis of Section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately, the product of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with: "Observation", "Risks and Implications", "Recommendations", and "Management Comment" sub sections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation,
** Arguments in favour of the breach and audit's comment on that argument;
* Inclusion of and focus on Action Plans; and
* Linking of findings to a clearly stated premise for the finding's importance: the Assertions affected.
Although the report structure is one of the aspects of RIAM specifically tailored to the client, most clients adopt a close variation of one standard structure. RIAM includes five distinct report structures to assist clients in identifying their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews, the vast majority of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
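The standard four-part finding structure above lends itself to a simple record type, which is useful when findings are tracked or summarised programmatically. This is an illustration only; the class and field names are hypothetical and not prescribed by RIAM:

```python
# Illustrative record for the standard four-part finding structure
# (Observation, Implications and Risks, Recommendations, Management Comment).
# Class and field names are hypothetical, not prescribed by RIAM.
from dataclasses import dataclass, field

@dataclass
class Finding:
    observation: str                 # observed facts, legislation, directions
    implications_and_risks: str      # assertions affected, risks and exposures
    recommendations: list = field(default_factory=list)  # numbered, specific
    management_comment: str = ""     # response and action taken

finding = Finding(
    observation="Acquittals were accepted without grantee certification.",
    implications_and_risks="Suppresses the assertion that grant expenditure "
                           "is bona fide.",
    recommendations=["1. Require grantee certification before acceptance."],
)
print(len(finding.recommendations))  # 1
```

Keeping the four sections as distinct fields mirrors the report discipline described above: every recommendation remains traceable to an observation and to the assertions it affects.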
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targetted 'questions' about an audit area. Questions focus the review, while assertions define the criterion for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses compliance of the system process with a strategic and/or tactical purpose, compliance of practice with procedure and the awareness and readiness presented by potential (risks and opportunities) in the system itself.
The avoidance of checklists makes the auditor adaptable by permanently adopting a 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to transactional compliance level, and from 'hard' financial processes to 'soft' subjective process.
Very large organisations present some particular challenges for the systems based audit, including coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview to the technical aspects of the RIAM SBA, in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit context.
==What Are Assertions?==
The figure on the preceding diagramme summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "Scope of work" within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes" identical to those used in our Control Implementation Service, and are classifiable according to the "Type" - preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily we mean that our review has tested the truth of a set of assertions and we have found that they have been sustained. Thus testing the assertions is the purpose of the audit.
Assertions are the focus and underlay the structure of the RIAM analytic method. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: They allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?", is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of theat same word. If we say, "effective in this context means accurate and timely" then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus by combining a focus question, with the assertions that define a "yes" answer, we as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown - because we have stated clearly our context specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A reviews assertions are agreed with the auditee management before a review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit management identifies a number of questions about the target system they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The structure of the approach, diagrammed above, that meets the audit objectives has four phases. Here we summarise those phases. A more detailed discussion of these phases mapped into the context of both small team and large multi-location, team audits is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions , the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and Test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee after and during each task;
<br>
<br>
The control system's ability to support the assertions and therefore the key controls identified are analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although steps presented here suggest a linear sequence of steps, the correct approach involves regular, on-going to management during the conduct of the review. Interim reports, either formal or informal should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feed these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level, and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation.
* Application of Risk Analysis, not just at the Planning stage, but also the Threat Analysis stage when assessing Systems Design, and the Reporting Stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error is likely to occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and when combined with assertions, raises the certainty that our systems opinion is correct.
* Use of a variety of report and presentation styles to best communicate information; and
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure;
This is a brief outline of the Threat Testing process :
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
* The risk of the threat occurring multiplied by the risk of system failure (Control Risk) is probability of the assertion not being sustained in operation.
The sum of all such threat related probabilities is the total risk of assertion failure in the system.
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self-contained units including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >PHASE</th><th>ACTION</th><th>WHO</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td></td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td></td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td></td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td></td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td></td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td></td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td></td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td></td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td></td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td></td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td></td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td></td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is realised when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
And evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The working papers' documentation of the system culminates in the Assertion Matrix and the Control Strength & Weakness Chart. The systems analysis in section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately, the product of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with "Observation", "Risks and Implications", "Recommendations" and "Management Comment" subsections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation,
** Arguments in favour of the breach and audit's comment on that argument, and
** Linking of findings to a clearly stated premise for the finding's importance: the assertions affected.
* Inclusion of, and focus on, Action Plans.
Although the Report structure is one of the aspects of RIAM specifically tailored to the client, most adopt a close variation of one standard structure. RIAM includes five distinct report structures to assist clients in identifying their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews the vast majority of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
f16b463043523a28c430d6952430f5ca3868579b
Business Process Reengineering - Project Plan
0
287
321
2010-08-05T19:07:03Z
Bishopj
1
wikitext
text/x-wiki
==A Simple Business Standard Process Reengineering Project Plan==
The activities in the project might include:
<ol>
<li> Detailed Planning
<ul>
<li> Familiarisation and detailed planning for the project.
</ul>
<li> Data Collection
<ul>
<li> Review of the organisation culture, organisation structure, business plans, relevant benchmarks, policy framework, quality objectives, controlling legislation and operating constraints to identify externally and internally imposed organisation objectives. Expressed in both qualitative and quantitative terms, these form the basis for the decision information needs of management. The objectives are classified as either static (permanent and intrinsic to the purpose of the organisation, such as cost minimisation, timeliness, independence, etc.) or dynamic (short term and generally project based, such as delivery of a specific service or completion of a specific marketing activity).
<li> Vertical (top-down) and horizontal (functional) review of current management decision information needs, including:
<ul>
<li> Performance measures,
<li> Cost drivers
<li> Performance targets
<li> Reporting cycles
</ul>
<li> Review of the system’s decision support information facilities
<li> Decision requirements assessment and Process mapping of the operations for front and back office processes including:
<ul>
<li> A business process risk assessment to identify the key control objectives (and a Pareto Analysis if statistical control data is available for the existing system);
<li> A client-provider analysis in which the interaction of the various business functions are viewed as either receivers or providers of information to one another governed by “contractual” undertakings as to the quality of the data exchanged;
<li> A data-flow analysis in which we trace the movement and storage of data throughout the processes, both on and off the computer system. The data flow analysis provides a detailed framework for:
</ul>
<li> Eliminating duplication of data handling and storage;
<li> Eliminating unnecessary data;
<li> Identifying data requirements for each process;
<li> Optimising controls to business risks
<li> Defining critical data paths between the initial creation of data (eg. the application clerk with whom the first point of contact is made) through to the ultimate use of that data in decision support (eg. the applicant whose business commencement is awaiting approval, or the officer charged with responsibility for maintaining application turnaround times). The critical path is the longest route through which any component of the data in a decision must pass, and therefore the path on which any delays are critical to performance. Time-related performance objectives will be established and monitored for critical paths.
</ul>
<li> System Analysis
<ul>
<li> Analyse the information collected in the preceding steps and agree lists of:
<ul>
<li> global information requirements of the system
<li> global control objectives (including performance characteristics of accuracy, timeliness, reliability, privacy, completeness, and relevance, etc)
<li> organisational characteristics (behavioural model)
<li> targets for key information processing times and other performance objectives
<li> system client-provider(s) and their data dependency relationships
<li> processes (tasks)
</ul>
</ul>
<li> System Design
<ul>
<li> Establish the appropriate behavioural model for the control system framework.
<li> Design, chart and document the new front and back office processes including the Active Control Management (ACM) control system which provides the backbone for continuous performance management of the system. The ACM tracks the performance of the control system providing regular statistical data.
<li> Develop roll-out strategy for implementation of new process modules and identify change management risks and strategies.
<li> Propose new system and its roll-out strategy to management and staff and adjust until agreement is reached.
</ul>
<li> System Implementation
<ul>
<li> Implement and test automated support systems (if any).
<li> Commence Training of staff
<li> Implement and test new processes on a staggered basis.
<li> Use ACM performance reporting system to tune and modify the control system as appropriate.
</ul>
<li> Project Wrap-Up
<ul>
<li> Report to the CEO and the Board as to the success (or otherwise!) of the project, benefits achieved, key operational assumptions, built in performance measures (with their safe operating
</ul>
</ol>
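The critical data path identified during data collection (step 2 above) is essentially a longest-path computation over the flow of data from first capture to its final use in a decision. A minimal sketch, with all step names and durations invented for illustration:

```python
from functools import lru_cache

# Hypothetical data-flow graph: step -> [(next step, processing days), ...].
flows = {
    "application clerk": [("data entry", 1), ("scanning", 2)],
    "data entry": [("validation", 1)],
    "scanning": [("validation", 3)],
    "validation": [("approval decision", 2)],
    "approval decision": [],  # ultimate use of the data in a decision
}

@lru_cache(maxsize=None)
def longest_path(step):
    """Longest total duration from `step` to any terminal step: the critical path."""
    if not flows[step]:
        return 0
    return max(days + longest_path(nxt) for nxt, days in flows[step])

# The critical path sets the floor for any time-related performance objective.
print(longest_path("application clerk"))
```

Any delay on a step along this longest route delays the final decision, which is why time-related performance objectives are monitored on critical paths first.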
<noinclude>
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
0dc17bd6e421ae5d6a88bfab437948793ec33b37
Managing Risk in Mergers & Acquisitions - Causes of Success & Failure
0
291
333
2010-08-06T15:49:13Z
Bishopj
1
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2010 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting are credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
For the purposes of this article we will use the terms merger, acquisition and M&A interchangeably for the general activity of conducting a merger or acquisition of one legal business entity by another. The discussion will focus on M&A activities between distinct legal entities rather than business units within a legal entity, as the issues in the latter case are fundamentally different from those in the former.
Strictly speaking, a merger differs from an acquisition in that in an acquisition one entity assumes control of and absorbs another entity, usually expunging the acquired entity's operational distinctiveness. In a merger, two or more entities join their business and control structures in a manner that delivers some level of shared control and business profile. In reality, the actual outcomes are rarely purely those of an acquisition or a merger - regardless of the original intentions. The act of acquiring or merging almost always results in irrevocable cultural and operational change for all entities involved - not just the entity acquired.
For this reason, and for reasons that will become apparent later on, we shall treat both activities as essentially the same.
Irrespective of the rhetoric surrounding the merger, in order to succeed it is critical for the parties to the merger (and particularly the dominant party) to understand clearly why they are really merging. Typical reasons for merging include (in no particular order):
* Economies of scale through larger productive capacity or ability to share services
* Vertical integration of productive capacity or the supply chain
* Market share / elimination of direct or indirect competition
* Securing supply
* Asset acquisition or stripping
* Strategic hedging through addition of counter cyclical products to the group mix
* Acquisition of access to Intellectual Property
* Geographic expansion or access to markets with entry barriers
* Accumulation of complementary product/service sets
* Suppression of emerging product line / Intellectual Property threats
* Acquisition of customers
Not all of these motivations will pass traditional measures of success such as "improved productivity" or "staff retention" - as clearly in a number of these cases the underlying purpose of the merger has nothing to do with establishing a bigger, better, more efficient business - just a safer business environment.
If your purpose is merely to eliminate a competitor, acquire their IP, or strip their assets, etc., much of the discussion in this paper will be of limited applicability to your situation. Your objectives are met if the price you pay for acquisition and business wind-up delivers these outcomes for less than you gain in return. If your purpose is to gain productivity improvements, economies of scale or complementary product mix outcomes, and to retain as much of the acquired (or junior partner's) business / delivery capability as possible, then this paper is relevant to your circumstances.
=M&A - The State of the Industry=
==What Measure Success?==
The most obvious outcome of any M&A is prima-facie the elimination of an actual or potential competitor from the competitive mix.
In 1999 KPMG published a study of merger outcomes over the preceding 10 years. The study identified that 75% to 83% of mergers fail, where failure was measured by lower productivity, labour unrest, higher absenteeism, loss of shareholder value or even dissolution of the companies.
This and other studies highlight a central question in determining the strategy for a successful merger - what is the basis for measuring the success of an M&A project?
<table>
<tr>
<th>
Success Measure
</th>
<th>
Survey Outcome
</th>
<th>
Year of Study
</th>
</tr>
<tr>
<td>
Achievement of anticipated purpose
</td>
<td>
30-45%
</td>
<td>
1997
</td>
</tr>
<tr>
<td>
Achievement of strategic or financial object
</td>
<td>
<20%
</td>
<td>
1983, 1991, 1994
</td>
</tr>
<tr>
<td>
Preserve or Enhance book value
</td>
<td>
25%-45%
</td>
<td>
1988, 1999
</td>
</tr>
<tr>
<td>
Enhance shareholder value
</td>
<td>
17%
</td>
<td>
1995
</td>
</tr>
<tr>
<td>
Preserve or improve NPAT
</td>
<td>
<50%
</td>
<td>
1996, 1999
</td>
</tr>
<tr>
<td>
Preserve or improve productivity
</td>
<td>
<25%
</td>
<td>
1988, 1999
</td>
</tr>
<tr>
<td>
Preserve strike, absenteeism and accidents levels
</td>
<td>
<50%
</td>
<td>
1977, 1981, 1999
</td>
</tr>
<tr>
<td>
Financially advantageous in Long Term
</td>
<td>
20-50%
</td>
<td>
1978, 1988, 1999
</td>
</tr>
<tr>
<td>
Financially advantageous in Short Term
</td>
<td>
50%
</td>
<td>
1996
</td>
</tr>
</table>
A summary of the conclusions from a number of these studies can be found in [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
It is clear from the range of studies and the span of years they cover that successful mergers are distressingly and consistently unlikely - at least with respect to these measures of success. A merger, like life, is not a dress rehearsal. Unfortunately, as most executives go through a merger only rarely, mistakes are common, and the first time you do it, it will be for real. It is therefore important to learn, as far as possible, from the conclusions of others that have gone before - because the odds of success are not in your favour.
Both the Zweig (1995) and KPMG (1999) studies of merger outcomes found that only 17% of mergers resulted in an enhancement of either shareholder value or key performance drivers. Perhaps of even greater concern, Zweig found that shareholder value was actually destroyed in 53% of cases, and KPMG determined that the performance drivers actually weakened in 78% of cases:
<table>
<tr>
<td>
[[image:Zweig95_M&A_ImpactOnShareValue.jpg]]
</td>
<td>
[[image:KPMG99_M&A_ImpactOnKPI.jpg]]
</td>
</tr>
</table>
=Why Merge=
Studies of merger outcomes in terms of only classical performance or direct shareholder value enhancement imply a need for successful integration of the pre-merger businesses. This assumption does not capture the total range of success measures that might properly apply to merger motivations (regardless of the public rhetoric of the entities involved). The need for successful integration of the pre-merger businesses depends on the true underlying motivation for the merger:
[[Image:MnA WhyMerge.jpg]]
The fundamental driver for measuring post-merger success is first to clearly define the reason(s) for the merger. As successful integration of the merged businesses is possibly among the hardest of the successful outcomes to achieve, it is essential to map the requirement for this strategy to the reason for the merger. Ordered from least to highest need for integration, typical merger motivations might include:
# Eliminate a competitor
# Hedge market cycles
# Acquire brand
# Enter a geographic market
# Integrate vertically
# Opportunistic
# Grow market share
# Cut costs – economies of scale
# Grow size (defensive)
# Acquire technical or management expertise
=Reasons For Failure=
==A Summary of the Recent Studies==
Integration of the pre-merger businesses in the post-merger entity is a precursor to success in (possibly) the majority of merger strategies. From a comprehensive review of the literature we have identified the most common reasons cited for integration failure (with two added by the author from direct, anecdotal, experience).
{| border="1"
! !! Reason !! %
|-
|1 || Poorly planned and managed integration || 100
|-
|2 || Neglect of existing business due to the attention being paid to the acquired business || 68
|-
|3 || Underestimating the depth & pervasiveness of human issues triggered by the merger || 50
|-
|4 || Loss of key staff in acquired business || 50
|-
|5 || Demotivation of employees of acquired business || 50
|-
|6 || Underestimating problems of skill transfer || 34
|-
|7 || Selecting the wrong partner || 34
|-
|8 || Cultural incompatibility || 17
|-
|9 || Delayed decisions due to breakdown of responsibilities, delegations & authority || 17
|-
|10 || Too much focus on doing the deal - not enough on to integration planning & management || 17
|-
|11 || Insufficient research (due diligence) into the acquired business || 17
|-
|12 || Paying the wrong price or at the wrong time || 17
|-
|13 || Buying for the wrong reasons || 17
|-
|14 || Incompatible business and IT systems || JB
|-
|15 || Doomed by negotiation || JB
|-
|}
IT systems are likely to increase in importance because in the last 10-15 years they have become more entwined with business models & processes than was the case when some of the studies on which this data is based were conducted, and in larger organisations they can represent a key (and differentiating) part of the business's infrastructure investment. Incompatibility can be a critical financial and technical barrier to successful integration.
The last point emphasises that where one party in the pre-merger negotiation wins, the merged entities generally lose.
==Failure in a Nutshell==
Where business integration is a key ingredient of the post-merger mix, the studies allow us to identify the top 5 risks that result in merger failure:
# Integration poorly planned and managed
# Underestimated cultural & human risks
# Loss of key success enablers (eg staff)
# Inaccurate financial due diligence
# Neglecting current business
As these studies examined mergers that actually completed (i.e. the takeover survived the acquisition process), they ignored a common reason for merger failure: non-completion. Reasons for non-completion might include:
# Legal (non-participating competitor) or regulatory intervention
# Unacceptable risks, asset/liability valuations or cultural issues emerging during due diligence
# Exogenous market shifts during the merger process (such as changes in market conditions of demand, financing, etc.)
# Death or departure of key personnel from the target entities
# Excessive regulatory or judicial hurdles causing the process to extend unacceptably for the participants
# Failure, or inability to offer sufficient compensation to the vendors
# Gazumping by competitor acquirers
=Reasons for Success=
Conversely, both formal studies and deductive reasoning allow us to identify the key reasons for successful mergers:
* No need to achieve an integrated business, and "right" price paid
* Nature of post merger structure (vertical, conglomerate or geographic, etc)
* Clearly enunciated & communicated direction
* Acquisition-specific & flexible integration strategy
* Clear decision structure and role definitions
* A sense of urgency and outcome ownership
* Compatible business systems
* Compatible business cultures
* Compatible accounting practices
* Integration ready culture
* Commonality of merger goals
* Active risk management strategy
* Actively managed, tracked & resourced integration project
* Minimised debt service load
* Pre-existing partnering or cohabitation
=Further Reading=
In our next article [[Managing Risk in Mergers & Acquisitions - A Success Strategy]], we examine how to apply this knowledge to create a successful merger strategy.
A cross-linked review of the literature over a span of 20 years is available at [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
c05ec1f1dcaeb33c93138ca7f37f54140649ecab
{| border="1"
! !! Reason !! %
|-
|1 || Poorly planned and managed integration || 100
|-
|2 || Neglect of existing business due to the attention being paid to the acquired business || 68
|-
|3 || Underestimating the depth & pervasiveness of human issues triggered by the merger || 50
|-
|4 || Loss of key staff in acquired business || 50
|-
|5 || Demotivation of employees of acquired business || 50
|-
|6 || Underestimating problems of skill transfer || 34
|-
|7 || Selecting the wrong partner || 34
|-
|8 || Cultural incompatibility || 17
|-
|9 || Delayed decisions due to breakdown of responsibilities, delegations & authority || 17
|-
|10 || Too much focus on doing the deal - not enough on to integration planning & management || 17
|-
|11 || Insufficient research (due diligence) into the acquired business || 17
|-
|12 || Paying the wrong price or at the wrong time || 17
|-
|13 || Buying for the wrong reasons || 17
|-
|14 || Incompatible business and IT systems || JB
|-
|15 || Doomed by negotiation || JB
|-
|}
IT systems are likely to increase in importance because in the last 10-15 years they have become more entwined with business models & processes than was possibly the case when some of these studies on which this data is based were conducted, and in larger organisations can represent a key (and diferentiating) part of the businesses infrastructure investment. Incompatibility can be a critical financial and technical barrier to successful integration.
The last point emphasises that where one party in the pre-merger negotiation wins, the merged entities generally lose.
==Failure in a Nutshell==
Where business integration is a key ingredient of the post-merger mix, the studies allow us to identify the top 5 risks of that result in merger failure:
# Integration poorly planned and managed
# Underestimated cultural & human risks
# Loss of key success enablers (eg staff)
# Inaccurate financial due diligence
# Neglecting current business
As these studies examined mergers that actually completed (i.e. the tacke over survived the acquisition process), the studies ignored a common reason for merger failure: That of non-completion. Reasons for non completion might include:
# Legal (non participating competitor) or regulatory intervention
# Unacceptable risks, asset/liability valuations or cultural issues emerging during sue-dilligence
# Exogenous market shifts during the merger process (such as changes in market conditions of demand, financing, etc.)
# Death or departure of key personnel from the target entitites
# Excessive regulatory or judicial hurdles causing the process to extend unacceptably for the participants
# Failure, or inability to offer sufficient compensation to the vendors
# Gazumping by competitor acquirers
=Reasons for Success=
Conversely both formal studies and deductive reasoning allows us to identify the key reasons for successful mergers.
* No need to achieve an integrated business, and "right" price paid
* Nature of post merger structure (vertical, conglomerate or geographic, etc)
* Clearly enunciated & communicated direction
* Acquisition-specific & flexible integration strategy
* Clear decision structure and role definitions
* A sense of urgency and outcome ownership
* Compatible business systems
* Compatible business cultures
* Compatible accounting practices
* Integration ready culture
* Commonality of merger goals
* Active risk management strategy
* Actively managed, tracked & resourced integration project
* Minimised debt service load
* Pre-existing partnering or cohabitation
=Further Reading=
In our next article [[Managing Risk in Mergers & Acquisitions - A Success Strategy]], we examine how to apply this knowledge to create a successful merger strategy.
A cross linked review of the of the literature over a span of 20 years is available at [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
c05ec1f1dcaeb33c93138ca7f37f54140649ecab
487
471
2010-08-06T15:49:13Z
Bishopj
1
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2010 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
For the purposes of this article we will use the terms merger, acquisition and M&A interchangeably for the general activity of conducting a merger or acquisition of one legal business entity by another. The discussion will focus on M&A activities between distinct legal entities rather than business units within a legal entity, as the issues in the latter case are fundamentally different from those in the former.
Strictly speaking, a merger differs from an acquisition in that in an acquisition one entity assumes control and absorbs another entity, usually expunging the acquired entity's operational distinctiveness. In a merger two or more entities join their business and control structures in a manner that delivers some level of shared control and business profile. In reality, the actual outcomes are rarely purely those of an acquisition or a merger - regardless of the original intentions. The act of acquiring or merging almost always results in irrevocable cultural and operational change for all entities involved - not just the entity acquired.
For this reason, and for reasons that will become apparent later on, we shall treat both activities as essentially the same.
Irrespective of the rhetoric surrounding the merger, in order to succeed, it is critical for the parties to the merger (and particularly the dominant party) to understand clearly why they are really merging. Typical reasons for merging include (in no particular order):
* Economies of scale through larger productive capacity or ability to share services
* Vertical integration of productive capacity or the supply chain
* Market share / elimination of direct or indirect competition
* Securing supply
* Asset acquisition or stripping
* Strategic hedging through addition of counter cyclical products to the group mix
* Acquisition of access to Intellectual Property
* Geographic expansion or access to markets with entry barriers
* Accumulation of complementary product/service sets
* Suppression of emerging product line / Intellectual Property threats
* Acquisition of customers
Not all of these motivations will pass traditional measures of success such as "improved productivity" or "staff retention" - as clearly in a number of these cases the underlying purpose of the merger has nothing to do with establishing a bigger, better, more efficient business - just a safer business environment.
If your purpose is merely to eliminate a competitor, acquire their IP, strip their assets, etc., much of the discussion in this paper will be of limited applicability to your situation. Your objectives are met if the price you pay for acquisition and business wind-up delivers these outcomes for less than you gain in return. If your purpose is to gain productivity improvements, economies of scale or complementary product-mix outcomes, and to retain as much of the acquired (or junior partner's) business and delivery capability as possible, then this paper is relevant to your circumstances.
=M&A - The State of the Industry=
==What Measure Success?==
The most obvious outcome of any M&A is, prima facie, the elimination of an actual or potential competitor from the competitive mix.
In 1999 KPMG published a study of merger outcomes over the preceding 10 years. The study identified that 75% to 83% of mergers fail, where failure was measured by lower productivity, labour unrest, higher absenteeism, loss of shareholder value, or even dissolution of companies.
This and other studies highlight a central question in determining the strategy for a successful merger - what is the basis for measuring the success of an M&A project?
{| border="1"
! Success Measure !! Survey Outcome !! Year of Study
|-
| Achievement of anticipated purpose || 30-45% || 1997
|-
| Achievement of strategic or financial objective || <20% || 1983, 1991, 1994
|-
| Preserve or enhance book value || 25%-45% || 1988, 1999
|-
| Enhance shareholder value || 17% || 1995
|-
| Preserve or improve NPAT || <50% || 1996, 1999
|-
| Preserve or improve productivity || <25% || 1988, 1999
|-
| Preserve strike, absenteeism and accident levels || <50% || 1977, 1981, 1999
|-
| Financially advantageous in long term || 20-50% || 1978, 1988, 1999
|-
| Financially advantageous in short term || 50% || 1996
|}
A summary of the conclusions from a number of these studies can be found in [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
It is clear from the range of studies and the span of years they cover that successful mergers are distressingly and consistently unlikely - at least with respect to these measures of success. A merger, like life, is not a dress rehearsal. Unfortunately, as most executives go through a merger only rarely, mistakes are common, and the first time you do it, it will be for real. It is therefore important to learn, as far as possible, from the conclusions of others that have gone before - because the odds of success are not in your favour.
The Zweig (1995) and KPMG (1999) studies of merger outcomes both found that only 17% of mergers resulted in an enhancement of, respectively, shareholder value or key performance drivers. Perhaps of even greater concern, Zweig found that shareholder value was actually destroyed in 53% of cases, and KPMG determined that the performance drivers actually weakened in 78% of cases:
<table>
<tr>
<td>
[[image:Zweig95_M&A_ImpactOnShareValue.jpg]]
</td>
<td>
[[image:KPMG99_M&A_ImpactOnKPI.jpg]]
</td>
</tr>
</table>
=Why Merge=
Studies that measure merger outcomes only in terms of classical performance or direct shareholder value enhancement implicitly assume a need for successful integration of the pre-merger businesses. This assumption does not capture the full range of success measures that might properly apply to merger motivations (regardless of the public rhetoric of the entities involved). The need for successful integration of the pre-merger businesses depends on the true underlying motivation for the merger:
[[Image:MnA WhyMerge.jpg]]
The fundamental driver for measuring post-merger success is to first clearly define the reason(s) for the merger. As successful integration of the merged businesses is possibly among the hardest outcomes to achieve, it is essential to map the requirement for this strategy to the reason for the merger. Ordered from least to highest need for integration, typical merger motivations might include:
# Eliminate a competitor
# Hedge market cycles
# Acquire brand
# Enter a geographic market
# Integrate vertically
# Opportunistic
# Grow market share
# Cut costs – economies of scale
# Grow size (defensive)
# Acquire technical or management expertise
=Reasons For Failure=
==A Summary of the Recent Studies==
Integration of the pre-merger businesses in the post-merger entity is a precursor to success in (possibly) the majority of merger strategies. From a comprehensive review of the literature we have identified the most common reasons cited for integration failure (with two added by the author from direct, anecdotal experience).
{| border="1"
! !! Reason !! %
|-
|1 || Poorly planned and managed integration || 100
|-
|2 || Neglect of existing business due to the attention being paid to the acquired business || 68
|-
|3 || Underestimating the depth & pervasiveness of human issues triggered by the merger || 50
|-
|4 || Loss of key staff in acquired business || 50
|-
|5 || Demotivation of employees of acquired business || 50
|-
|6 || Underestimating problems of skill transfer || 34
|-
|7 || Selecting the wrong partner || 34
|-
|8 || Cultural incompatibility || 17
|-
|9 || Delayed decisions due to breakdown of responsibilities, delegations & authority || 17
|-
|10 || Too much focus on doing the deal - not enough on integration planning & management || 17
|-
|11 || Insufficient research (due diligence) into the acquired business || 17
|-
|12 || Paying the wrong price or at the wrong time || 17
|-
|13 || Buying for the wrong reasons || 17
|-
|14 || Incompatible business and IT systems || JB
|-
|15 || Doomed by negotiation || JB
|-
|}
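Purely as an illustration (this sketch is ours, not drawn from any of the cited studies), the citation percentages in the table above could seed a simple risk-register prioritisation. The following Python sketch ranks a subset of the tabulated reasons by how often the literature cites them; the dictionary keys are abbreviated versions of the table's wording:

```python
# Illustrative only: rank integration-failure reasons from the table above
# by the percentage of reviewed studies citing each one. The weights come
# straight from the table; the ranking approach is a hypothetical example.
FAILURE_REASONS = {
    "Poorly planned and managed integration": 100,
    "Neglect of existing business": 68,
    "Underestimating human issues": 50,
    "Loss of key staff in acquired business": 50,
    "Demotivation of acquired employees": 50,
    "Underestimating skill-transfer problems": 34,
    "Selecting the wrong partner": 34,
    "Cultural incompatibility": 17,
}

def rank_reasons(reasons):
    """Return (reason, citation %) pairs sorted highest-cited first.

    sorted() is stable, so reasons with equal weights keep the
    table's original order.
    """
    return sorted(reasons.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for reason, pct in rank_reasons(FAILURE_REASONS):
        print(f"{pct:>3}%  {reason}")
```

A merger risk register built this way would, of course, need weights calibrated to the specific deal rather than literature-wide citation counts, but the ordering gives a defensible starting point for where to concentrate integration planning effort.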
IT systems are likely to increase in importance: in the last 10-15 years they have become more entwined with business models & processes than was the case when some of the studies on which this data is based were conducted, and in larger organisations they can represent a key (and differentiating) part of the business's infrastructure investment. Incompatibility can be a critical financial and technical barrier to successful integration.
The last point emphasises that where one party in the pre-merger negotiation wins, the merged entities generally lose.
==Failure in a Nutshell==
Where business integration is a key ingredient of the post-merger mix, the studies allow us to identify the top 5 risks that result in merger failure:
# Integration poorly planned and managed
# Underestimated cultural & human risks
# Loss of key success enablers (eg staff)
# Inaccurate financial due diligence
# Neglecting current business
As these studies examined mergers that actually completed (i.e. the takeover survived the acquisition process), they ignored a common reason for merger failure: non-completion. Reasons for non-completion might include:
# Legal (non participating competitor) or regulatory intervention
# Unacceptable risks, asset/liability valuations or cultural issues emerging during due diligence
# Exogenous market shifts during the merger process (such as changes in market conditions of demand, financing, etc.)
# Death or departure of key personnel from the target entities
# Excessive regulatory or judicial hurdles causing the process to extend unacceptably for the participants
# Failure, or inability to offer sufficient compensation to the vendors
# Gazumping by competitor acquirers
=Reasons for Success=
Conversely, both formal studies and deductive reasoning allow us to identify the key reasons for successful mergers:
* No need to achieve an integrated business, and "right" price paid
* Nature of post merger structure (vertical, conglomerate or geographic, etc)
* Clearly enunciated & communicated direction
* Acquisition-specific & flexible integration strategy
* Clear decision structure and role definitions
* A sense of urgency and outcome ownership
* Compatible business systems
* Compatible business cultures
* Compatible accounting practices
* Integration ready culture
* Commonality of merger goals
* Active risk management strategy
* Actively managed, tracked & resourced integration project
* Minimised debt service load
* Pre-existing partnering or cohabitation
=Further Reading=
In our next article [[Managing Risk in Mergers & Acquisitions - A Success Strategy]], we examine how to apply this knowledge to create a successful merger strategy.
A cross-linked review of the literature over a span of 20 years is available at [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
Managing Risk in Mergers & Acquisitions - A Review of the Literature
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over a 20 year period up until 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription services or book distributors as appropriate. Please advise us of any identified discrepancies.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not achieved to the magnitude expected, poor strategic fit, or unexpected changes in market conditions. However, considering financial and strategic factors alone is insufficient to achieve a successful merger or acquisition. Two human factors important to merger and acquisition success, which impact on integration, are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Whichever method is selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Mecks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988) determined the success rates post-acquisition to be around 50 per cent. More recent studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of the mergers and acquisitions studied were successful.
Estimates by Davy et al (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of acquired company
<li> departure of key people in acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact and only 17 per cent created shareholder value.
Many factors are attributable to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997) together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992) found the common factor restricting ability to achieve hoped-for synergies and financial gains to be (p. 1- 2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since the mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers determine that cultural incompatibility of the companies involved in the merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> organizational members experience difficulty adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ He also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions, and over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, management consultant, believes that many mergers fail not because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures, or how to overcome the differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through due diligence process to allow the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijilsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
In 2002 year-to-date, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
Prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs, and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism and loss of shareholder value, or even a dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is often also due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase because of improper management and strategy, cultural differences, delays in communication, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (author unknown)
Successful mergers and acquisitions consist of (p.1):
<ul>
<li> Acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization needs to be developed from the start
</ul>
In pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
Found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Success rates of mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative that mergers and acquisitions be implemented from a financially and legally sound standpoint, as well as from a behavioural one.
Leadership from top-level management is also important for merger success. Weber (1996) found that the higher the commitment of the acquired firm’s top management, the higher the effectiveness and financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended that a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover questions such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?’
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
Lack of due diligence has caused many merger failures. Involves comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Occurs when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than exists when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, therefore constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
A risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis as well as financial due diligence is important. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this can allow the companies to plan for any conflicts that might occur during the integration phase in respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
Problem is often that stakeholders focus on the short-term benefits from mergers and acquisitions such as cost reduction, which results in decisions being made that can sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people who possess them, assists in selecting the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’ are (p.145):
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
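Galpin and Herndon’s four measurement areas can be pictured as a simple tracking structure. The sketch below is illustrative only: the category names come from the text, while the `Measure` fields, the 10 per cent on-track tolerance, and the sample figures are assumptions invented for the example.

```python
from dataclasses import dataclass

# The four categories follow Galpin and Herndon; everything else here
# (fields, tolerance, sample data) is an illustrative assumption.
CATEGORIES = ("integration", "operational", "process_cultural", "financial")

@dataclass
class Measure:
    name: str
    category: str   # one of CATEGORIES
    target: float   # planned value for this reporting period
    actual: float   # observed value

    def on_track(self, tolerance: float = 0.1) -> bool:
        # Treat a measure as on track if actual is within 10% of target.
        return abs(self.actual - self.target) <= tolerance * self.target

def summarize(measures):
    """Count on-track measures per category, so 'hot spots' surface early."""
    summary = {c: {"on_track": 0, "total": 0} for c in CATEGORIES}
    for m in measures:
        summary[m.category]["total"] += 1
        if m.on_track():
            summary[m.category]["on_track"] += 1
    return summary

measures = [
    Measure("milestones completed", "integration", target=20, actual=18),
    Measure("customer retention %", "operational", target=95, actual=84),
    Measure("staff survey score", "process_cultural", target=4.0, actual=3.9),
    Measure("synergies booked ($m)", "financial", target=10, actual=9.5),
]
print(summarize(measures))
```

Grouping measures this way keeps the four processes ‘separate but interrelated’: each category is reported on its own, but a single summary shows where midcourse corrections are needed.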
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from (p.163):
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
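The verification and reporting processes recommended above amount to tracking achieved synergies against plan for each source. A minimal sketch, assuming hypothetical figures: the four source categories come from the text, while the `synergy_gap` helper and all dollar amounts are invented for illustration.

```python
# The four source categories follow the text; the figures ($m) and the
# helper function are hypothetical, for illustration only.
SOURCES = (
    "income generation",
    "non-staff expense reductions",
    "capital outlay avoidance",
    "staff expense reductions",
)

def synergy_gap(projected, achieved):
    """Return the shortfall (projected minus achieved) per source, so the
    verification process can flag where expected synergies are not landing."""
    return {s: projected[s] - achieved.get(s, 0.0) for s in projected}

projected = dict(zip(SOURCES, (12.0, 8.0, 5.0, 15.0)))
achieved = dict(zip(SOURCES, (9.5, 8.0, 2.0, 15.0)))

gaps = synergy_gap(projected, achieved)
shortfalls = {s: g for s, g in gaps.items() if g > 0}
print(shortfalls)  # sources still behind plan
```

Reporting the gap by source, rather than one aggregate number, mirrors the point above: knowing *where* synergies come from is as important as knowing whether the total is being achieved.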
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – Integration manager must be able to understand the emotional and cultural issues that are involved in a merger, and recognize that it isn’t just an ‘engineering exercise’, but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over the 20-year period to 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription services or book distributors as appropriate. Please advise us of any identified discrepancies.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not being achieved to the magnitude expected, poor strategic fit, or unexpected changes in market conditions. However, considering only financial and strategic factors is insufficient for achieving a successful merger or acquisition. Two important human factors in merger and acquisition success, both of which impact on integration, are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Whatever the method selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Meeks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988) determined post-acquisition success rates to be around 50 per cent. More recent studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of the mergers and acquisitions studied were successful.
An estimate by Davy et al. (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of acquired company
<li> departure of key people in acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact and only 17 per cent created shareholder value.
Many factors are attributable to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997) together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992) found the common factor restricting ability to achieve hoped-for synergies and financial gains to be (p. 1- 2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers have determined that the cultural incompatibility of the companies involved in the merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> organizational members experience difficulty adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ He also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions, and over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, management consultant, believes a number of failed mergers aren’t because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures or how to overcome these differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through the due diligence process, allowing the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijlsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
In 2002 year-to-date, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
The prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism and loss of shareholder value, or even a dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is often also due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase because of improper managing and strategy, culture differences, delays in communications, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (author unknown)
Successful mergers and acquisitions consist of (p.1):
<ul>
<li> Acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization being developed from the start
</ul>
In pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
Found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Success rates of mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have now generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative that mergers and acquisitions be implemented from a financially and legally sound standpoint as well as with a sound behavioural approach.
Leadership from top-level management is also important for merger success. Weber (1996) found the higher the commitment of the acquired firm’s top management, the higher the effectiveness and financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover things such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?’
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
Lack of due diligence has caused many merger failures. Involves comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Occurs when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, thereby constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
A risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis as well as financial due diligence is important. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this can allow the companies to plan for any conflicts that might occur during the integration phase in respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
Problem is often that stakeholders focus on the short-term benefits from mergers and acquisitions such as cost reduction, which results in decisions being made that can sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people possessing these traits, assists in the selection of the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’: (p.145)
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
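The scorecard idea above can be sketched as a simple data structure: critical success factors grouped by measurement category, each carrying a current status, with a query that surfaces the ‘hot spots’ the tracking process is meant to catch early. This is a minimal illustrative sketch only; the category and factor names below are invented examples, not taken from Galpin and Herndon:

```python
from dataclasses import dataclass, field

# Illustrative status values for each critical success factor (CSF).
ON_TRACK, AT_RISK, OFF_TRACK = "on track", "at risk", "off track"

@dataclass
class Scorecard:
    """A minimal merger-integration scorecard: CSFs grouped by
    measurement category, each with a current status."""
    factors: dict = field(default_factory=dict)  # {category: {csf: status}}

    def update(self, category: str, csf: str, status: str) -> None:
        # Record or revise the status of one critical success factor.
        self.factors.setdefault(category, {})[csf] = status

    def hot_spots(self) -> list:
        # Return the CSFs needing attention before they flare out of control.
        return [(cat, csf)
                for cat, csfs in self.factors.items()
                for csf, status in csfs.items()
                if status != ON_TRACK]

# Example status update across three of the measurement areas.
card = Scorecard()
card.update("Operational", "Customer retention rate", ON_TRACK)
card.update("Process/Cultural", "Key-staff attrition vs. expected", AT_RISK)
card.update("Financial", "Cost synergies realized", OFF_TRACK)
```

Here `card.hot_spots()` would flag the cultural and financial factors while leaving the on-track operational measure alone, mirroring the book's point that the scorecard exists to show progress on the most important factors per category.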
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from: (p.163)
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – Integration manager must be able to understand the emotional and cultural issues that are involved in a merger, and recognize that it isn’t just an ‘engineering exercise’, but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews (p.202 – 203)
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over a 20 year period up until 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription or book distributors as appropriate. Please advise any identified discrepancies.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not achieved to the magnitude expected, poor strategic fit or unexpected changes in market conditions. However, considering financial and strategic factors alone is insufficient to achieve a successful merger or acquisition. Two important human factors in merger and acquisition success which impact on integration are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Whatever the method selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Meeks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988) determined post-acquisition success rates to be around 50 per cent. More recent studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of the mergers and acquisitions studied were successful.
An estimate by Davy et al. (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of acquired company
<li> departure of key people in acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact, and only 17 per cent created shareholder value.
Many factors are attributed to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997), together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992), found the common factors restricting the ability to achieve hoped-for synergies and financial gains to be (p.1-2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since the mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers have determined that cultural incompatibility of the companies involved in a merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al. (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> difficulty experienced by organizational members in adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ He also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions; over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, management consultant, believes a number of mergers fail not because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures, or how to overcome the differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through the due diligence process, allowing the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijlsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
Year to date in 2002, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
The prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism and loss of shareholder value, or even the dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is often also due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase because of improper managing and strategy, culture differences, delays in communications, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (author unknown)
Successful mergers and acquisitions consist of (p.1):
<ul>
<li> Acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization needs to be developed from the start
</ul>
In pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
Found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Reported success rates of mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have now generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative that mergers and acquisitions be implemented from a financially and legally sound standpoint, as well as with a behavioural approach.
Leadership from top-level management is also important for merger success. Weber (1996) found that the higher the commitment of the acquired firm’s top management, the higher the effectiveness and financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover things such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?’
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
Lack of due diligence has caused many merger failures. Involves comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Occurs when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than exists when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, therefore constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
A risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis as well as financial due diligence is important. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this can allow the companies to plan for any conflicts that might occur during the integration phase in respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
The problem is often that stakeholders focus on the short-term benefits of mergers and acquisitions, such as cost reduction, which results in decisions being made that sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people possessing these traits, assists in the selection of the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’: (p.145)
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from: (p.163)
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – Integration manager must be able to understand the emotional and cultural issues that are involved in a merger, and recognize that it isn’t just an ‘engineering exercise’, but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews’ (p.202 – 203)
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over a 20-year period up until 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription services or book distributors as appropriate. Please advise any identified discrepancies.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not being achieved to the magnitude expected, poor strategic fit, or unexpected changes in market conditions. However, considering financial and strategic factors alone is insufficient to achieve a successful merger or acquisition. Two important human factors in merger and acquisition success, which impact on integration, are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Whichever method is selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Meeks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988) determined the post-acquisition success rate to be around 50 per cent. More recent studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of the mergers and acquisitions studied were successful.
An estimate by Davy et al (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of acquired company
<li> departure of key people in acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact, and only 17 per cent created shareholder value.
Many factors are attributable to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997), together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992), found the common factors restricting the ability to achieve hoped-for synergies and financial gains to be (p. 1- 2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since the mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers have determined that cultural incompatibility of the companies involved in a merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> organizational members experience difficulty adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ Weber also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions; over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, a management consultant, believes a number of mergers fail not because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures, or how to overcome the differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through the due diligence process, allowing the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijlsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
Year-to-date in 2002, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
Prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs, and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism and loss of shareholder value, or even a dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is often also due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase because of improper management and strategy, culture differences, delays in communication, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (Author unknown)
Successful mergers and acquisitions consist of (p.1):
<ul>
<li> Acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization needs to be developed from the start
</ul>
In pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
Found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Success rates of mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have now generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative that mergers and acquisitions be implemented from a financially and legally sound standpoint, as well as with a behavioural approach.
Leadership from top-level management is also important for merger success. Weber (1996) found the higher the commitment of the acquired firm’s top management, the higher the effectiveness and the financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover things such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?’
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
Lack of due diligence has caused many merger failures. Involves comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Occurs when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than exists when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, therefore constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
A risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis as well as financial due diligence is important. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this can allow the companies to plan for any conflicts that might occur during the integration phase in respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
Problem is often that stakeholders focus on the short-term benefits from mergers and acquisitions such as cost reduction, which results in decisions being made that can sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people possessing these assists in the selection of the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’: (p.145)
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from: (p.163)
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
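The four synergy sources above, combined with the document templates and reporting process recommended earlier, suggest a simple tracking structure. The sketch below is purely illustrative (the category identifiers and dollar figures are hypothetical, not from the book): submitted synergies are logged against their source category and summed for reporting.

```python
from collections import defaultdict

# The four synergy source categories described above (p.163).
CATEGORIES = (
    "income_generation",
    "expense_reduction_non_staffing",
    "capital_outlay_avoidance",
    "expense_reduction_staffing",
)

def summarise_synergies(entries):
    """Sum submitted synergy values ($) by source category, plus a grand total."""
    totals = defaultdict(float)
    for category, amount in entries:
        if category not in CATEGORIES:
            raise ValueError(f"unknown synergy category: {category}")
        totals[category] += amount
    totals["total"] = sum(totals[c] for c in CATEGORIES if c in totals)
    return dict(totals)

# Hypothetical submissions from integration teams:
entries = [
    ("income_generation", 1_200_000),
    ("expense_reduction_staffing", 800_000),
    ("capital_outlay_avoidance", 300_000),
    ("income_generation", 400_000),
]
print(summarise_synergies(entries))
```

In practice the same structure would feed the verification and communication processes listed above, with each entry carrying an owner and an achievement date.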
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – Integration manager must be able to understand the emotional and cultural issues that are involved in a merger, and recognize that it isn’t just an ‘engineering exercise’, but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews’ (p.202 – 203)
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
6e31233413f04229fc45c9c48f3a9109be21ba02
Managing Risk in Mergers & Acquisitions - A Success Strategy
0
295
381
2010-08-06T15:56:02Z
Bishopj
1
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms, and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions, both as the dominant and as the junior partner. Through study and application of the theory, and through participation in, and responsibility for, both successful and unsuccessful mergers, he has acquired detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2007 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting are credited, that this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
THIS ARTICLE IS NOT YET COMPLETE
=Pre-Merger Actions=
==Pre-merger Requisites==
* Beyond Financial Due-diligence (history & forecast)
** Financial,
** Legal,
** Cultural,
** Infrastructure, etc
* Include the cost of integration (including IT) in the forecasts
* Understand the financial structure, performance drivers and debt levels
* Understand the hidden control & decision relationships (why the acquired business really works)
* Understand all the stakeholders and implied or expressed service agreements
* Understand the meaning of merger success (in this context and for both parties)
* Agree the merger strategy (on both sides of the table)
* Don’t kill it during negotiation (greed is not good in this case)
==Bishop’s Stakeholder Communities Model==
===Analysing Strategy, Culture & Processes===
We see a business or business unit as having only activities designed to service these communities. Some processes exist purely to foster community interaction and membership; others deliver services the community needs, such as payroll, leave applications, advertisements, policy creation, complaints, help, and performance information and its dissemination. With a little thought and consistent application, the model proves both universal and scalable. You may use this model freely as long as the original author is always credited.
A business consists only of stakeholder communities:
<table>
<tr>
<td>
# Workforce
## Employees
## Contractors
# Suppliers
# Partners
## Business network
## Cooperative
# Customers
## Pay for goods & service
# Clients
## Receive goods & service
# Governance
## Regulators
## Board
## Senior exec
# Government
# Wealth / Enterprise Custodians
## Asset managers
## Treasury, equipment, IP
# The Public
## The ultimate source & influence on all other stakeholders
</td>
<td>
[[Image:BishopsStakeholderCommunityModel.png]]
</td>
</tr>
</table>
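For analysis work, the stakeholder community hierarchy above can be held as a simple nested data structure. The model itself is from this article; representing it in Python, and the helper below, are illustrative assumptions only:

```python
# Bishop's stakeholder communities, as a two-level taxonomy
# (top-level community -> list of sub-communities, possibly empty).
STAKEHOLDER_COMMUNITIES = {
    "Workforce": ["Employees", "Contractors"],
    "Suppliers": [],
    "Partners": ["Business network", "Cooperative"],
    "Customers": ["Pay for goods & service"],
    "Clients": ["Receive goods & service"],
    "Governance": ["Regulators", "Board", "Senior exec"],
    "Government": [],
    "Wealth / Enterprise Custodians": ["Asset managers", "Treasury, equipment, IP"],
    "The Public": ["The ultimate source & influence on all other stakeholders"],
}

def all_communities(model):
    """Flatten the taxonomy into (community, sub-community) pairs.

    Communities with no listed sub-communities yield (community, None).
    """
    return [(top, sub) for top, subs in model.items() for sub in (subs or [None])]

print(len(STAKEHOLDER_COMMUNITIES))  # 9 top-level communities
```

Mapping each business process or service to one or more of these pairs is one way to apply the model's claim that a business "consists only of stakeholder communities".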
=Post Merger Actions=
==Introduction==
* Understand the required degree of integration for the intended merger outcome
* Assess and monitor merger & integration risk
** Including: triggering events, consequences, remediation, responsibility, escalations
** Consider carefully the role of internal & external brands
* Empower the merger from the top
** Establish a merger or integration steering committee
*** Comprising board + stakeholder executive (include IT)
* Establish an integration manager / office
** Assemble the right-skilled integration team
** Focus Internal PR on bonding and service crossflow (not happy sheets)
** Establish a specific IT integration/interfacing advisory panel, including business leaders
** Establish an integration ‘help-desk’ & communicate its existence
* Re-Perform cultural due diligence (where high integration exists)
* Perform targeted redundancies early & together – then tell the team it is over
* Revise Management Performance Reporting
** Target at the required integration degree
* Implement an integration strategy
** Work in many short (100 day) projects
* Implement a merger tracking programme
** Defined performance measures with targets (automate)
** Risk & remediation managed (automate)
** Progress & outcome communications
* Monitor progress and revise strategy
==Empower from the Top==
Weber (1996) concluded that merger successes were generally CEO-led, by CEOs who:
* Dedicate executive time and focus
* Put together a leadership team to drive it
* Focus management attention on formal success factors
* Create a sense of human purpose and direction
* Model desired behaviour and ‘rules of the road’
==Distilling the Risks==
(Weber (1996) & Bishop)
# Is the combination achieving financial and operational goals? (R1)
# Are schedules on target and are changes being implemented effectively? (R2)
# Do employees understand and support the need for change? (R3)
# What is the effect on people’s well-being and esprit-de-corps? (R4)
# Are managers at all levels taking steps to minimise negative reactions and build positive feelings? (R5)
# Is productivity or work quality being affected? (R6)
# Do people understand their new roles and what is expected? (R7)
# Are client and staff complaint levels stable or dropping? (R8)
# Is the IT Business Process value map stable or declining? (See the next section for an example.) (R9)
# Is the post-merger integration investment budget on track? (R10)
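The ten questions above amount to a small risk register, with each question tracked under its R-code. A minimal sketch (the status scale, field names, and example ratings are illustrative assumptions, not from Weber or Bishop):

```python
from dataclasses import dataclass

@dataclass
class MergerRisk:
    code: str                   # e.g. "R1"
    question: str               # the monitoring question from the list above
    status: str = "unassessed"  # assumed scale: "green" / "amber" / "red"
    owner: str = ""             # accountable manager

def open_risks(register):
    """Return the codes of risks not currently rated green."""
    return [r.code for r in register if r.status != "green"]

# Hypothetical assessments:
register = [
    MergerRisk("R1", "Is the combination achieving financial and operational goals?", "green"),
    MergerRisk("R4", "What is the effect on people's well-being and esprit-de-corps?", "amber"),
    MergerRisk("R9", "Is the IT Business Process value map stable or declining?", "red"),
]
print(open_risks(register))  # → ['R4', 'R9']
```

As recommended under "Implement a merger tracking programme" above, such a register would ideally be automated, with triggering events, consequences, remediation, responsibility, and escalations recorded against each entry.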
==The IT and Business Process Value Map==
$NTV – Net Time Value (the net contribution over the life of an IT system); $TNTV – the total NTV across a row (IT system) or column (business process).
This table can be produced at the business process level, the business unit level, and so on.
DO NOT UNDERESTIMATE THE IMPACT OF IT ISSUES
<table>
<tr><th> </th><th>BP1</th><th>BP2</th><th>BP3</th><th>BP4</th><th>System $TNTV</th></tr>
<tr><td>IT Sys1</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$TNTV</td></tr>
<tr><td>IT Sys2</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$TNTV</td></tr>
<tr><td>IT Sys3</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$TNTV</td></tr>
<tr><td>IT Sys4</td><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><td>IT Sys5</td><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><td>IT Sys6</td><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><td>IT Sys7</td><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><td>IT Sys8</td><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><td>IT Sys9</td><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><td>Process $TNTV</td><td>$TNTV</td><td>$TNTV</td><td>$TNTV</td><td>$TNTV</td><td></td></tr>
</table>
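Computationally, the value map is just a matrix of $NTV contributions whose row totals give each IT system's $TNTV and whose column totals give each business process's $TNTV. An illustrative sketch (the figures, and the restriction to three systems and four processes, are hypothetical):

```python
# Rows: IT systems; columns: business processes BP1..BP4.
# Each cell is the $NTV the system contributes to that process.
ntv = {
    "IT Sys1": [120, 80, 0, 40],
    "IT Sys2": [60, 0, 90, 10],
    "IT Sys3": [0, 30, 30, 30],
}

def system_totals(ntv):
    """$TNTV per IT system (row totals)."""
    return {sys: sum(cells) for sys, cells in ntv.items()}

def process_totals(ntv):
    """$TNTV per business process (column totals)."""
    n = len(next(iter(ntv.values())))
    return [sum(cells[i] for cells in ntv.values()) for i in range(n)]

print(system_totals(ntv))   # row totals, one per IT system
print(process_totals(ntv))  # column totals, one per business process
```

Monitoring risk R9 then reduces to recomputing these totals over time and watching for decline in any row or column.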
==Tracking Success – The Scorecard==
* Market measures
* Integration measures
* Operational measures
* Process measures
* Cultural measures
* Financial measures
* Purpose measures
==Role of the Integration Manager==
(Ashkenas & Francis 2001)
* Inject Speed
** Ramp up planning
** Accelerate implementation
** Push for decisions & actions
** Monitor progress & report to CEO/Steering
* Engineer Success
** Identify critical business synergies
** Define and launch 100 day projects
** Orchestrate business process transformation toward combined-entity best practice
* Make Social Connections
** Serve as a travelling ambassador between locations and businesses
** Serve as a lightning rod for hot issues (& venting)
** Interpret the customs language and culture of both companies
* Create Structure
** Provide flexible integration frameworks
** Mobilize joint teams
** Create key events and timelines
** Facilitate team and exec review
==Engaging The Right Skills==
* Project management
* Risk management
* Process reengineering
* IT interfacing / integrating
* Marketing & Brand management
* Intra-Corporate & Public Relations
* Corporate Governance
* Conglomerate Accounting & Finance
* Legal & HR
==Constraining Risk Events==
===Setting Strategic Priorities===
* Address:
** Corporate PR, marketing & sales quickly – to most external stakeholders, these ''are'' the company
* Focus on retaining key staff
* Focus on customer retention
* Focus on IT change cost
* Do not disconnect business process from IT systems during transition (and understand the ISNTV)
* Forge a new corporate identity – or know why you aren’t
* Focus/ Build on similarities – not differences
* Align capabilities, services and products
* Promote successes and strengths in the acquired entity
* There is no business more important than the firm’s business.
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
8a2e82879b309fbbf8459f35718ec9324f63473e
401
381
2010-08-06T15:56:02Z
Bishopj
1
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2007 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
THIS ARTICLE IS NOT YET COMPLETE
=Pre-Merger Actions=
==Pre-merger Requisits==
* Beyond Financial Due-diligence (history & forecast)
** Financial,
** Legal,
** Cultural,
** Infrastructure, etc
* Include the cost of integration (including IT) in the forecasts
* Understand the financial structure, performance drivers and debt levels
* Understand the hidden control & decision relationships (why the acquired business really works)
* Understand all the stakeholders and implied or expressed service agreements
* Understand the meaning of merger success (in this context and for both parties)
* Agree the merger strategy (on both sides of the table)
* Don’t kill it during negotiation (greed is not good in this case)
==Bishop’s Stakeholder Communities Model==
===Analysing Strategy, Culture & Processes===
We see a business or business unit as only having activities designed to service these communities. Some
Processes are purely to foster community interaction & membership, others are designed for services the
community needs like payroll, leave applications, advertisements, policy creation, complaints, help, performance
information and dissemination. With a little thought and consistent application the model proves both universal
and scalable.. You may use this model freely as long as the original author is always accredited.
A business consists only of stakeholder communities:
<table>
<tr>
<td>
# Workforce
## Employees
## Contractors
# Suppliers
# Partners
## Business network
## Cooperative
# Customers
## Pay for goods & service
# Clients
## Receive goods & service
# Governance
## Regulators
## Board
## Senior exec
# Government
# Wealth / Enterprise Custodians
## Asset managers
## Treasury, equipment, IP
# The Public
## The ultimate source & influence on all other stakeholders
</td>
<td>
[[Image:BishopsStakeholderCommunityModel.png]]
</td>
</tr>
</table>
=Post Merger Actions=
==Introduction==
* Understand the required degree of integration for the intended merger outcome
* Assess and monitor merger & integration risk
** Including: triggering events, consequences, remediation, responsibility, escalations
** Consider carefully the role of internal & external brands
* Empower the merger from the top
** Establish an merger or integration steering committee
*** Comprising board + stakeholder executive (include IT)
* Establish an integration manager / office
** Assemble the right-skilled integration team
** Focus Internal PR on bonding and service crossflow (not happy sheets)
** Establish a specific IT integration/interfacing advisory panel include business leaders
** Establish an integration ‘help-desk’ & communicate its existence
* Re-Perform cultural due diligence (where high integration exists)
* Perform targeted redundancies early & together – then tell the team it is over
* Revise Management Performance Reporting
** Target at the required integration degree
* Implement an integration strategy
** Work in many short (100 day) projects
* Implement a merger tracking programme
** Defined performance measures with targets (automate)
** Risk & remediation managed (automate)
** Progress & outcome communications
* Monitor progress and revise strategy
==Empower from the Top==
Weber (1996) concluded merger successes were generally CEO lead who:
* Dedicate executive time and focus
* Put together a leadership team to drive it
* Focus management attention on formal success factors
* Create a sense of human purpose and direction
* Model desired behaviour and ‘rules of the road’
==Distilling the Risks==
(Weber (96) & Bishop)
1 Is the combination achieving financial and operational goals? R1
2 Are schedules on target and are changes being implemented effectively? R2
3 Do employees understand and support the need for change? R3
4 What is the effect on people’s well-being and esprit-de-corps? R4
5 Are managers at all levels taking steps to minimise negative reactions and build positive feelings? R5
6 Are productivity or work quality being affected? R6
7 Do people understand their new roles and what is expected? R7
8 Are client and staff complaint levels stable or dropping? R8
9 Is the IT Business Process value map stable or declining? (See next slide for an example) R9
10 Is the post-merger integration investment budget on track? R10
==The IT and Business Process Value Map==
$NTV – Net Time Value (of net contribution over life of IT system)
This table runs at the businees process and business unit, etc levels
DO NOT UNDERESTIMATE THE IMPACT OF IT ISSUES
BP1 BP2 BP3 BP4
IT Sys1 $NTv $NTv $NTv $NTv $TNTV
IT Sys2 $NTv $NTv $NTv $NTv $TNTV
IT Sys3 $NTv $NTv $NTv $NTv $TNTV
IT Sys4 $TNTV
IT Sys5 $TNTV
IT Sys6 $TNTV
IT Sys7 $TNTV
IT Sys8 $TNTV
IT Sys9 $TNTV
$TNTV $TNTV $TNTV $TNTV
==Tracking Success – The Scorecard==
* Market measures
* Integration measures
* Operational measures
* Process measures
* Cultural measures
* Financial measures
* Purpose measures
==Role of the Integration Manager==
(Ashkenis & Francis 2001)
* Inject Speed
** Ramp up planning
** Accelerate implementation
** Push for decisions & actions
** Monitor progress & report to CEO/Steering
* Engineer Success
** Identify critical business synergies
** Define and launch 100 day projects
** Orchestrate BP transformation to combine entity Best Practice
* Make Social Connections
** Serve as a travelling ambassador between locations and businesses
** Serve as a lightning rod for hot issues (& venting)
** Interpret the customs language and culture of both companies
* Create Structure
** Provide flexible integration frameworks
** Mobilize joint teams
** Create key events and timelines
** Facilitate team and exec review
==Engaging The Right Skills==
* Project management
* Risk management
* Process reengineering
* IT interfacing / integrating
* Marketing & Brand management
* Intra-Corporate & Public Relations
* Corporate Governance
* Conglomerate Accounting & Finance
* Legal & HR
==Constraining Risk Events==
-Setting Strategic Priorities-
* Address:
** Corporate PR, marketing & sales quickly – these are the company to most external stakeholders
* Focus on retaining key staff
* Focus on customer retention
* Focus on IT change cost
* Do not disconnect business process from IT systems during transition (and understand the ISNTV)
* Forge a new corporate identity – or know why you aren’t
* Focus/ Build on similarities – not differences
* Align capabilities, services and products
* Promote successes and strengths in the acquired entity
* There is no business more important than the firm’s business.
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
8a2e82879b309fbbf8459f35718ec9324f63473e
473
401
2010-08-06T15:56:02Z
Bishopj
1
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2007 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
THIS ARTICLE IS NOT YET COMPLETE
=Pre-Merger Actions=
==Pre-merger Requisits==
* Beyond Financial Due-diligence (history & forecast)
** Financial,
** Legal,
** Cultural,
** Infrastructure, etc
* Include the cost of integration (including IT) in the forecasts
* Understand the financial structure, performance drivers and debt levels
* Understand the hidden control & decision relationships (why the acquired business really works)
* Understand all the stakeholders and implied or expressed service agreements
* Understand the meaning of merger success (in this context and for both parties)
* Agree the merger strategy (on both sides of the table)
* Don’t kill it during negotiation (greed is not good in this case)
==Bishop’s Stakeholder Communities Model==
===Analysing Strategy, Culture & Processes===
We see a business or business unit as only having activities designed to service these communities. Some
Processes are purely to foster community interaction & membership, others are designed for services the
community needs like payroll, leave applications, advertisements, policy creation, complaints, help, performance
information and dissemination. With a little thought and consistent application the model proves both universal
and scalable.. You may use this model freely as long as the original author is always accredited.
A business consists only of stakeholder communities:
<table>
<tr>
<td>
# Workforce
## Employees
## Contractors
# Suppliers
# Partners
## Business network
## Cooperative
# Customers
## Pay for goods & service
# Clients
## Receive goods & service
# Governance
## Regulators
## Board
## Senior exec
# Government
# Wealth / Enterprise Custodians
## Asset managers
## Treasury, equipment, IP
# The Public
## The ultimate source & influence on all other stakeholders
</td>
<td>
[[Image:BishopsStakeholderCommunityModel.png]]
</td>
</tr>
</table>
=Post Merger Actions=
==Introduction==
* Understand the required degree of integration for the intended merger outcome
* Assess and monitor merger & integration risk
** Including: triggering events, consequences, remediation, responsibility, escalations
** Consider carefully the role of internal & external brands
* Empower the merger from the top
** Establish an merger or integration steering committee
*** Comprising board + stakeholder executive (include IT)
* Establish an integration manager / office
** Assemble the right-skilled integration team
** Focus Internal PR on bonding and service crossflow (not happy sheets)
** Establish a specific IT integration/interfacing advisory panel include business leaders
** Establish an integration ‘help-desk’ & communicate its existence
* Re-Perform cultural due diligence (where high integration exists)
* Perform targeted redundancies early & together – then tell the team it is over
* Revise Management Performance Reporting
** Target at the required integration degree
* Implement an integration strategy
** Work in many short (100 day) projects
* Implement a merger tracking programme
** Defined performance measures with targets (automate)
** Risk & remediation managed (automate)
** Progress & outcome communications
* Monitor progress and revise strategy
==Empower from the Top==
Weber (1996) concluded merger successes were generally CEO lead who:
* Dedicate executive time and focus
* Put together a leadership team to drive it
* Focus management attention on formal success factors
* Create a sense of human purpose and direction
* Model desired behaviour and ‘rules of the road’
==Distilling the Risks==
(Weber (96) & Bishop)
1 Is the combination achieving financial and operational goals? R1
2 Are schedules on target and are changes being implemented effectively? R2
3 Do employees understand and support the need for change? R3
4 What is the effect on people’s well-being and esprit-de-corps? R4
5 Are managers at all levels taking steps to minimise negative reactions and build positive feelings? R5
6 Are productivity or work quality being affected? R6
7 Do people understand their new roles and what is expected? R7
8 Are client and staff complaint levels stable or dropping? R8
9 Is the IT Business Process value map stable or declining? (See next slide for an example) R9
10 Is the post-merger integration investment budget on track? R10
==The IT and Business Process Value Map==
$NTV – Net Time Value (of net contribution over life of IT system)
This table runs at the businees process and business unit, etc levels
DO NOT UNDERESTIMATE THE IMPACT OF IT ISSUES
BP1 BP2 BP3 BP4
IT Sys1 $NTv $NTv $NTv $NTv $TNTV
IT Sys2 $NTv $NTv $NTv $NTv $TNTV
IT Sys3 $NTv $NTv $NTv $NTv $TNTV
IT Sys4 $TNTV
IT Sys5 $TNTV
IT Sys6 $TNTV
IT Sys7 $TNTV
IT Sys8 $TNTV
IT Sys9 $TNTV
$TNTV $TNTV $TNTV $TNTV
==Tracking Success – The Scorecard==
* Market measures
* Integration measures
* Operational measures
* Process measures
* Cultural measures
* Financial measures
* Purpose measures
==Role of the Integration Manager==
(Ashkenis & Francis 2001)
* Inject Speed
** Ramp up planning
** Accelerate implementation
** Push for decisions & actions
** Monitor progress & report to CEO/Steering
* Engineer Success
** Identify critical business synergies
** Define and launch 100 day projects
** Orchestrate BP transformation to combine entity Best Practice
* Make Social Connections
** Serve as a travelling ambassador between locations and businesses
** Serve as a lightning rod for hot issues (& venting)
** Interpret the customs language and culture of both companies
* Create Structure
** Provide flexible integration frameworks
** Mobilize joint teams
** Create key events and timelines
** Facilitate team and exec review
==Engaging The Right Skills==
* Project management
* Risk management
* Process reengineering
* IT interfacing / integrating
* Marketing & Brand management
* Intra-Corporate & Public Relations
* Corporate Governance
* Conglomerate Accounting & Finance
* Legal & HR
==Constraining Risk Events==
-Setting Strategic Priorities-
* Address:
** Corporate PR, marketing & sales quickly – these are the company to most external stakeholders
* Focus on retaining key staff
* Focus on customer retention
* Focus on IT change cost
* Do not disconnect business process from IT systems during transition (and understand the ISNTV)
* Forge a new corporate identity – or know why you aren’t
* Focus/ Build on similarities – not differences
* Align capabilities, services and products
* Promote successes and strengths in the acquired entity
* There is no business more important than the firm’s business.
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
8a2e82879b309fbbf8459f35718ec9324f63473e
489
473
2010-08-06T15:56:02Z
Bishopj
1
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2007 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
THIS ARTICLE IS NOT YET COMPLETE
=Pre-Merger Actions=
==Pre-merger Requisits==
* Beyond Financial Due-diligence (history & forecast)
** Financial,
** Legal,
** Cultural,
** Infrastructure, etc
* Include the cost of integration (including IT) in the forecasts
* Understand the financial structure, performance drivers and debt levels
* Understand the hidden control & decision relationships (why the acquired business really works)
* Understand all the stakeholders and implied or expressed service agreements
* Understand the meaning of merger success (in this context and for both parties)
* Agree the merger strategy (on both sides of the table)
* Don’t kill it during negotiation (greed is not good in this case)
==Bishop’s Stakeholder Communities Model==
===Analysing Strategy, Culture & Processes===
We see a business or business unit as only having activities designed to service these communities. Some
Processes are purely to foster community interaction & membership, others are designed for services the
community needs like payroll, leave applications, advertisements, policy creation, complaints, help, performance
information and dissemination. With a little thought and consistent application the model proves both universal
and scalable.. You may use this model freely as long as the original author is always accredited.
A business consists only of stakeholder communities:
<table>
<tr>
<td>
# Workforce
## Employees
## Contractors
# Suppliers
# Partners
## Business network
## Cooperative
# Customers
## Pay for goods & service
# Clients
## Receive goods & service
# Governance
## Regulators
## Board
## Senior exec
# Government
# Wealth / Enterprise Custodians
## Asset managers
## Treasury, equipment, IP
# The Public
## The ultimate source & influence on all other stakeholders
</td>
<td>
[[Image:BishopsStakeholderCommunityModel.png]]
</td>
</tr>
</table>
=Post Merger Actions=
==Introduction==
* Understand the required degree of integration for the intended merger outcome
* Assess and monitor merger & integration risk
** Including: triggering events, consequences, remediation, responsibility, escalations
** Consider carefully the role of internal & external brands
* Empower the merger from the top
** Establish a merger or integration steering committee
*** Comprising board + stakeholder executive (include IT)
* Establish an integration manager / office
** Assemble the right-skilled integration team
** Focus Internal PR on bonding and service crossflow (not happy sheets)
** Establish a specific IT integration/interfacing advisory panel including business leaders
** Establish an integration ‘help-desk’ & communicate its existence
* Re-Perform cultural due diligence (where high integration exists)
* Perform targeted redundancies early & together – then tell the team it is over
* Revise Management Performance Reporting
** Target at the required integration degree
* Implement an integration strategy
** Work in many short (100 day) projects
* Implement a merger tracking programme
** Defined performance measures with targets (automate)
** Risk & remediation managed (automate)
** Progress & outcome communications
* Monitor progress and revise strategy
==Empower from the Top==
Weber (1996) concluded that successful mergers were generally CEO-led, by leaders who:
* Dedicate executive time and focus
* Put together a leadership team to drive it
* Focus management attention on formal success factors
* Create a sense of human purpose and direction
* Model desired behaviour and ‘rules of the road’
==Distilling the Risks==
(Weber (1996) & Bishop)
# Is the combination achieving financial and operational goals? (R1)
# Are schedules on target and are changes being implemented effectively? (R2)
# Do employees understand and support the need for change? (R3)
# What is the effect on people's well-being and esprit-de-corps? (R4)
# Are managers at all levels taking steps to minimise negative reactions and build positive feelings? (R5)
# Is productivity or work quality being affected? (R6)
# Do people understand their new roles and what is expected of them? (R7)
# Are client and staff complaint levels stable or dropping? (R8)
# Is the IT Business Process value map stable or declining? (See the value map below for an example.) (R9)
# Is the post-merger integration investment budget on track? (R10)
==The IT and Business Process Value Map==
$NTV – Net Time Value (of net contribution over the life of an IT system). This table can be run at the business process, business unit, and other levels.

DO NOT UNDERESTIMATE THE IMPACT OF IT ISSUES

{| border="1" cellpadding="4"
! !! BP1 !! BP2 !! BP3 !! BP4 !! Total
|-
! IT Sys1
| $NTV || $NTV || $NTV || $NTV || $TNTV
|-
! IT Sys2
| $NTV || $NTV || $NTV || $NTV || $TNTV
|-
! IT Sys3
| $NTV || $NTV || $NTV || $NTV || $TNTV
|-
! IT Sys4
| || || || || $TNTV
|-
! IT Sys5
| || || || || $TNTV
|-
! IT Sys6
| || || || || $TNTV
|-
! IT Sys7
| || || || || $TNTV
|-
! IT Sys8
| || || || || $TNTV
|-
! IT Sys9
| || || || || $TNTV
|-
! Total
| $TNTV || $TNTV || $TNTV || $TNTV ||
|}
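The roll-up behind the value map can be sketched as follows; the system names, process names and $NTV figures are invented for illustration, not drawn from any real mapping:

```python
# Hypothetical sketch of the IT / Business Process value map.
# Keys are IT systems; inner keys are the business processes each
# system contributes to, with an assumed $NTV per contribution.
ntv = {
    "IT Sys1": {"BP1": 120.0, "BP2": 45.0, "BP3": 10.0, "BP4": 5.0},
    "IT Sys2": {"BP1": 30.0, "BP2": 80.0},
    "IT Sys3": {"BP3": 60.0, "BP4": 15.0},
}

def system_total(system):
    """$TNTV for one IT system: the row total of its $NTV contributions."""
    return sum(ntv[system].values())

def process_total(process):
    """$TNTV for one business process: the column total across all IT systems."""
    return sum(contrib.get(process, 0.0) for contrib in ntv.values())

process_totals = {bp: process_total(bp) for bp in ["BP1", "BP2", "BP3", "BP4"]}
grand_total = sum(system_total(s) for s in ntv)
```

Each row total is the $TNTV of one IT system, each column total the $TNTV of one business process, and the grand total values the whole IT portfolio's net contribution.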
==Tracking Success – The Scorecard==
* Market measures
* Integration measures
* Operational measures
* Process measures
* Cultural measures
* Financial measures
* Purpose measures
==Role of the Integration Manager==
(Ashkenas & Francis 2001)
* Inject Speed
** Ramp up planning
** Accelerate implementation
** Push for decisions & actions
** Monitor progress & report to CEO/Steering
* Engineer Success
** Identify critical business synergies
** Define and launch 100 day projects
** Orchestrate BP transformation to combine entity Best Practice
* Make Social Connections
** Serve as a travelling ambassador between locations and businesses
** Serve as a lightning rod for hot issues (& venting)
** Interpret the customs language and culture of both companies
* Create Structure
** Provide flexible integration frameworks
** Mobilize joint teams
** Create key events and timelines
** Facilitate team and exec review
==Engaging The Right Skills==
* Project management
* Risk management
* Process reengineering
* IT interfacing / integrating
* Marketing & Brand management
* Intra-Corporate & Public Relations
* Corporate Governance
* Conglomerate Accounting & Finance
* Legal & HR
==Constraining Risk Events==
===Setting Strategic Priorities===
* Address:
** Corporate PR, marketing & sales quickly – these are the company to most external stakeholders
* Focus on retaining key staff
* Focus on customer retention
* Focus on IT change cost
* Do not disconnect business process from IT systems during transition (and understand the ISNTV)
* Forge a new corporate identity – or know why you aren’t
* Focus/ Build on similarities – not differences
* Align capabilities, services and products
* Promote successes and strengths in the acquired entity
* There is no business more important than the firm’s business.
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
8a2e82879b309fbbf8459f35718ec9324f63473e
Managing Risk in Mergers & Acquisitions
0
297
385
2010-08-06T15:56:38Z
Bishopj
1
wikitext
text/x-wiki
==Topics==
* [[Managing Risk in Mergers & Acquisitions - Causes of Success & Failure]]
* [[Managing Risk in Mergers & Acquisitions - A Success Strategy]]
* [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]]
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
975422383bcff83e8288f0207aa4f21d1f209d44
Risk Management
0
298
389
2010-08-07T02:04:15Z
Bishopj
1
wikitext
text/x-wiki
=Risk Management=
==The Risk Management View - How the Machine Looks From the Inside==
Risk Management is a philosophy of management science that sees an organisation's state in terms of the balance of its risk and opportunity portfolio. An organisation in a steady state will experience a rise in the value of opportunities commensurate with a rise in the volume or value of risk. A destructively unstable scenario would be rising risks with falling opportunity, while a rising value of opportunities with steady or falling risks might indicate either a desirable growth pattern or under-achievement of opportunities.
In its most common implementation today, risk management focuses on the risk side of the equation. With this constraint to its domain, risk management sees the universe as a variably dangerous place measured in terms of the likelihood of an event that might be a cause of some consequence that will have a measurable impact. A group of such events with shared impacts is a risk. A risk might have a severity (based on the likelihood of its various triggering events and the worst-case scenario of the impacts of those causal triggers) and it might have a value based on the impacts. With or without the value, one view of risk management might claim that risk management is about cost minimisation (in terms of anything measurable, like money, brand value, social standing, votes won, etc.). Minimising cost does not necessarily mean minimising risk itself, as other factors may influence that decision, such as the risk appetite (willingness to tolerate a level or type of risk) and confidence in the dependent opportunities (not measured in a risk-only model).
The causes and consequences of a risk might be seen, through their likelihood and impact respectively, to imply a particular inherent level of risk.
Once we know the risks, we naturally do things to prevent the triggers from occurring, to know when they have occurred, and to respond with corrective action in the event that a risk manifests as an occurrence. We call these things controls or strategies, and would be right to think that they should moderate our value for a given risk in some way.
The risk manager might accommodate this control impact in multiple ways depending on the risk model in use:
#By rating the controls themselves and reducing the total risk rating by applying this value in some way to the inherent risk, giving a rating of the risk remaining after controls are added – commonly known as the residual risk. The rating of controls and strategies is inexact in itself, and the addition of extra data for control ratings may be no more reliable than the instinctive feel for the control impact required in approach 2. Considerably more rigour may be needed in the understanding of controls than is common in management.
#By rating the likelihood and impact of a risk again AFTER the raters have considered the controls, thus having two ratings measuring likelihood and impact: inherent and residual. Under this approach the control impact is assumed in the revised likelihood and impact ratings. Controls should not be rated as a group under this approach, but can be rated separately to inform the residual likelihood and impact ratings. This method provides no way to reliably analyse the cost-effectiveness of individual control strategies from the resulting ratings.
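As a rough illustration of the difference between the two approaches, the sketch below uses assumed 1–5 likelihood and impact scales, a simple multiplicative inherent rating, and a 0–1 control-effectiveness discount; none of these choices is prescribed by the text:

```python
# Illustrative sketch only: the scales and formulas are assumptions.

def inherent_rating(likelihood, impact):
    """Inherent risk on a simple 5x5 matrix: likelihood x impact (1..25)."""
    return likelihood * impact

def residual_approach_1(likelihood, impact, control_effectiveness):
    """Approach 1: rate the controls themselves (0.0-1.0) and discount
    the inherent rating by that factor to obtain the residual risk."""
    return inherent_rating(likelihood, impact) * (1.0 - control_effectiveness)

def residual_approach_2(residual_likelihood, residual_impact):
    """Approach 2: re-rate likelihood and impact AFTER considering the
    controls; the control effect is implicit in the revised ratings."""
    return inherent_rating(residual_likelihood, residual_impact)
```

Note that only approach 1 keeps the control rating as an explicit input, which is why approach 2 cannot support cost-effectiveness analysis of individual controls from the resulting ratings.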
Together these components describe the essence of the model through which risk managers view the organisation, and thence the universe through which the organisation moves. With a risk-only view, the risk manager sees a health index in terms of risk to the organisation.
==The Risk Management Function - Keeping the Machine Healthy==
The risk manager uses the risk model to view the health state of an organisation. The risk manager improves and protects that state by managing essentially the input variables of the model. This includes:
#facilitating the process of identifying risks and their properties and the process of rating the risks.
#ensuring that every risk has a clear management responsibility attached to it.
#ensuring strategies have been devised to prevent (to some degree) causes where possible, to detect causes when they trigger and to mitigate consequential impacts.
#ensuring executive and governors are properly informed of the risk profile and changes therein over time.
#ensuring the accuracy of the model through actions such as regular review and re-rating of risks, monitoring strategy progress.
==Articles in this topic:==
Topics covered by articles include:
* [[Risk Management - Introduction]]
* [[BPC RiskManager Software Suite]]
* [[Managing Risk in Mergers & Acquisitions]]
The full category is available from:
[[:Category:Risk Management|Risk Management Topics]]
<noinclude>
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
5b321f41e0e0f3fa2c6fbf0d749aee11df42db35
Risk Management - Introduction
0
293
341
2010-08-07T02:15:55Z
Bishopj
1
/* Rating a Risk */
wikitext
text/x-wiki
==What Is Risk Management?==
===Risks, Causes & Consequences===
Risks to your operations and assets are a permanent and inescapable aspect of existence. Put simply, if you have an objective, the possibility exists that your objective may not be achieved. That possibility is risk.
Inputs required for your objective may not be available when required, or their cost may make the objective unviable, or the social or technical assumptions may be invalidated, etc. These are threats, or causes of objective failure, and therefore causes of risk. Threats exist – some latent and some active – but all are potential causes of the failure to achieve your objective (with varying likelihoods).
Further, it may be that failure to achieve the objective, or preserve the asset may have impacts far beyond the loss of the expected benefit to be derived, or value of the asset lost. Those impacts are the consequences. For example, at the individual business level, failure to achieve a strategic objective may result in failure of the business, while on the international stage, failure to achieve a diplomatic objective may impact the society detrimentally for generations to come, and failure to protect a critical military or hazardous materials technology may result in extensive loss of life.
Lastly, a risk may not be a bad thing – it might be a good thing, more commonly known as "an opportunity". Likewise, an impact may range not just from "nothing" to "really bad", but from "really good" through "nothing" to "really bad". In its fullest extent risk management covers both opportunities and exposures. Most of the following discussion will consider risk management in its more common guise as managing exposures, but when we consider "Competitive Risk Management" we will once again expand the definition.
<br>
===Risk Appetite===
The degree to which these undesired outcomes are more or less certain will affect your degree of concern about them. At the extreme ends, everybody may have pretty much the same response: an undesired outcome that is virtually certain to occur will probably be judged as unacceptable, while an undesired outcome that is virtually certain not to occur will probably be judged as acceptable. Between these extremes each individual, organisation, and society will have differing determinations of acceptability. This determination is also likely to vary with the nature of the undesired outcome (for example, a 50% chance of the loss of thousands of lives is generally considered less acceptable than a 50% chance of the loss of ten dollars). This variance in judgement is the risk appetite – literally your or your organisation's willingness to passively accept the possibility of a particular type of undesired outcome.
===Risk Response, Mitigation and Control===
The reactive leader, when faced with changed circumstances, will rapidly form a response. These responses are designed to minimise the consequences of the threat event and are risk mitigation actions, or risk treatments. Of course, some responses (like avoidance or insurance) are by this time out of the question – as the threat has materialised. Faced with too many changes, or too big a change in circumstances, even the most responsive leader can be overwhelmed, and the process fails with the objective not achieved.
A wise leader then (at least) learns from experience, and establishes processes to minimise the likelihood of similar threat events occurring (prevention), to detect when they occur (detection) and immediately respond and mitigate the consequences when they occur regardless (correction). These preplanned and pre-established processes of prevention, detection and correction are controls.
===Rating a Risk===
All controls have a cost – whether measured in money, time, tactical advantage, etc. Too much control may make the achievement of the objective unviable. The leader may judge that some threats experienced are unlikely to occur again (for example, Year 2000 date risk was a one-off, as the year 2000 is unlikely to occur again in this timeline!). Other threats will be considered almost certain – such as a sunny day melting an unrefrigerated cargo of ice cream. The probability that a threat will eventuate is its likelihood. Where the likelihood is very low, the leader may judge it is not worth the cost of controlling.
Likewise, some consequences of threat events are so minor that they can be ignored, while others are catastrophic to the objective. This judgement is the impact rating of the consequence.
The likelihood of a threat event, combined with its level of impact on the achievement of the objective, constitutes the inherent risk to the achievement of the objective.
Although not yet part of the standard, over recent years an additional rating parameter has been argued for consideration: "velocity". The velocity of a risk is the speed with which a causal event translates into an outcome. Velocity is rated inversely against time, so the shorter the time it takes for a causal event to result in a specific impact, the higher the velocity.
Conversely, if we are going to consider a time-based measure for the onset of a risk event, we should allow for a velocity measure on the mitigation side of the equation. Here we have two types to consider: pre-event controls (such as training and documented manuals) have a velocity measure that acts during a different phase from that during which the impact velocity is measured. The control velocity of specific interest in mitigating impact velocity is that of the reactive controls – Event (or Error) Detection and Event (or Error) Correction controls.
<blockquote>
'''NOTE:''' Controls fall into one of three groups - Prevention, Detection and Correction. The first group identifies proactive controls (although some control steps in a given strategy of controls may be reactive even here), while the latter two describe purely reactive controls. Note that under this view the process of setting up a reactive control system and training the participants and systems in the operation of that control is itself a proactive step and hence a Preventive control, while the operation of the actual control itself is, to the triggering causal event, reactive.
</blockquote>
A similar case may, on the face of it, be advanced for direct estimation of risk frequency. Specifically, such a measure is one of the frequency of a causal event – with an assessed likelihood of triggering at each cycle. The amount of time required for a single cycle from Causal Event A<sub>0</sub> to the next potential occurrence of Causal Event A at time 1 (i.e. A<sub>1</sub>) is the velocity of the likelihood of a causal event being once again tested. On this basis we could again track the velocity of the likelihood.
A reasonably strong case might also be advanced that likelihood measures carry an implied frequency measurement, as people tend to rate things as more certain to occur if they are almost always occurring than when rarely experienced, even if the causal event actually does occur on those rare occasions. In this case it is argued that rating likelihood velocity in fact double-weights the likelihood rating.
This author leans to the former view. If we are separating some velocities from their coupled ratings, we should consistently apply the logic of separation to them all. On that basis the probability or reliability estimates are consistently cleansed of time subjectivity, and thence become an instantaneous rating rather than a multi-period rating of the probability, impact or dampening (control mitigation rating). In database design terms, the rating measures are normalised with respect to time. The obvious benefit is that the greater the consistency among the properties (functional and data), if not the content of those properties, the greater the reliability with which the items can be combined to give a result that varies consistently with its inputs (in this case a risk rating). If some of the inputs are themselves functions of other inputs (such as time), the result of combining the various components of the risk formula will not appear to move consistently with the inputs.
A further benefit of separating velocity information is the colour it might bring to the risk analysis. One can picture a risk model where the assessment of an otherwise well-rated risk – on the basis of likelihood velocity (think: frequency), impact velocity (think: "How quickly will this hit us?"), preventive control velocity (think: "How long will it take for the training to be completed?"), detection control velocity (think: "How quickly will we know that the wheels have fallen off?") and correction control velocity (think: "How quickly will we have cleaned up the mess?") – might reveal some fascinating structural problems in a control system. Consider a 12-month wait for detection controls to be in place against a high-to-medium impact of an event happening every week; if those detection controls then tell us only at the end of a quarter that a problem occurred which will take 6 months to fix, we might like to know – even though individually all these controls got the highest ratings in terms of effectiveness. Of course, if our risk formula dealt with these items properly as part of its model, we would not have a well-rated risk with such problems!
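The structural check suggested above can be sketched as a comparison of times: even when every control is rated "effective", slow detection or correction relative to a fast, frequent risk is a structural problem. The field names, the example figures and the expression of velocities as onset times in days (shorter time = higher velocity) are all assumptions for illustration:

```python
# Hypothetical velocity cross-check; all values are times in days,
# so a LOWER number means a HIGHER velocity.

def velocity_warnings(risk):
    """Return structural warnings for one risk's velocity profile."""
    warnings = []
    # Detection slower than the impact onset: we learn of the damage too late.
    if risk["detect_days"] > risk["impact_onset_days"]:
        warnings.append("detection slower than impact onset")
    # Correction slower than event recurrence: unfixed occurrences accumulate.
    if risk["correct_days"] > risk["recurrence_days"]:
        warnings.append("correction slower than event recurrence")
    return warnings

# The example from the text: a weekly event, quarterly detection,
# six-month correction - every control "effective", yet structurally broken.
weekly_event = {
    "impact_onset_days": 7,   # impact velocity: hits within a week
    "recurrence_days": 7,     # likelihood velocity: can recur weekly
    "detect_days": 90,        # detection control reports quarterly
    "correct_days": 180,      # correction takes six months
}
```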
Expressed as a formula where f() means a function of the items in parentheses, the risk equation with all these potential inputs is then:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(C<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;C
:Means Mitigating Strategies and Controls effectiveness rating mitigating causal events and consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for mitigating each impact and possibly some to all causal events
This formula says nothing more than that the risk rating is a function of eight variables – whole-of-risk likelihood, likelihood velocity, impact and impact velocity, mitigated by whole-of-risk control effectiveness-reliability working over three velocities: prevention control velocity, detection control velocity and correction control velocity. In turn, the value supplied for each of these ratings is itself a function mapping the assessed value of the rating to a normalised value (such as the range of reals from -1 to 1, or a shared 5-point scale, etc.).
The weakness in this formula lies in the consolidation of the three control groups into a single control rating for the purposes of the risk function itself (thus hiding the relationship between the control group velocities and the control group ratings). Separating the control groups gives:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for mitigating each impact and possibly some to all causal events
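One possible concrete instantiation of this expanded formula, for illustration only: the normalisation of 1–5 ratings onto [0, 1], the multiplicative exposure term and the equal weighting of the three control groups are all assumptions; the text requires only that the rating be ''some'' function of these inputs.

```python
# A hypothetical instantiation of the expanded risk formula.
# All inputs are assumed to be ratings on a 1..5 scale (0 = absent control).

def f(rating, scale=5.0):
    """Normalise a rating onto [0, 1] - one choice of the f() in the text."""
    return rating / scale

def risk_rating(L, LV, I, IV, CP, CD, CC, CPV, CDV, CCV):
    """R_i as a function of likelihood, impact, their velocities, and the
    three control-group effectiveness ratings with their velocities."""
    exposure = f(L) * f(LV) * f(I) * f(IV)
    # Each control group's effectiveness is dampened by its own velocity,
    # then the three groups are averaged into a single mitigation factor.
    mitigation = (f(CP) * f(CPV) + f(CD) * f(CDV) + f(CC) * f(CCV)) / 3.0
    return exposure * (1.0 - mitigation)
```

With maximal likelihood, impact and velocities and no controls the rating is 1.0; with fully effective, fully fast controls it falls to 0.0, so the output moves consistently with its inputs, as the text requires.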
<br>
===From Risk Response To Risk Management===
Faced with a similar objective at another time, the prudent leader moves from re-action to pre-action. He applies his own and others' past experience, "common sense", and deductive reasoning when identifying the nature and causes of potential threat events and their consequences. He makes judgements as to the likelihood of these identified threats, and judgements as to the degree of impact arising from the consequences. This process is risk identification and assessment.
Comparing this assessment to the organisation's risk appetite, he determines a range of risk responses, treatments or controls. With the shift to pre-action (more commonly described as proactive management), the leader's options widen compared to the earlier reactive state. By preplanning the risk profile he is able to consider avoidance (just don't do it!), risk sharing (insurance) and threat prevention (training) as options in the risk mitigation armoury. Further, the costs of each mitigation strategy can be considered against the benefits expected from the achievement of the objective, and the most effective, efficient and economic ones chosen.
In all cases a threat has a "tell-tale" which can be used to detect that the threat has eventuated, or that the likelihood of a threat has changed. The controls required in this case are detection controls. As with the other pre-action choices, detective controls are most advantageous before the event occurs – once it has occurred they will generally tell you what you already know. This shifting assessment of risk, based on changes in likelihood over time, is the current risk.
Implementing detection controls allows the leader to defer the implementation (if not the planning, design and establishment) of other reactive controls, thus delivering a degree of certainty over the costs of mitigation at each point in a project, under a variety of circumstances and levels of current risk.
Once the controls (or risk mitigation plan) are applied to the assessed inherent risk of the objective, the result is the residual risk – that portion of the inherent or current risk that remains after the controls have been applied.
Risk Management is about applying a structured thought process to identifying and managing such risks.
In one form or another, every leader undertakes risk management from the minute they establish a political ideology, manifesto, business vision, organisational mission, or business or political objective. Without a plan – however loosely defined – the objective is unlikely to be achieved. That plan is a map to managing risks of non-achievement of the objective – starting with the most obvious risk: "inaction".
While Compliance Management is about a governance process for managing adherence to internally and externally known standards, policies, procedures, and controls, Risk Management is an approach to governance that aims to identify which plans, standards, policies, procedures, and controls are required, how important each part is to the purpose, and how you will know when additional actions are required. Risk Management is a systematic process of making a realistic evaluation of the true level of risks to your purpose, and mitigating those risks that exceed your risk appetite in the most efficient, effective and economic manner possible.
==What Is Enterprise Risk Management?==
Enterprise Risk Management takes the concepts outlined above at the project or single-objective level and applies them across the enterprise, government, or society (as appropriate). Enterprise risk management distinguishes itself from project risk management by its aims:
* Firstly, it aims to reduce duplication of risk management planning and risk mitigation strategies by facilitating cross-organisational sharing of control frameworks, management expertise, and resources.
* Secondly, it aims to minimise contradictory, counterproductive and mutually exclusive risk management strategies by facilitating enterprise-wide knowledge of the risk profile of the organisation.
* Thirdly, it aims to inform the governance team of their true organisation-wide position on a continuous and instantaneous basis.
* Fourthly, it aims to forecast the risk profile of the organisation within, at least, the decision cycle of the governance team.
==What is Competitive Risk Management?==
So far, we have considered risk management as a stability governance tool for assisting the achievement of identified objectives. In essence, under this view it is a defensive strategy. The scope of governance arguably extends beyond maintenance of environmental stability and achievement of defined near-term deadlines and objectives, to the identification of the correct objectives (those that succeed on some measure), and of longer term aspirational objectives such as "more profit" or, in social measures, "higher average literacy".
This shift implies two additional dimensions should be considered:
#A risk may also be an opportunity, and an impact may be both positive and negative. Where the impact is positive for the organisation, the correct corrective control response is in fact to augment the effect (such as by adjusting the causal states of other risks (opportunities)). The overall implication is that to accommodate opportunity the risk rating scheme needs to be balanced around 0 (meaning minimum risk and minimum opportunity). Whether this is best done with a positive scale and a negative scale, or with a linear scale with a floating normal line, is, I think, an implementation question at this stage.
#A risk/opportunity may have a group of controls (strategies) intended alternately to mitigate (Prevent, Detect, Correct) and augment (Focus, Sense, Enable) a risk in some way. Note that we are expanding our control groups from three to six. This is necessary where two impact rating scales are used (an opportunity scale and an impact scale). If only a single monotonic impact scale were used, e.g. "really-good to negligible to really-bad", we could possibly escape with four groups: Focus, Prevent, Detect, Correct. Focus is the opportunity's version of Prevent. The difference is that in the case of a risk, an effective preventive control reduces the residual likelihood (if not the inherent likelihood) of a causal event, while in an opportunity we want precisely the opposite outcome. Thus we need to track these separately. In the case of the two-scale system we need the "opportunity" equivalents of the detection and correction control functions separated as well.
In competitive risk management we utilise the techniques of "defensive" risk management as a method to inform competitive strategy. The same methods that are applied to determine and manage or avoid your risks can be applied to:
#determine, induce and exploit your opportunities, and select the opportunities most likely to be successfully exploited; and
#determine and trigger your competitor's risks, identifying where they are either most exposed or where their responsive mitigation costs will be greatest. In this use there is an implied additional measure/counter-measure relationship between controls, where an augmentation strategy is defined that is designed to detect or counter another mitigation strategy.
In competitive risk management we therefore look to identify and exploit our opportunities and the weaknesses in others through application of risk management techniques. Such an application of the method is likely to be most effective where knowledge of the competitor or competing industry approaches perfection, and the accuracy of the model used approaches perfect accuracy. There are interesting implications for game theory where all participants in a market use equivalently competitive risk management methods and have equivalently perfect knowledge.
Competitive risk management is therefore a strategy setting process. In both cases the analysis expands the colour of the control analysis part of our formula described in the previous section. Specifically, the changes required are to accommodate additional ratings and velocities that allow risk and opportunity to be treated in a single function (e.g. one possibly describing a parabolic or logarithmic curve as its output).
Our revised formula for competitive risk then becomes:
RO<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CF<sub>i</sub>), f(CS<sub>i</sub>), f(CE<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CFV<sub>i</sub>), f(CSV<sub>i</sub>), f(CEV<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;RO
:is expressed on a single scale such as "really-good to negligible to really-bad", or as a complex value with two scales: a rating (high to negligible) and a binary (two position) scale - "Opportunity or Risk"
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;CF
:Means Enabling Strategies and Controls effectiveness rating at focussing causal events.
;CS
:Means Enabling Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CE
:Means Enabling Strategies and Controls effectiveness rating for increasing the likelihood of further causal events and enabling consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CFV
:Means Focus Control Velocity Rating for each causal event
;CSV
:Means Sensing Control Velocity Rating for each causal event and possibly some to all impacts
;CEV
:Means Enabling Control Velocity Rating for each enabling control enabling impacts and possibly some to all causal events
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating control for all impacts and possibly mitigating some to all causal events
<br>
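One way to sketch the balanced risk/opportunity scale in code. The combination rule, weights and function name below are illustrative assumptions, not the formula the text prescribes; velocity terms are omitted for brevity, and all ratings are assumed normalised to [0, 1]:

```python
def risk_opportunity(L, I, CP, CD, CC, CF, CS, CE):
    """Illustrative RO score on [-1, 1]: negative means net risk (exposure),
    positive means net opportunity, and 0 means minimum risk and minimum
    opportunity - the 'floating normal line' option discussed above."""
    # Downside exposure, dampened by the three mitigating control groups.
    threat = (L * I) * (1 - (CP + CD + CC) / 3)
    # Upside, created by the three enabling control groups (Focus, Sense, Enable).
    upside = (L * I) * ((CF + CS + CE) / 3)
    return upside - threat

# A material event with strong mitigation and only modest enablement
# comes out slightly net-risk:
ro = risk_opportunity(L=0.6, I=0.7, CP=0.8, CD=0.7, CC=0.6, CF=0.3, CS=0.2, CE=0.1)
print(round(ro, 3))  # → -0.042
```

The alternative two-scale design mentioned in the text would instead return a pair: a magnitude rating plus a binary "Opportunity or Risk" flag.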
==The Evolution of the Risk Management Standard==
In Australia, a team of experienced risk management practitioners was assembled over two decades to codify a standard for risk management as it had been (and was being) developed and deployed in Australia and New Zealand. That codification was initially released by Standards Australia as AS/NZS 4360:1995, revised as AS/NZS 4360:1999 and revised again in its current version as AS/NZS 4360:2004. You can access the standard via the [http://infostore.saiglobal.com/store/Details.aspx?DocN=AS0733759041AT SAI Risk Management Portal]. While still very much in its infancy as a governance tool, and immature as a management science, risk management has rapidly been adopted across the world and is now codified into an international standard, ISO 31000:2009 (October 2009), supported by ISO Guide 73:2009 - both largely based on the AS/NZS standard.
==The Classical Approach==
In classical risk management - with respect to a given focus: a business, a business objective, an asset, etc. - we are told to identify the risks first, so that they can be properly managed. In its classical form, risk management asks, and attempts to answer, three questions:
*What can go wrong?
*What can I do to prevent it?
*What do I do if it happens?
You are advised to develop a risk register to document each potential problem, its level of seriousness, what is required to fix it, and who will fix the problem, and then to monitor progress.
There are essentially four things you can do with a risk. We will call them the four T's:
* Tolerate it (by accepting or ignoring a risk - this is where the profit lies)
* Treat it (by actively re-mediating or controlling it)
* Transfer it (by insuring it, perhaps better described as "sharing it")
* Terminate it (by exiting the business that incurs it)
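As a minimal sketch (the field names and example values below are illustrative, not drawn from any particular standard or product), a risk register entry and the four T's might be modelled as:

```python
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    """The four T's."""
    TOLERATE = "accept or ignore the risk"
    TREAT = "actively remediate or control it"
    TRANSFER = "insure it / share it"
    TERMINATE = "exit the activity that incurs it"

@dataclass
class RegisterEntry:
    description: str      # what can go wrong?
    seriousness: int      # e.g. 1 (negligible) to 5 (catastrophic)
    remediation: str      # what is required to fix it
    owner: str            # who will fix the problem
    response: Response    # chosen T
    status: str = "open"  # monitored until closed

entry = RegisterEntry(
    description="Key supplier fails to deliver inputs on time",
    seriousness=4,
    remediation="Qualify a second supplier; hold two weeks' buffer stock",
    owner="Procurement manager",
    response=Response.TREAT,
)
print(entry.response.name)  # → TREAT
```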
It is critical that leaders understand that risk management is NOT about avoiding risk, but about managing it.
==The Evolution of a Risk Management Thought==
The concept of risk and reward management are not new to mankind. The walls of cities and castles were early forms of risk management, and Hadrian's Wall, Agricola's Wall, Antonine Wall, and the Great Wall of China are dramatic statements of risk containment on a social scale.
History is littered with authors and thinkers exploring the relationship between risk awareness, risk exploitation, active management and outcomes. Military and political strategists have employed the concepts underpinning modern risk management for centuries. The writings of both military and political strategists such as Sun Tzu ("The Art of War"), Carl von Clausewitz ("On War"), Niccolò Machiavelli ("The Prince", "The Art of War"), and Miyamoto Musashi ("The Five Rings") are all examples of the practical application of risk awareness in strategy formation. To varying extent these works all encourage an awareness of one's own and one's opponent's weaknesses, and the mitigations and exploitation of the same.
Perhaps what is new is the codification of the process of identifying, measuring, assessing, and responding to risk laid down in the more recent writings. It would be naive, however, to consider that risk management, per se, is new. The difference between a successful manager and an unsuccessful one has always been their ability to see the potential reward in an opportunity and strike the correct balance between ignoring, avoiding, transferring and mitigating risks. Too much risk avoidance means opportunities are not exploited; too much control or insurance means there is no profit left from the risky activity; and too much ignorance means that eventually the strategy's angel will become history's fool.
In the absence of a formalised approach to risk management, the successful business leader is known as lucky. In truth, the success is probably more due to that leader's accident of DNA and life experience leading to instinctively correct risk judgements. It is possibly this instinct, more than anything else, that justifies executive salary differentials.
There is an important observation to be made from the historic context of risk management theory. Currently risk management professionals tend to view the discipline as an extension of strategy achievement, yet historically, risk management has been as much about strategy identification and formation as about implementation.
Good risk management looks both inward and outward. By this I mean that risk management can be applied both to minimising your chance of failure and to maximising your competitor's chance of failure. The essence of the military strategist's thinking is to identify the weaknesses of the opponent and exploit them to your own advantage. Application of the principles of risk management can enable you not only to identify the opponent's weaknesses, but to identify the probable strategies they will employ to manage the risks arising from those weaknesses, and hence better inform your planners about potential strategies to employ.
Over the last 50 years a number of frameworks addressing risk management with respect to governance have emerged out of the experience of the different professional groups involved in strategic management, asset protection, public accountability, finance and risk. These groups include:
* Internal Audit - focused on control system reliability
* External Audit - focused on true and fair representation of financial position on a going concern basis
* Actuarial Science - focused on the pricing of risk for insurance
* Investment banking - focused on the pricing of risk for portfolio management, hedging, capital fees and adequacy
* Risk Management - focused on management of risk to strategic and tactical outcomes on an enterprise and societal basis
Setting aside the military and political authors, among the business community, some of the earliest work in risk management arose from the financial advisory community looking for models to minimise the downside risks to financial products investment.
==A Mathematical Basis To Risk Measurement==
As early as 1952, Harry M Markowitz published his paper "Portfolio Selection" in the Journal of Finance, exploring the advantages of risk diversification through balanced portfolio selection. The essence of portfolio theory is that risk essentially expresses the potential for a negative return (financial loss), and that this potential can be reduced through diversification: an investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly positively correlated (correlation coefficient r < 1).
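A small numerical sketch of this diversification effect (the volatilities and weights are hypothetical): for two assets the portfolio variance is w₁²σ₁² + w₂²σ₂² + 2·w₁w₂ρσ₁σ₂, so any correlation ρ below 1 yields a portfolio risk below that of the perfectly correlated case.

```python
import math

def portfolio_sd(w1, sd1, w2, sd2, rho):
    """Standard deviation of a two-asset portfolio with correlation rho."""
    var = (w1 * sd1) ** 2 + (w2 * sd2) ** 2 + 2 * w1 * w2 * rho * sd1 * sd2
    return math.sqrt(max(var, 0.0))  # guard against tiny negatives from rounding

# Two assets, each with 20% volatility, held 50/50:
for rho in (1.0, 0.5, 0.0, -1.0):
    print(rho, round(portfolio_sd(0.5, 0.20, 0.5, 0.20, rho), 4))
# rho = 1.0 gives 0.2 (no diversification benefit); rho = -1.0 gives 0.0
```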
To a greater or lesser extent the professional bodies, standards organisations and government agencies have responded with guidelines and standards for the measurement, application, response and management of risk as it applies to their specific problem domains. In 1978 the Institute of Internal Auditors - the international professional body of the Internal Audit profession - issued its Standards for the Professional Practice of Internal Auditing (SPPIA). In one of the earliest standards-based references to risk based management, the standards included standard 320: "Compliance with Policies, Plans, Procedures, Laws and Regulations". The statement determined that "Internal auditors should review the systems established to ensure compliance with policies, plans, procedures, laws and regulations which could have a significant impact on operations and reports, and should determine whether the organisation is in compliance".
==Alternative Standards and Views of Risk Management==
Among the definitive pronouncements on risk management are:
* The King Report on Corporate Governance for South Africa (SA King II - 2002)
* A Risk Management Standard (RMS 2004) by the Federation of European Risk Management Association (UK FERMA)
* Australian/New Zealand Standard 4360—Risk Management (A/NZ 1995, 1999, 2004)
* COSO's Enterprise Risk Management - Integrated Framework
* The Institute of Management Accountants' (IMA) "A Global Perspective on Assessing Internal Control over Financial Reporting" (ICoFR)
* Basel II
* Standard & Poor’s and ERM
* ISO 31000:2009
Building on the work of many years, the middle of the first decade of the millennium saw a succession of enterprise risk management (ERM) related pronouncements. AS/NZS 4360:2004 defined the risk management process as the "'''systematic application of management policies, procedures and practices to the tasks of communicating, establishing the context, identifying, analysing, evaluating, treating, monitoring and reviewing risk'''". For the financial sector, the earlier BASEL I standard was superseded by BASEL II, which closely mirrored the view of AS/NZS 4360.
Expanding on an earlier Internal Control Framework from the early 1990s, the Committee of Sponsoring Organisations of the Treadway Commission (COSO) released the 'Enterprise Risk Management (ERM) - Integrated Framework', which attempted to map the COSO framework that formed the motivational basis for the US Sarbanes-Oxley compliance legislation into a broader enterprise risk management framework. The COSO/ERM framework defined enterprise risk management as:
* A process, ongoing and flowing through an entity,
* Effected by people at every level of an organisation,
* Applied in strategy setting,
* Applied across the enterprise, at every level and unit, and includes taking an entity-level portfolio view of risk,
* Designed to identify potential events that, if they occur, will affect the entity and to manage risk within its risk appetite,
* Able to provide reasonable assurance to an entity’s management and board of directors,
* Geared to achievement of objectives in one or more separate but overlapping categories.
The standards enjoy a shared purpose to improve the predictability of business outcomes, but differ significantly in how that certainty is to be improved. While 4360 describes the process for management of risk, BASEL II mandates that a firm's operational risk management (ORM) system must be "conceptually sound and implemented with integrity", but stops short of defining the form or process of the ORM. BASEL II does specify that the ORM should be maintained by an independent operational risk management function, and that it is to consist of at least "strategies, methodologies and risk reporting systems". It identifies that the purpose of the ORM is to "identify, measure, monitor and control/mitigate operational risk".
Under BASEL II, the ORM systems should be:
* “credible and appropriate”,
* “well reasoned, well documented”,
* “transparent and accessible”, and
* capable of being validated by audit.
Among the failings of BASEL II is the lack of definition of these key terms, which, in a sense, is where AS/NZS 4360 and the COSO ERM Framework come in. The latter standards provide a framework under which a credible, reasoned, transparent, documented and verifiable risk management model can be established.
AS/NZS 4360 and COSO do not eliminate failure in the ORM/ERM, however, as in their implementation there is still considerable subjectivity in risk identification and assessment, and within the process documented by the standard there is no mechanism for proving or measuring "completeness". They do, however, populate the next level of the BASEL II obligation.
This problem of "completeness" in ERM frameworks should not be underestimated. It is present in all current risk management standards and is possibly a key reason for failure in ERM frameworks. We shall explore approaches to solving this problem in later papers.
Owing to their differing origins the three standards employ slightly different terminology for shared ideas:
* AS/NZS 4360 refers to ‘Risk Treatment’, COSO to ‘Risk Response’ and Basel II uses ‘Risk Mitigation’.
While the seven ‘elements’ of AS/NZS 4360:2004 framework do not align precisely with the eight ‘components’ of the COSO process, the ‘end to end’ risk management process is the same.
<table cellpadding="10" >
<tr>
<th>
AS/NZS 4360: 2004
Framework
</th>
<th>
COSO ERM - Integrated
Framework
</th>
<th>
BASEL II ORM
Framework
</th>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Internal environment
</td>
<td>
</td>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Objective setting
</td>
<td>
</td>
</tr>
<tr>
<td>
Identify risks
</td>
<td>
Event identification
</td>
<td>
Identify
</td>
</tr>
<tr>
<td>
Analyse risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Evaluate risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Treat risks
</td>
<td>
Risk response and control activities
</td>
<td>
Control/mitigate
</td>
</tr>
<tr>
<td>
Monitor and review
</td>
<td>
Monitoring
</td>
<td>
Monitor
</td>
</tr>
<tr>
<td>
Consult and communicate
</td>
<td>
Information and communication
</td>
<td>
</td>
</tr>
</table>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
==What Is Risk Management?==
===Risks, Causes & Consequences===
Risks to your operations and assets are a permanent and inescapable aspect of existence. Put simply, if you have an objective, the possibility exists that your objective may not be achieved. That possibility is risk.
Inputs required for your objective may not be available when required, or their cost may make the objective unviable, or the social or technical assumptions may be invalidated, etc. These are threats, or causes of objective failure, and therefore causes of risk. Threats exist - some latent and some active - but all are potential causes of the failure to achieve your objective (with varying likelihoods).
Further, it may be that failure to achieve the objective, or preserve the asset may have impacts far beyond the loss of the expected benefit to be derived, or value of the asset lost. Those impacts are the consequences. For example, at the individual business level, failure to achieve a strategic objective may result in failure of the business, while on the international stage, failure to achieve a diplomatic objective may impact the society detrimentally for generations to come, and failure to protect a critical military or hazardous materials technology may result in extensive loss of life.
Lastly, a risk may not be a bad thing - it might be a good thing, more commonly known as "an opportunity". Likewise, an impact may not just range from "nothing" to "really bad", but from "really good" through "nothing" to "really bad". In its fullest extent risk management covers both opportunities and exposures. Most of the following discussion will consider risk management in its more common guise as managing exposures, but when we consider "Competitive Risk Management" we will once again expand the definition.
<br>
===Risk Appetite===
The degree to which these undesired outcomes are more or less certain will affect your degree of concern about them. At the extreme ends, everybody may have pretty much the same response: an undesired outcome that is virtually certain to occur will probably be judged as unacceptable, while an undesired outcome that is virtually certain not to occur will probably be judged as acceptable. Between these extremes each individual, organisation, and society will have differing determinations of acceptability. This determination is also likely to vary with the nature of the undesired outcome (for example, a 50% chance of the loss of thousands of lives is generally considered less acceptable than a 50% chance of the loss of ten dollars). This variance in judgement is the risk appetite - literally your or your organisation's willingness to passively accept the possibility of a particular type of undesired outcome.
===Risk Response, Mitigation and Control===
The reactive leader, when faced with changed circumstances, will rapidly form a response. These responses are designed to minimise the consequences of the threat event and are risk mitigation actions, or risk treatments. Of course, some responses (like avoidance or insurance) are by this time out of the question - as the threat has materialised. Faced with too many changes, or too big a change in circumstances, even the most responsive leader can be overwhelmed, and the process fails with the objective not achieved.
A wise leader then (at least) learns from experience, and establishes processes to minimise the likelihood of similar threat events occurring (prevention), to detect when they occur (detection) and immediately respond and mitigate the consequences when they occur regardless (correction). These preplanned and pre-established processes of prevention, detection and correction are controls.
===Rating a Risk===
All controls have a cost - whether measured in money, time, tactical advantage, etc. Too much control may make the achievement of the objective unviable. The leader may judge that some threats experienced are unlikely to occur again (for example, Year 2000 date risk was a one-off, as the year 2000 is unlikely to occur again in this time line!). Other threats will be considered almost certain - such as a sunny day melting an unrefrigerated cargo of ice cream. The probability that a threat will eventuate is its likelihood. Where the likelihood is very low, the leader may judge it is not worth the cost of controlling.
Likewise, some consequences of threat events are so minor that they can be ignored, while others are catastrophic to the objective. This judgement is the impact rating of the consequence.
The likelihood of a threat event, combined with its level of impact on the objective's achievement, constitutes the inherent risk to the achievement of the objective.
Although not yet part of the standard, over recent years an additional rating parameter has been argued for consideration: "Velocity". The velocity of a risk is the speed with which a causal event translates into an outcome. Velocity is rated inversely against time, so the shorter the time it takes for a causal event to result in a specific impact, the higher the velocity.
Conversely, if we are going to consider a time-based measure for the onset of a risk event, we should allow for a velocity measure on the mitigation side of the equation. Here we have two types to consider: pre-event controls (such as training, and documented manuals) have a velocity measure that acts during a different phase from that during which the impact velocity is measured. The control velocity of specific interest to mitigating impact velocity is that of the reactive controls - Event (or Error) Detection and Event (or Error) Correction controls.
<blockquote>
'''NOTE:''' Controls fall into one of three groups - Prevention, Detection and Correction. The first group identifies proactive controls (although some control steps in a given strategy of controls may be reactive even here), while the latter two describe purely reactive controls. Note that under this view the process of setting up a reactive control system and training the participants and systems in the operation of that control is itself a proactive step and hence a Preventive control, while the operation of the actual control itself is, to the triggering causal event, reactive.
</blockquote>
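As a sketch of the inverse-time idea (the normalisation formula below is an illustrative assumption, not part of any standard), a velocity rating might map the lag between cause and outcome onto the interval (0, 1]:

```python
def velocity(days_to_effect: float) -> float:
    """Inverse-time velocity rating: 1.0 for an immediate effect,
    approaching 0 as the lag between causal event and outcome grows."""
    return 1.0 / (1.0 + days_to_effect)

print(velocity(0))   # → 1.0 (the impact is immediate)
print(velocity(30))  # a month's lag rates much lower
```

The same mapping could rate control velocities: a detection control that reports within a day scores near 1, one that reports quarterly scores near 0.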
A similar case may, on the face of it, be advanced for direct estimation of Risk Frequency. Specifically, such a measure is one of the frequency of a causal event - with an assessed likelihood of triggering at each cycle. The amount of time required for a single cycle from Causal Event A<sub>0</sub> to the next potential occurrence of Causal Event A at time 1 (i.e. A<sub>1</sub>) is the velocity of the likelihood of a causal event being once again tested. On this basis we could again track the velocity of the likelihood.
A reasonably strong case might also be advanced that likelihood measures carry an implied frequency measurement, as people tend to rate things as more certain to occur if they are almost always occurring than when rarely experienced, even if the causal event actually occurs only on those rare occasions. In this case it is argued that rating likelihood velocity in fact double-weights the likelihood rating.
This author leans to the former view. If we are separating some velocities from their coupled ratings, we should consistently apply the logic of separation to them all. On that basis the probability or reliability estimates are consistently cleansed of time subjectivity, and thence become an instantaneous rating rather than a multi-period rating of the probability, impact or dampening (control mitigation rating). In database design terms, the rating measures are normalised with respect to time. The obvious benefit is that the greater the consistency among the properties (functional and data), if not the content of those properties, the greater the reliability that the items can be combined to give a result that varies consistently with its inputs (in this case a risk rating). If some of the inputs are themselves functions of other inputs (such as time), the result of combining the various components of the risk formula will not appear to move consistently with the inputs.
A further benefit of separating velocity information is the colour it might bring to the risk analysis. One can picture a risk model where the assessment of an otherwise well-rated risk - on the basis of likelihood velocity (think: frequency), impact velocity (think: "How quickly will this hit us?"), preventive control velocity (think: "How long will it take for the training to be completed?"), detection control velocity (think: "How quickly will we know that the wheels have fallen off?") and correction control velocity (think: "How quickly will we have cleaned up the mess?") - might reveal some fascinating structural problems in a control system. Consider a 12 month wait for detection controls to be in place for a high-to-medium impact event happening every week, where those detection controls then tell us only at the end of a quarter that a problem occurred that will take 6 months to fix. We might like to know this - even though individually all these controls got the highest ratings in terms of effectiveness. Of course, if our risk formula dealt with these items properly as part of its model we would not have a well-rated risk with such problems!
Expressed as a formula where f() means a function of the items in parentheses, the risk equation with all these potential inputs is then:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(C<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;C
:Means Mitigating Strategies and Controls effectiveness rating mitigating causal events and consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each corrective control mitigating each impact and possibly some to all causal events
This formula says nothing more than that the risk rating is a function of eight variables: whole-of-risk likelihood, likelihood velocity, impact and impact velocity, mitigated by whole-of-risk control effectiveness-reliability working over three velocities - prevention control velocity, detection control velocity and correction control velocity. In turn, the value supplied for each of these ratings is itself a function mapping the assessed value of the rating to a normalised value (such as the range of reals from -1 to 1, or a shared 5 point scale, etc.).
The weakness in this formula lies in the consolidation of the three control groups into a single control rating for the purposes of the risk function itself (thus hiding the relationship between the control group velocities and the control group ratings). Separating the control groups gives:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each corrective control mitigating each impact and possibly some to all causal events
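As a concrete illustration, the expanded formula above can be sketched in code. This is a minimal sketch only: the normalisation of a shared 5 point scale onto [0, 1], and the multiplicative combination of exposure and mitigation, are assumptions chosen for readability - neither the article nor the standard prescribes a particular function.

```python
# Illustrative sketch of R_i = f(L, LV, I, IV, CP, CD, CC, CPV, CDV, CCV).
# The normalisation and the combination rule below are assumptions.

def normalise(rating, scale_max=5):
    """Map a 1..scale_max ordinal rating onto the range 0..1."""
    return (rating - 1) / (scale_max - 1)

def risk_rating(l, lv, i, iv, cp, cd, cc, cpv, cdv, ccv):
    """Combine the ten rating inputs for one risk.

    Likelihood and impact (with their velocities) raise the rating;
    the three control groups, each weighted by its velocity, dampen it.
    """
    exposure = normalise(l) * normalise(lv) * normalise(i) * normalise(iv)
    mitigation = (normalise(cp) * normalise(cpv)
                  + normalise(cd) * normalise(cdv)
                  + normalise(cc) * normalise(ccv)) / 3
    return exposure * (1 - mitigation)

# A likely, fast-moving, high-impact risk with mediocre, slow controls
r = risk_rating(l=4, lv=4, i=5, iv=3, cp=2, cd=3, cc=2, cpv=3, cdv=2, ccv=2)
```

Whatever combination rule is chosen, the point of the sketch holds: each input enters the result only through its own normalising function, so the rating moves consistently with its inputs.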
<br>
===From Risk Response To Risk Management===
Faced with a similar objective at another time, the prudent leader moves from re-action to pre-action. He applies his own and others' past experience, "common sense", and deductive reasoning when identifying the nature and causes of potential threat events and their consequences. He makes judgements as to the likelihood of these identified threats, and judgements as to the degree of impact arising from the consequences. This process is risk identification and assessment.
Comparing this assessment to the organisation's risk appetite he determines a range of risk responses, treatments or controls. With the shift to pre-action (more commonly described as proactive management), the leader's options widen when compared to the earlier reactive state. By preplanning the risk profile he is able to consider avoidance (just don't do it!), risk sharing (insurance) and threat prevention (training) as options in the risk mitigation armoury. Further, the costs of each mitigation strategy can be considered against the benefits expected from the achievement of the objective, and the most effective, efficient and economic ones chosen.
In all cases a threat has a "tell-tale" which can be used to detect that the threat has eventuated, or that the likelihood of a threat has changed. The controls required in this case are detection controls. As with the other pre-action choices, detective controls are most advantageous before the event occurs - once it has occurred they will generally tell you what you already know. This shifting assessment of risk, based on changes in likelihood over time, is the current risk.
Implementing detection controls allows the leader to defer the implementation (if not the planning, design and establishment) of other reactive controls, thus delivering a degree of certainty over the costs of mitigation at each point in a project, under a variety of circumstances and levels of current risk.
Once the controls (or risk mitigation plan) are applied to the assessed inherent risk of the objective, the result is the residual risk - that portion of the inherent or current risk that remains after the controls have been applied.
Risk Management is about applying a structured thought process to identifying and managing such risks.
In one form or another, every leader undertakes risk management from the minute they establish a political ideology, manifesto, business vision, organisational mission, or business or political objective. Without a plan - however loosely defined - the objective is unlikely to be achieved. That plan is a map to managing risks to the non-achievement of the objective - starting with the most obvious risk: "inaction".
While Compliance Management is about a governance process for managing adherence to internally and externally known standards, policies, procedures, and controls, Risk Management is an approach to governance that aims to identify what plans, standards, policies, procedures, and controls are required, how important each part is to the purpose, and when additional actions will be required. Risk Management is a systematic process of making a realistic evaluation of the true level of risks to your purpose, and mitigating those risks that exceed your risk appetite in the most efficient, effective and economic manner possible.
==What Is Enterprise Risk Management?==
Enterprise Risk Management takes the concepts outlined at the project or single-objective level described above and applies them across the enterprise, government, or society (as appropriate). Enterprise risk management distinguishes itself from project risk management by its aims:
* Firstly, it aims to reduce duplication of risk management planning and risk mitigation strategies by facilitating cross-organisational sharing of control frameworks, management expertise, and resources.
* Secondly, it aims to minimise contradictory, counter productive and mutually exclusive risk management strategies by facilitating enterprise wide knowledge of the risk profile of the organisation.
* Thirdly, it aims to inform the governance team of their true organisation wide position on a continuous and instantaneous basis.
* Fourthly, it aims to forecast the risk profile of the organisation within, at least, the decision cycle of the governance team.
==What is Competitive Risk Management?==
So far, we have considered risk management as a stability governance tool for assisting the achievement of identified objectives. In essence it is, under this view, a defensive strategy. The scope of governance arguably extends beyond maintenance of environmental stability and achievement of defined near-term deadlines and objectives, to the identification of the correct objectives (those that succeed on some measure), and longer-term aspirational objectives such as "more profit" or, in social measures, "higher average literacy".
This shift implies two additional dimensions should be considered:
#A risk may also be an opportunity, and an impact may be both positive and negative. Where the impact is positive for the organisation, the correct corrective control response is in fact to augment the effect (such as by adjusting the causal states of other risks (opportunities)). The overall implication is that to accommodate opportunity the risk rating scheme needs to be balanced around 0 (meaning minimum risk and minimum opportunity). Whether this is best done with a positive scale and a negative scale, or with a linear scale with a floating normal line, is, I think, an implementation question at this stage.
#A risk/opportunity may have a group of controls (strategies) intended alternately to mitigate (Prevent, Detect, Correct) and augment (Focus, Sense, Enable) a risk in some way. Note that we are expanding our control groups from three to six. This is necessary where two impact rating scales are used (an opportunity scale and an impact scale). If only a single monotonic impact scale were used, e.g. "really-good to negligible to really-bad", we could possibly escape with four groups: Focus, Prevent, Detect, Correct. Focus is the opportunity's version of Prevent. The difference is that in the case of a risk, an effective preventive control reduces the residual likelihood (if not the inherent likelihood) of a causal event, while in an opportunity we want precisely the opposite outcome. Thus we need to track these separately. In the case of the two-scale system we need the "opportunity" equivalents of the detection and correction control functions separated as well.
In competitive risk management we utilise the techniques of "defensive" risk management as a method to inform competitive strategy. The same methods that are applied to determine and manage or avoid your risks can be applied to:
#determine, induce and exploit your opportunities, and select the opportunities most likely to be successfully exploited; and
#determine and trigger your competitor's risks, and where they are either most exposed, or where their responsive mitigation costs will be greatest. In this use there is an implied additional measure-countermeasure relationship between controls, where an augmentation strategy is defined that is designed to detect or counter another mitigation strategy.
In competitive risk management we therefore look to identify and exploit our opportunities and the weaknesses in others through application of risk management techniques. Such an application of the method is likely to be most effective where knowledge of the competitor or competing industry approaches perfection, and the accuracy of the model used approaches perfect accuracy. There are interesting implications for game theory where all participants in a market use equivalently competitive risk management methods and have equivalently perfect knowledge.
Competitive risk management is therefore a strategy-setting process. In both cases the analysis expands the colour of the control analysis part of our formula described in the previous section. Specifically, the changes required are to accommodate additional ratings and velocities, allowing risk and opportunity to be treated in a single function (e.g. possibly describing a parabolic or logarithmic curve as the output).
Our revised formula for competitive risk then becomes:
RO<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CF<sub>i</sub>), f(CS<sub>i</sub>), f(CE<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CFV<sub>i</sub>), f(CSV<sub>i</sub>), f(CEV<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;RO
:is expressed in a single scale such as "really-good to negligible to really-bad", or as complex numbers with two scales: a rating (high to negligible) and a binary (two position) scale - "Opportunity or Risk"
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;CF
:Means Enabling Strategies and Controls effectiveness rating at focussing causal events.
;CS
:Means Enabling Strategies and Controls effectiveness rating at sensing causal events and consequential impacts.
;CE
:Means Enabling Strategies and Controls effectiveness rating for increasing the likelihood of further causal events and enabling consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CFV
:Means Focus Control Velocity Rating for each causal event
;CSV
:Means Sensing Control Velocity Rating for each causal event and possibly some to all impacts
;CEV
:Means Enabling Control Velocity Rating for each enabling control enabling impacts and possibly some to all causal events
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating control for all impacts and possibly mitigating some to all causal events
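The sixteen-input risk/opportunity function above can likewise be sketched in code. Everything below is an assumption for illustration: the signed [-1, 1] output scale, the pairing of each control group's effectiveness with its velocity, and the choice to have the enabling groups amplify while the mitigating groups dampen.

```python
# Illustrative sketch of RO_i with six control groups. The scale, the
# (effectiveness, velocity) pairing and the combination rule are assumptions.

def ro_rating(l, lv, i, iv, focus, sense, enable, prevent, detect, correct):
    """Rate a risk/opportunity on one scale from -1 (really bad)
    through 0 (negligible) to +1 (really good).

    `i` is signed: positive for opportunity impacts, negative for risk
    impacts. Each control argument is an (effectiveness, velocity) pair
    in [0, 1]; the enabling groups (focus/sense/enable) amplify the
    outcome, the mitigating groups (prevent/detect/correct) dampen it.
    """
    exposure = l * lv * i * iv                              # signed by impact
    amplify = sum(e * v for e, v in (focus, sense, enable)) / 3
    dampen = sum(e * v for e, v in (prevent, detect, correct)) / 3
    raw = exposure * (1 + amplify) * (1 - dampen)
    return max(-1.0, min(1.0, raw))                          # clamp to scale

# An opportunity: positive impact, strong enabling controls, weak mitigation
ro = ro_rating(l=0.8, lv=0.5, i=0.6, iv=0.5,
               focus=(0.8, 0.5), sense=(0.8, 0.5), enable=(0.8, 0.5),
               prevent=(0.2, 0.5), detect=(0.2, 0.5), correct=(0.2, 0.5))
```

Because the impact carries the sign, the same function rates a pure risk (negative `i`) and a pure opportunity (positive `i`) on the single "really-good to negligible to really-bad" scale discussed above.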
<br>
==The Evolution of the Risk Management Standard==
In Australia, a team of experienced risk management practitioners was assembled over two decades to codify a standard for risk management as it had been (and was being) developed and deployed in Australia and New Zealand. That codification was initially released by Standards Australia as AS/NZS 4360:1995, revised as AS/NZS 4360:1999 and revised again in its current version as AS/NZS 4360:2004. You can access the standard via the [http://infostore.saiglobal.com/store/Details.aspx?DocN=AS0733759041AT SAI Risk Management Portal]. While still very much in its infancy as a governance tool, and immature as a management science, risk management has rapidly been adopted across the world and is now codified into an international standard, ISO 31000:2009 (October 2009), supported by ISO Guide 73:2009 - largely based on the AS/NZS standard.
==The Classical Approach==
In classical risk management - with respect to a given focus: a business, a business objective, an asset, etc. - we are told to identify the risks first, so that they can be properly managed. In its classical form, risk management asks, and attempts to answer, three questions:
*What can go wrong?
*What can I do to prevent it?
*What do I do if it happens?
You are advised to develop a risk register to document each potential problem, its level of seriousness, what is required to fix it, and who will fix the problem, and to monitor progress.
There are essentially four things you can do with risk. We will call them, the four T's:
* Tolerate it (by accepting or ignoring a risk - this is where the profit lies)
* Treat it (by actively re-mediating or controlling it)
* Transfer it (by insuring it, perhaps better described as "sharing it")
* Terminate it (by exiting the business that incurs it)
It is critical that leaders understand that risk management is NOT about avoiding risk, but about managing it.
==The Evolution of a Risk Management Thought==
The concept of risk and reward management are not new to mankind. The walls of cities and castles were early forms of risk management, and Hadrian's Wall, Agricola's Wall, Antonine Wall, and the Great Wall of China are dramatic statements of risk containment on a social scale.
History is littered with authors and thinkers exploring the relationship between risk awareness, risk exploitation, active management and outcomes. Military and political strategists have employed the concepts underpinning modern risk management for centuries. The writings of both military and political strategists such as Sun Tzu ("The Art of War"), Carl von Clausewitz ("On War"), Niccolò Machiavelli ("The Prince", "The Art of War"), and Miyamoto Musashi ("The Five Rings") are all examples of the practical application of risk awareness in strategy formation. To varying extent these works all encourage an awareness of one's own and one's opponent's weaknesses, and the mitigations and exploitation of the same.
Perhaps what is new is the codification of the process of identifying, measuring, assessing, and responding to risk laid down in the more recent writings. It would be naive, however, to consider that risk management, per se, is new. The difference between a successful manager and an unsuccessful manager has always been their ability to see the potential reward in an opportunity and strike the correct balance between ignoring, avoiding, transferring and mitigating risks. Too much risk avoidance means opportunities are not exploited, too much control or insurance means that there is no profit left from the risky activity, and too much ignorance means that eventually the strategy's angel will become history's fool.
In the absence of a formalised approach to risk management, the successful business leader is known as lucky. In truth, the success is probably more due to that leader's accident of DNA and life experience that leads to instinctively correct risk judgements. It is possibly this instinct, more than anything else, that justifies the executive salary differentials.
There is an important observation to be made from the historic context of risk management theory. Currently risk management professionals tend to view the discipline as an extension of the strategy achievement, yet historically, risk management has been as much about strategy identification and formation, as about implementation.
Good risk management looks both inward and outward. By this I mean that risk management can be applied both to minimising your chance of failure and maximising your competitor's chance of failure. The essence of the military strategist's thinking is to identify the weaknesses of the opponent and exploit them to your own advantage. Application of the principles of risk management can enable you to not only identify the opponent's weaknesses, but identify the probable strategies they will employ to manage the risks arising from those weaknesses, and hence better inform your planners about potential strategies to employ.
Over the last 50 years a number of frameworks addressing risk management with respect to governance have emerged out of the experience of the different professional groups involved in strategic management, asset protection, public accountability, finance and risk. These groups include:
* Internal Audit - focused on control system reliability
* External Audit - focused on true and fair representation of financial position on a going concern basis
* Actuarial Science - focused on the pricing of risk for insurance
* Investment banking - focused on the pricing of risk for portfolio management, hedging, capital fees and adequacy
* Risk Management - focused on management of risk to strategic and tactical outcomes on an enterprise and societal basis
Setting aside the military and political authors, among the business community, some of the earliest work in risk management arose from the financial advisory community looking for models to minimise the downside risks to financial products investment.
==A Mathematical Basis To Risk Measurement==
As early as 1952 Harry M. Markowitz published his paper "Portfolio Selection" in the Journal of Finance, exploring the advantages of risk diversification through balanced portfolio selection. The essence of portfolio theory is that risk expresses the potential for a negative return (financial loss), and that an investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly positively correlated (correlation coefficient -1 ≤ r < 1).
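The diversification effect can be checked with the standard two-asset portfolio variance formula. The weights and volatilities below are invented purely for illustration.

```python
# Numerical check of the diversification claim: combining two assets that
# are not perfectly positively correlated reduces portfolio risk.
from math import sqrt

def portfolio_vol(w1, s1, s2, rho):
    """Volatility of a two-asset portfolio with weights w1 and 1 - w1,
    asset volatilities s1 and s2, and return correlation rho."""
    w2 = 1 - w1
    var = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
    return sqrt(var)

# Perfect positive correlation (rho = 1) gives no diversification benefit...
full = portfolio_vol(0.5, 0.20, 0.20, 1.0)
# ...while any correlation below 1 pulls portfolio risk below either asset's.
partial = portfolio_vol(0.5, 0.20, 0.20, 0.3)
```

With rho = 1 the 50/50 portfolio's volatility equals the assets' own 20%; with rho = 0.3 it falls to roughly 16%, which is the whole of Markowitz's point.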
To a greater or lesser extent the professional bodies, standards organisations and government agencies have responded with guidelines and standards for the measurement, application, response and management of risk as it applies to their specific problem domains. In 1978 the Institute of Internal Auditors - the international professional body of the internal audit profession - issued its Standards for the Professional Practice of Internal Auditing (SPPIA). In one of the earliest standards-based references to risk-based management, the standards included standard 320: "Compliance with Policies, Plans, Procedures, Laws and Regulations". The statement determined that "Internal auditors should review the systems established to ensure compliance with policies, plans, procedures, laws and regulations which could have a significant impact on operations and reports, and should determine whether the organisation is in compliance". The SPPIA standards mandated the
==Alternative Standards and Views of Risk Management==
Among the definitive pronouncements on risk management are:
* The King Report on Corporate Governance for South Africa (SA King II - 2002)
* A Risk Management Standard (RMS 2004) by the Federation of European Risk Management Association (UK FERMA)
* Australian/New Zealand Standard 4360—Risk Management (A/NZ 1995, 1999, 2004)
* COSO’s Enterprise Risk Management— Integrated Framework
* The Institute of Management Accountants’ (IMA) “A Global Perspective on Assessing Internal Control over Financial Reporting” (ICoFR)
* Basel II
* Standard & Poor’s and ERM
* ISO 31000:2009
Building on the work of many years, the middle of the first decade of the millennium saw a succession of enterprise risk management (ERM) related pronouncements. AS/NZS 4360:2004 defined the risk management process as the “'''systematic application of management policies, procedures and practices to the tasks of communicating, establishing the context, identifying, analysing, evaluating, treating, monitoring and reviewing risk'''”. For the financial sector, the earlier BASEL I standard was superseded by BASEL II, which closely mirrored the view of AS/NZS 4360.
Expanding on an earlier Internal Control Framework from the early 1990s, the Committee of Sponsoring Organisations of the Treadway Commission (COSO) released the ‘Enterprise Risk Management (ERM) – Integrated Framework’, which attempted to map the COSO framework that formed the motivational basis for the US Sarbanes-Oxley compliance legislation into a broader enterprise risk management framework. The COSO/ERM framework defined enterprise risk management as:
* A process, ongoing and flowing through an entity,
* Effected by people at every level of an organisation,
* Applied in strategy setting,
* Applied across the enterprise, at every level and unit, and includes taking an entity-level portfolio view of risk,
* Designed to identify potential events that, if they occur, will affect the entity and to manage risk within its risk appetite,
* Able to provide reasonable assurance to an entity’s management and board of directors,
* Geared to achievement of objectives in one or more separate but overlapping categories.
The standards enjoy a shared purpose to improve the predictability of business outcomes, but differ significantly in how that certainty is to be improved. While 4360 describes the process for management of risk, BASEL II mandates that a firm’s operational risk management (ORM) system must be “conceptually sound and implemented with integrity”, but stops short of defining the form or process of the ORM. BASEL II does specify that the ORM should be maintained by an independent operational risk management function, and that it is to consist of at least “strategies, methodologies and risk reporting systems". It identifies that the purpose of the ORM is to "identify, measure, monitor and control/mitigate operational risk”.
Under BASEL II, the ORM systems should be:
* “credible and appropriate”,
* “well reasoned, well documented”,
* “transparent and accessible”, and
* capable of being validated by audit.
Among the failings of BASEL II is the lack of definition of these key terms, which, in a sense, is where AS/NZS 4360 and the COSO ERM Framework come in. The latter standards provide a framework under which a credible, reasoned, transparent, documented and verifiable risk management model can be established.
AS/NZS 4360 and COSO do not eliminate failure in the ORM/ERM, however, as in their implementation there is still considerable subjectivity in risk identification and assessment, and within the process documented by the standard there is no mechanism for proving or measuring "completeness". They do, however, populate the next level of the BASEL II obligation.
This problem of "completeness" in ERM frameworks should not be underestimated. It is present in all current risk management standards and is possibly a key reason for failure in ERM frameworks. We shall explore approaches to solving this problem in later papers.
Owing to their differing origins the three standards employ slightly different terminology for shared ideas:
* AS/NZS 4360 refers to ‘Risk Treatment’, COSO to ‘Risk Response’ and Basel II uses ‘Risk Mitigation’.
While the seven ‘elements’ of AS/NZS 4360:2004 framework do not align precisely with the eight ‘components’ of the COSO process, the ‘end to end’ risk management process is the same.
<table cellpadding="10" >
<tr>
<th>
AS/NZS 4360: 2004
Framework
</th>
<th>
COSO ERM–Integrated
Framework
</th>
<th>
BASEL II ORM
Framework
</th>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Internal environment
</td>
<td>
</td>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Objective setting
</td>
<td>
</td>
</tr>
<tr>
<td>
Identify risks
</td>
<td>
Event identification
</td>
<td>
Identify
</td>
</tr>
<tr>
<td>
Analyse risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Evaluate risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Treat risks
</td>
<td>
Risk response and control activities
</td>
<td>
Control/mitigate
</td>
</tr>
<tr>
<td>
Monitor and review
</td>
<td>
Monitoring
</td>
<td>
Monitor
</td>
</tr>
<tr>
<td>
Consult and communicate
</td>
<td>
Information and communication
</td>
<td>
</td>
</tr>
</table>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
==What Is Risk Management?==
===Risks, Causes & Consequences===
Risks to your operations and assets are a permanent and inescapable aspect of existence. Put simply, if you have an objective, the possibility exists that your objective may not be achieved. That possibility is risk.
Inputs required for your objective may not be available when required, the cost of the same may make the objective unviable, the social or technical assumptions may be invalidated, etc. These are threats, or causes of objective failure, and therefore causes of risk. Threats exist - some latent and some active - but all are potential causes of the failure to achieve your objective (with varying likelihoods).
Further, it may be that failure to achieve the objective, or preserve the asset may have impacts far beyond the loss of the expected benefit to be derived, or value of the asset lost. Those impacts are the consequences. For example, at the individual business level, failure to achieve a strategic objective may result in failure of the business, while on the international stage, failure to achieve a diplomatic objective may impact the society detrimentally for generations to come, and failure to protect a critical military or hazardous materials technology may result in extensive loss of life.
Lastly, a risk may not be a bad thing - it might be a good thing, more commonly known as "an opportunity". Likewise, an impact may not just range from "nothing to really bad" but from "really good to nothing to really bad". In its fullest extent risk management covers both opportunities and exposures. Most of the following discussion will consider risk management in its more common guise as managing exposures, but when we consider "Competitive Risk Management" we will once again expand the definition.
<br>
===Risk Appetite===
The degree to which these undesired outcomes are more or less certain will affect your degree of concern about them. At the extreme ends, everybody may have pretty much the same response: an undesired outcome that is virtually certain to occur will probably be judged as unacceptable, while an undesired outcome that is virtually certain not to occur will probably be judged as acceptable. Between these extremes each individual, organisation, and society will have differing determinations of acceptability. This determination is also likely to vary with the nature of the undesired outcome (for example, a 50% chance of the loss of thousands of lives is generally considered less acceptable than a 50% chance of the loss of ten dollars). This variance in judgement is the risk appetite - literally your or your organisation's willingness to passively accept the possibility of a particular type of undesired outcome.
===Risk Response, Mitigation and Control===
The reactive leader, when faced with changed circumstances, will rapidly form a response. These responses are designed to minimise the consequences of the threat event and are risk mitigation actions, or risk treatments. Of course, some responses (like avoidance or insurance) are by this time out of the question - as the threat has materialised. Faced with too many or too big a change in circumstances, even the most responsive leader can be overwhelmed, and the process fails with the objective not achieved.
A wise leader then (at least) learns from experience, and establishes processes to minimise the likelihood of similar threat events occurring (prevention), to detect when they occur (detection) and immediately respond and mitigate the consequences when they occur regardless (correction). These preplanned and pre-established processes of prevention, detection and correction are controls.
===Rating a Risk===
All controls have a cost - whether measured in money, time, tactical advantage, etc. Too much control may make the achievement of the objective unviable. The leader may judge that some threats experienced are unlikely to occur again (for example, Year 2000 date risk was a one-off, as the year 2000 is unlikely to occur again in this timeline!). Other threats will be considered almost certain - such as a sunny day melting an unrefrigerated cargo of ice cream. The probability that a threat will eventuate is its likelihood. Where the likelihood is very low, the leader may judge it is not worth the cost of controlling.
Likewise, some consequences of threat events are so minor that they can be ignored, while others are catastrophic to the objective. This judgement is the impact rating of the consequence.
The likelihood of a threat event, combined with its level of impact on the objective's achievement, constitutes the inherent risk to the achievement of the objective.
Although not yet part of the standard, over recent years an additional rating parameter has been argued for consideration: "Velocity". The velocity of a risk is the speed with which a causal event translates into an outcome. Velocity is rated inversely against time: the shorter the time it takes for a causal event to result in a specific impact, the higher the velocity.
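One way such a velocity rating might be operationalised is to band time-to-impact onto an inverse ordinal scale. The five-point scale and the day thresholds below are assumptions for illustration, not drawn from any standard.

```python
# Hypothetical mapping of time-to-impact onto an inverse 1..5 velocity
# rating: shorter onset times score higher. Bands are illustrative only.

def velocity_rating(days_to_impact, bands=(1, 7, 30, 90)):
    """Return 5 for onset within a day, down to 1 for onset beyond 90 days."""
    for score, limit in zip((5, 4, 3, 2), bands):
        if days_to_impact <= limit:
            return score
    return 1
```

The same inverse-time mapping could equally be applied on the mitigation side, rating how quickly a detection or correction control takes effect.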
Conversely, if we are going to consider a time-based measure for the onset of a risk event, we should allow for a velocity measure on the mitigation side of the equation. Here we would have two types to consider - pre-event controls (such as training and documented manuals) have a velocity measure that acts during a different phase from that during which the impact velocity is measured. The control velocities of specific interest in mitigating impact velocity are those of the reactive controls - Event (or Error) Detection and Event (or Error) Correction controls.
<blockquote>
'''NOTE:''' Controls fall into one of three groups - Prevention, Detection and Correction. The first group identifies proactive controls (although some control steps in a given strategy of controls may be reactive even here), while the latter two describe purely reactive controls. Note that under this view the process of setting up a reactive control system and training the participants and systems in the operation of that control is itself a proactive step and hence a Preventive control, while the operation of the actual control itself is, to the triggering causal event, reactive.
</blockquote>
A similar case may, on the face of it, be advanced for direct estimation of risk frequency. Specifically, such a measure is one of the frequency of a causal event - with an assessed likelihood of triggering at each cycle. The amount of time required for a single cycle from Causal Event A<sub>0</sub> to the next potential occurrence of Causal Event A at time 1, i.e. A<sub>1</sub>, is the velocity of the likelihood of a causal event once again being tested. On this basis we could again track the velocity of the likelihood.
A reasonably strong case might also be advanced that likelihood measures carry an implied frequency measurement, as people tend to rate things as more certain to occur if they are almost always occurring than when rarely experienced, even if the causal event actually occurs on those rare occasions. In this case it is argued that rating likelihood velocity in fact double-weights the likelihood rating.
This author leans to the former view. If we are separating some velocities from their coupled ratings, we should consistently apply the logic of separation to them all. On that basis the probability or reliability estimates are consistently cleansed of time subjectivity, and thence become an instantaneous rating rather than a multi-period rating of the probability, impact or dampening (control mitigation rating). In database design terms, the rating measures are normalised with respect to time. The obvious benefit is that the greater the consistency among the properties (functional and data), if not the content of those properties, the greater the reliability that the items can be combined to give a result that varies consistently with its inputs (in this case a risk rating). If some of the inputs are themselves functions of other inputs (such as time), the result of combining the various components of the risk formula will not appear to move consistently with the inputs.
A further benefit of separating velocity information is the colour it might bring to the risk analysis. One can picture a risk model where the assessment of an otherwise well rated risk against likelihood velocity (think: frequency), impact velocity (think: "How quickly will this hit us?"), preventive control velocity (think: "How long will it take for the training to be completed?"), detection control velocity (think: "How quickly will we know that the wheels have fallen off?") and correction control velocity (think: "How quickly will we have cleaned up the mess?") might reveal some fascinating structural problems in a control system. Consider a 12 month wait for detection controls to be in place against a high-to-medium impact event happening every week, where those detection controls tell us only at the end of a quarter that a problem occurred that will take 6 months to fix. We would certainly like to know this, even though individually all these controls got the highest ratings in terms of effectiveness. Of course, if our risk formula dealt with these items properly as part of its model, we would not have a well rated risk with such problems!
Expressed as a formula where f() means a function of the items in parentheses, the risk equation with all these potential inputs is then:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(C<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;C
:Means Mitigating Strategies and Controls effectiveness rating mitigating causal events and consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each corrective control mitigating each impact and possibly some to all causal events
This formula says nothing more than that the risk rating is a function of eight variables - whole-of-risk likelihood, likelihood velocity, impact and impact velocity, mitigated by whole-of-risk control effectiveness-reliability working over three velocities: prevention control velocity, detection control velocity and correction control velocity. In turn, the value supplied for each of these ratings is itself a function mapping the assessed value of the rating to a normalised value (such as the range of reals from -1 to 1, or a shared 5 point scale, etc.)
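The article does not prescribe a concrete functional form for f(), so the following is a minimal sketch only. It assumes every rating has been normalised to [0, 1], that likelihood and impact velocities amplify exposure, and that control effectiveness, discounted by the slowest control-group velocity, dampens it. All of these combination rules are illustrative assumptions, not the article's model.

```python
def risk_rating(L, LV, I, IV, C, CPV, CDV, CCV):
    """Illustrative eight-input risk function (all inputs assumed
    normalised to [0, 1]). The combination rules below are assumptions
    for the sketch, not a prescribed model."""
    # Likelihood and impact, each amplified by its velocity rating.
    exposure = (L * (1 + LV)) * (I * (1 + IV))
    # The slowest control group limits how quickly mitigation takes effect.
    dampening = C * min(CPV, CDV, CCV)
    return exposure * (1 - dampening)
```

With no controls (C = 0) the rating is pure exposure; with fully effective, fully fast controls it falls to zero, which is the consistency-of-movement property argued for above.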
The weakness in this formula lies in the consolidation of the three control groups into a single control rating for the purposes of the risk function itself (thus hiding the relationship between the control group velocities and the control group ratings). Separating the control groups gives:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each corrective control mitigating each impact and possibly some to all causal events
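The separated form can be sketched by giving each control group its own effectiveness and velocity term, making visible the relationship that a consolidated rating hides. The equal weighting of the three groups below is an assumption for illustration only:

```python
def control_dampening(CP, CD, CC, CPV, CDV, CCV):
    """Illustrative separated-control dampening: each control group's
    effectiveness (CP, CD, CC) is discounted by that group's own
    velocity rating (CPV, CDV, CCV), all assumed normalised to [0, 1].
    Equal weighting of the three groups is an assumption."""
    return (CP * CPV + CD * CDV + CC * CCV) / 3
```

A highly effective detection control with a very low velocity now visibly contributes little to the dampening, whereas the single consolidated rating C would mask exactly this structural problem.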
<br>
===From Risk Response To Risk Management===
Faced with a similar objective at another time, the prudent leader moves from re-action to pre-action. He applies his own and others' past experience, "common sense", and deductive reasoning when identifying the nature and causes of potential threat events and their consequences. He makes judgements as to the likelihood of these identified threats, and judgements as to the degree of impact arising from the consequences. This process is risk identification and assessment.
Comparing this assessment to the organisation's risk appetite, he determines a range of risk responses, treatments or controls. With the shift to pre-action (more commonly described as proactive management), the leader's options widen compared to the earlier reactive state. By preplanning the risk profile he is able to consider avoidance (just don't do it!), risk sharing (insurance) and threat prevention (training) as options in the risk mitigation armoury. Further, the costs of each mitigation strategy can be considered against the benefits expected from the achievement of the objective, and the most effective, efficient and economic ones chosen.
In all cases a threat has a "tell-tale" which must be used to detect that the threat has eventuated, or that the likelihood of a threat has changed. The controls required in this case are detection controls. As with the other pre-action choices, detection controls are most advantageous before the event occurs - once it has occurred they will generally tell you what you already know. This shifting assessment of risk based on the changes in likelihood over time is the current risk.
Implementing detection controls allows the leader to defer the implementation (if not the planning, design and establishment) of other reactive controls, thus delivering a degree of certainty over the costs of mitigation at each point in a project, under a variety of circumstances and levels of current risk.
Once the controls (or risk mitigation plan) are applied to the assessed inherent risk of the objective, the result is the residual risk - that portion of the inherent or current risk that remains after the controls have been applied.
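As a worked illustration of the inherent/residual relationship (the simple linear dampening model here is an assumption for the sketch, not the article's definition):

```python
def residual_risk(inherent, control_effectiveness):
    """Residual risk: the portion of inherent (or current) risk that
    remains after controls are applied. The linear model used here is
    an illustrative assumption."""
    return inherent * (1 - control_effectiveness)

# An inherent risk rated 0.8 with controls rated 75% effective
# leaves a residual risk of 0.2.
```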
Risk Management is about applying a structured thought process to identifying and managing such risks.
In one form or another, every leader undertakes risk management from the minute he or she establishes a political ideology, manifesto, business vision, organisational mission, or business or political objective. Without a plan - however loosely defined - the objective is unlikely to be achieved. That plan is a map to managing risks to the non-achievement of the objective - starting with the most obvious risk: "inaction".
While Compliance Management is about a governance process for managing adherence to internally and externally known standards, policies, procedures, and controls, Risk Management is an approach to governance that aims to identify what plans, standards, policies, procedures, and controls are required, how important each part is to the purpose, and when additional actions will be required. Risk Management is a systematic process of making a realistic evaluation of the true level of risks to your purpose, and mitigating those risks that exceed your risk appetite in the most efficient, effective and economic manner possible.
==What Is Enterprise Risk Management?==
Enterprise Risk Management takes the concepts outlined above at the project or single-objective level and applies them across the enterprise, government, or society (as appropriate). Enterprise risk management distinguishes itself from project risk management by its aims:
* Firstly, it aims to reduce duplication of risk management planning and risk mitigation strategies by facilitating cross-organisational sharing of control frameworks, management expertise, and resources.
* Secondly, it aims to minimise contradictory, counter productive and mutually exclusive risk management strategies by facilitating enterprise wide knowledge of the risk profile of the organisation.
* Thirdly, it aims to inform the governance team of their true organisation wide position on a continuous and instantaneous basis.
* Fourthly, it aims to forecast the risk profile of the organisation within, at least, the decision cycle of the governance team.
==What is Competitive Risk Management?==
So far, we have considered risk management as a stability governance tool for assisting the achievement of identified objectives. In essence, under this view, it is a defensive strategy. The scope of governance arguably extends beyond maintenance of environmental stability and achievement of defined near-term deadlines and objectives, to the identification of the correct objectives (those that succeed on some measure), and of longer term aspirational objectives such as "more profit" or, in social measures, "higher average literacy".
This shift implies two additional dimensions should be considered:
#A risk may also be an opportunity, and an impact may be both positive and negative. Where the impact is positive for the organisation, the correct corrective control response is in fact to augment the effect (such as by adjusting the causal states of other risks (opportunities)). The overall implication is that to accommodate opportunity the risk rating scheme needs to be balanced around 0 (meaning minimum risk and minimum opportunity). Whether this is best done with a positive scale and a negative scale, or with a linear scale with a floating normal line, is, I think, an implementation question at this stage.
#A risk/opportunity may have a group of controls (strategies) intended alternately to mitigate (Prevent, Detect, Correct) and augment (Focus, Sense, Enable) a risk in some way. Note that we are expanding our control groups from three to six. This is necessary where two impact rating scales are used (an opportunity scale and an impact scale). If only a single monotonic impact scale were used, e.g. "really-good to negligible to really-bad", we could possibly escape with four groups: Focus, Prevent, Detect, Correct. Focus is the opportunity's version of Prevent. The difference is that in the case of a risk, an effective preventive control reduces the residual likelihood (if not the inherent likelihood) of a causal event, while for an opportunity we want precisely the opposite outcome. Thus we need to track these separately. In the two scale system we need the "opportunity" equivalents of the detection and correction control functions separated as well.
In competitive risk management we utilise the techniques of "defensive" risk management as a method to inform competitive strategy. The same methods that are applied to determine and manage or avoid your risks can be applied to:
#determine, induce and exploit your opportunities, and select the opportunities most likely to be successfully exploited; and
#determine and trigger your competitor's risks, and where they are either most exposed, or where their responsive mitigation costs will be greatest. In this use there is an implied additional measure-counter measure relationship between controls where an augmentation strategy is defined that is designed to detect or counter another mitigation strategy.
In competitive risk management we therefore look to identify and exploit our opportunities and the weaknesses in others through the application of risk management techniques. Such an application of the method is likely to be most effective where knowledge of the competitor or competing industry approaches perfection, and the accuracy of the model used approaches perfect accuracy. There are interesting implications for game theory where all participants in a market use equivalently competitive risk management methods and have equivalently perfect knowledge.
Competitive risk management is therefore a strategy setting process. In both cases the analysis expands the colour of the control analysis part of the formula described in the previous section. Specifically, the changes required are to accommodate additional ratings and velocities, and to allow risk and opportunity to be treated in a single function (e.g. one possibly describing a parabolic or logarithmic curve as its output).
Our revised formula for competitive risk then becomes:
RO<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CF<sub>i</sub>), f(CS<sub>i</sub>), f(CE<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CFV<sub>i</sub>), f(CSV<sub>i</sub>), f(CEV<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;RO
:is expressed on a single scale such as "really-good to negligible to really-bad", or as complex numbers with two scales: a rating (high to negligible) and a binary (two position) scale - "Opportunity or Risk"
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;CF
:Means Enabling Strategies and Controls effectiveness rating at focussing causal events.
;CS
:Means Enabling Strategies and Controls effectiveness rating at sensing causal events and consequential impacts.
;CE
:Means Enabling Strategies and Controls effectiveness rating for increasing the likelihood of further causal events and enabling consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CFV
:Means Focus Control Velocity Rating for each causal event
;CSV
:Means Sensing Control Velocity Rating for each causal event and possibly some to all impacts
;CEV
:Means Enabling Control Velocity Rating for each enabling control enabling impacts and possibly some to all causal events
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating control for all impacts and possibly mitigating some to all causal events
<br>
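The single signed scale idea above can be sketched minimally: assume a range of [-1, 1] where negative values denote net opportunity, positive values net risk, and 0 the negligible/normal line. The range and the simple subtraction used to balance the two sides are assumptions for illustration:

```python
def net_rating(threat_exposure, opportunity_exposure):
    """Balance threat against opportunity around the zero line
    (both inputs assumed normalised to [0, 1])."""
    return threat_exposure - opportunity_exposure

def classify(rating):
    """Interpret a combined rating on the assumed signed scale:
    -1 (maximum opportunity) .. 0 (negligible) .. +1 (maximum risk)."""
    if rating > 0:
        return "risk"
    if rating < 0:
        return "opportunity"
    return "negligible"
```

The alternative two-scale design would instead carry the rating and an explicit "Opportunity or Risk" flag as a pair; which is preferable remains, as argued above, an implementation question.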
==The Evolution of the Risk Management Standard==
In Australia, a team of experienced risk management practitioners was assembled over two decades to codify a standard for risk management as it had been (and was being) developed and deployed in Australia and New Zealand. That codification was initially released by Standards Australia as AS/NZS 4360:1995, revised as AS/NZS 4360:1999 and revised again in its current version as AS/NZS 4360:2004. You can access the standard via the [http://infostore.saiglobal.com/store/Details.aspx?DocN=AS0733759041AT SAI Risk Management Portal]. While still very much in its infancy as a governance tool, and immature as a management science, risk management has rapidly been adopted across the world and is now codified into an international standard, ISO 31000:2009 (October 2009), supported by ISO Guide 73:2009 - both largely based on the AS/NZS standard.
==The Classical Approach==
In classical risk management - with respect to a given focus: a business, a business objective, an asset, etc. - we are told to identify the risks first, so that they can be properly managed. In its classical form, risk management asks, and attempts to answer, three questions:
*What can go wrong?
*What can I do to prevent it?
*What do I do if it happens?
You are advised to develop a risk register to document each potential problem, its level of seriousness, what is required to fix it, who will fix the problem, and monitor progress.
There are essentially four things you can do with risk. We will call them, the four T's:
* Tolerate it (by accepting or ignoring a risk - this is where the profit lies)
* Treat it (by actively re-mediating or controlling it)
* Transfer it (by insuring it, perhaps better described as "sharing it")
* Terminate it (by exiting the business that incurs it)
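The four T's can be sketched as a decision rule. The precedence of the options and the threshold test against risk appetite below are assumptions for illustration only:

```python
def choose_response(residual, appetite, treatable, insurable):
    """Illustrative mapping from a residual risk rating to one of the
    four T's. Thresholds and precedence are assumptions for the sketch."""
    if residual <= appetite:
        return "Tolerate"   # accept it - this is where the profit lies
    if treatable:
        return "Treat"      # actively remediate or control it
    if insurable:
        return "Transfer"   # share it, e.g. through insurance
    return "Terminate"      # exit the activity that incurs it
```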
It is critical that leaders understand that risk management is NOT about avoiding risk, but about managing it.
==The Evolution of a Risk Management Thought==
The concepts of risk and reward management are not new to mankind. The walls of cities and castles were early forms of risk management, and Hadrian's Wall, Agricola's Wall, the Antonine Wall, and the Great Wall of China are dramatic statements of risk containment on a social scale.
History is littered with authors and thinkers exploring the relationship between risk awareness, risk exploitation, active management and outcomes. Military and political strategists have employed the concepts underpinning modern risk management for centuries. The writings of both military and political strategists such as Sun Tzu ("The Art of War"), Carl von Clausewitz ("On War"), Niccolò Machiavelli ("The Prince", "The Art of War"), and Miyamoto Musashi ("The Five Rings") are all examples of the practical application of risk awareness in strategy formation. To varying extent these works all encourage an awareness of one's own and one's opponent's weaknesses, and the mitigations and exploitation of the same.
Perhaps what is new is the codification of the process of identifying, measuring, assessing, and responding to risk laid down in the more recent writings. It would be naive, however, to consider that risk management, per se, is new. The difference between a successful manager and an unsuccessful one has always been the ability to see the potential reward in an opportunity and strike the correct balance between ignoring, avoiding, transferring and mitigating risks. Too much risk avoidance means opportunities are not exploited; too much control or insurance means that there is no profit left from the risky activity; and too much ignorance means that eventually the strategy's angel will become history's fool.
In the absence of a formalised approach to risk management, the successful business leader is known as lucky. In truth, the success is probably more due to that leader's accident of DNA and life experience, which leads to instinctively correct risk judgements. It is possibly this instinct, more than anything else, that justifies the executive salary differentials.
There is an important observation to be made from the historic context of risk management theory. Currently risk management professionals tend to view the discipline as an extension of the strategy achievement, yet historically, risk management has been as much about strategy identification and formation, as about implementation.
Good risk management looks both inward and outward. By this I mean that risk management can be applied both to minimising your chance of failure and to maximising your competitor's chance of failure. The essence of the military strategist's thinking is to identify the weaknesses of the opponent and exploit them to your own advantage. Application of the principles of risk management can enable you not only to identify the opponent's weaknesses, but to identify the probable strategies they will employ to manage the risks arising from those weaknesses, and hence better inform your planners about potential strategies to employ.
Over the last 50 years a number of frameworks addressing risk management with respect to governance have emerged out of the experience of the different professional groups involved in strategic management, asset protection, public accountability, finance and risk. These groups include:
* Internal Audit - focused on control system reliability
* External Audit - focused on true and fair representation of financial position on a going concern basis
* Actuarial Science - focused on the pricing of risk for insurance
* Investment banking - focused on the pricing of risk for portfolio management, hedging, capital fees and adequacy
* Risk Management - focused on management of risk to strategic and tactical outcomes on an enterprise and societal basis
Setting aside the military and political authors, among the business community, some of the earliest work in risk management arose from the financial advisory community looking for models to minimise the downside risks to financial products investment.
==A Mathematical Basis To Risk Measurement==
As early as 1952, Harry M Markowitz published his paper "Portfolio Selection" in the Journal of Finance, exploring the advantages of risk diversification through balanced portfolio selection. The essence of portfolio theory is that risk essentially expresses the potential for a negative return (financial loss), and that an investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly positively correlated (correlation coefficient -1 ≤ r < 1).
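The diversification effect can be checked with the standard two-asset portfolio variance formula from portfolio theory, σ<sub>p</sub>² = w₁²σ₁² + w₂²σ₂² + 2w₁w₂ρσ₁σ₂ (standard finance textbook material, not specific to this article):

```python
import math

def portfolio_std(w1, s1, w2, s2, rho):
    """Standard deviation of a two-asset portfolio: weights w1, w2,
    asset standard deviations s1, s2, correlation coefficient rho."""
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(variance)

# With rho = 1 (perfect positive correlation) there is no diversification
# benefit; any rho < 1 pulls portfolio risk below the weighted average.
```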
To a greater or lesser extent the professional bodies, standards organisations and government agencies have responded with guidelines and standards for the measurement, application, response and management of risk as it applies to their specific problem domains. In 1978 the Institute of Internal Auditors - the international professional body of the Internal Audit profession - issued its Standards for the Professional Practice of Internal Auditing (SPPIA). In one of the earliest standards-based references to risk based management, the standards included standard 320: "Compliance with Policies, Plans, Procedures, Laws and Regulations". The statement determined that "Internal auditors should review the systems established to ensure compliance with policies, plans, procedures, laws and regulations which could have a significant impact on operations and reports, and should determine whether the organisation is in compliance". The SPPIA standards mandated the
==Alternative Standards and Views of Risk Management==
Among the definitive pronouncements on risk management are:
* The King Report on Corporate Governance for South Africa (SA King II - 2002)
* A Risk Management Standard (RMS 2004) by the Federation of European Risk Management Associations (UK FERMA)
* Australian/New Zealand Standard 4360—Risk Management (A/NZ 1995, 1999, 2004)
* COSO’s Enterprise Risk Management— Integrated Framework
* The Institute of Management Accountants’ (IMA)
* “A Global Perspective on Assessing Internal Control over Financial Reporting” (ICoFR)
* Basel II
* Standard & Poor’s and ERM
* ISO 31000:2009
Building on the work of many years, the middle of the first decade of the millennium saw a succession of enterprise risk management (ERM) related pronouncements. AS/NZS 4360:2004 defined the risk management process as the “'''systematic application of management policies, procedures and practices to the tasks of communicating, establishing the context, identifying, analysing, evaluating, treating, monitoring and reviewing'''”. For the financial sector, the earlier BASEL I standard was superseded by BASEL II, which closely mirrored the view of AS/NZS 4360.
Expanding on an earlier Internal Control Framework from the early 1990's, the Committee of Sponsoring Organisations of the Treadway Commission (COSO) released the ‘Enterprise Risk Management (ERM) – Integrated Framework’, which attempted to map the COSO framework that formed the motivational basis for the US Sarbanes-Oxley compliance legislation into a broader enterprise risk management framework. The COSO/ERM framework defined enterprise risk management as:
* A process, ongoing and flowing through an entity,
* Effected by people at every level of an organisation,
* Applied in strategy setting,
* Applied across the enterprise, at every level and unit, and includes taking an entity-level portfolio view of risk,
* Designed to identify potential events that, if they occur, will affect the entity and to manage risk within its risk appetite,
* Able to provide reasonable assurance to an entity’s management and board of directors,
* Geared to achievement of objectives in one or more separate but overlapping categories.
The standards enjoy a shared purpose to improve the predictability of business outcomes, but differ significantly in how that certainty is to be improved. While 4360 describes the process for management of risk, BASEL II mandates that a firm’s operational risk management (ORM) system must be “conceptually sound and implemented with integrity”, but stops short of defining the form or process of the ORM. BASEL II does specify that the ORM should be maintained by an independent operational risk management function, and that it is to consist of at least “strategies, methodologies and risk reporting systems". It identifies that the purpose of the ORM is to "identify, measure, monitor and control/mitigate operational risk”.
Under BASEL II, the ORM systems should be:
* “credible and appropriate”,
* “well reasoned, well documented”,
* “transparent and accessible”, and
* capable of being validated by audit.
Among the failings of BASEL II is the lack of definition of these key terms, which, in a sense, is where AS/NZS 4360 and the COSO ERM Framework come in. The latter standards provide a framework under which a credible, reasoned, transparent, documented and verifiable risk management model can be established.
AS/NZS 4360 and COSO do not eliminate failure in the ORM/ERM, however, as in their implementation there is still considerable subjectivity in risk identification and assessment, and within the process documented by the standard there is no mechanism for proving or measuring "completeness". They do, however, populate the next level of the BASEL II obligation.
This problem of "completeness" in ERM frameworks should not be underestimated. It is present in all current risk management standards and is possibly a key reason for failure in ERM frameworks. We shall explore approaches to solving this problem in later papers.
Owing to their differing origins the three standards employ slightly different terminology for shared ideas:
* AS/NZS 4360 refers to ‘Risk Treatment’, COSO to ‘Risk Response’ and Basel II uses ‘Risk Mitigation’.
While the seven ‘elements’ of AS/NZS 4360:2004 framework do not align precisely with the eight ‘components’ of the COSO process, the ‘end to end’ risk management process is the same.
<table cellpadding="10" >
<tr>
<th>
AS/NZS 4360: 2004
Framework
</th>
<th>
COSO ERM–Integrated
Framework
</th>
<th>
BASEL II ORM
Framework
</th>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Internal environment
</td>
<td>
</td>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Objective setting
</td>
<td>
</td>
</tr>
<tr>
<td>
Identify risks
</td>
<td>
Event identification
</td>
<td>
Identify
</td>
</tr>
<tr>
<td>
Analyse risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Evaluate risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Treat risks
</td>
<td>
Risk response and control activities
</td>
<td>
Control/mitigate
</td>
</tr>
<tr>
<td>
Monitor and review
</td>
<td>
Monitoring
</td>
<td>
Monitor
</td>
</tr>
<tr>
<td>
Consult and communicate
</td>
<td>
Information and communication
</td>
<td>
</td>
</tr>
</table>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
0c92f2577353da0d73bf684aee6689d18b9f93ee
BPC RiskManager V6.2 Network Architecture
0
4
6
2010-08-07T13:41:23Z
Bishopj
1
wikitext
text/x-wiki
[[Image:BPCRM NetDiag.png]]
BPC RiskManager is an N-Tier application. The primary layers are:
* Database Server layer
* Application Server layer
* Client layer
The core application set does not require a web server but certain optional capabilities do.
You will require a web server if you will be:
* Using the browser plugin client component.
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
While the browser plugin client component can be served by any brand of web server, you will require IIS 5+ if you plan to be:
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
Both of these capabilities use ISAPI libraries running on an IIS server. If you will be using the HTTPS communication protocol, you will also need an SSL certificate installed on the web server.
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
96accad095e3f378d468445d6bc5231ced78bf76
BPC RiskManager Frequently Asked Questions
0
5
8
2010-08-07T14:37:54Z
Bishopj
1
wikitext
text/x-wiki
{|width="100%"
|width="60%" VALIGN="Top" |
# [[How do I get a copy of BPC RiskManager V6.2.5?]]
# [[Would it be possible to get a copy of the BPC RiskManager V6 installation guide?]]
# [[Is there a feature listing for the BPC RiskManager windows client and the browser client?]] We are looking at the possibility of using a mixed client environment based on user-specific needs and locations.
# [[When are multiple BPC RIskManager server licenses required?]] We are looking to have RM implemented across a group of companies. They will all be using the same instance with the same fields and definitions, as the subject matter is the same. Can we use a single server license or will we require multiple server licenses?
# [[Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?]]
# [[Does your license include the cost of MS SQL Server ?]]
# [[I just purchased BPC RiskManager. Will you be sending the install disks, and when?]]
# [[What will need to be arranged prior to the installing BPC RiskManager?]]
# [[Does the RiskManager client application work with FireFox browsers?]]
# [[In what programming language is BPC RiskManager written?]]
# [[Does the RiskManager plug-in itself have a certificate like a java applet does?]]
# [[For support, what type of support is available (i.e.: email, phone, onsite, etc...)?]]
# [[What is the best way to get support?]]
# [[How do I arrange installation support and what is the timeline?]]
# [[What support packages are available and at what cost?]]
# [[Is there a cost associated with telephone support (i.e.: cost per call or issue)?]]
# [[How do I get custom features added, or request new features for BPC RiskManager?]]
# [[Is there a User Group Forum?]]
# [[What type of documentation, technical and user is available for BPC RiskManager?]]
# [[How does one decide the optimum BPC RiskManager configuration?]]
# [[Is BPC RiskManager a Client-Server application?]]
# [[What is the difference between the browser plugin and the windows executable RiskManager client?]]
# [[Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?]]
# [[Database support: Which database choice will give us the best level of support?]]
# [[Security: What is the most secure architecture for BPC RiskManager?]]
# [[What is the best client version - the browser or non browser Risk Manager client?]]
# [[What admin account rights are required to setup a browser plug-in?]]
# [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7|How do I configure IE for the RiskManager browser plugin?]]
# [[BPC RiskManager Server - After installing in production or adding an application server|We just ported our enterprise system to a new server and I can't login. What do I do now?]]
# [[Steps For Migrating RiskManager V6.x from Test To Production|How do I port BPC RiskManager from test (or dev) to production?]]
# [[BPC RiskManager V6 on 64 bit Windows|How do I install BPC RiskManager onto a computer running a 64bit Windows OS?]]
| VALIGN="Top"|
<noinclude>
{|align="right" width="100%" cellpadding="10px"
|- style="background-color:#FFEBCD; " width="100%"
|'''A Frequently Asked Question is...'''
|-
|<div class="didyouknow2" STYLE="height: 600px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=3000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;"></div>
|}
</noinclude>
|}
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:Bishop Phillips Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinksCategoryHead|CT=RiskManager FAQ|CN=The frequently asked Questions Category}}
</noinclude>
25cfdeccbd4a292afa2715e0cff010008b205d54
BPC SurveyManager Web Client Manual: Accessing
0
274
291
2010-11-15T17:00:31Z
Bishopj
1
wikitext
text/x-wiki
=Section 1. Accessing the BPC Survey Manager=
==1.1 Connect to the ACFE and BPC SurveyManager web site==
===1.1.1 ACFE clients: Accessing the ACFE Learners Satisfaction Survey Website===
ACFE users or ACE providers wishing to access the Learners Satisfaction Survey Management Website should access the ACFE BPC SurveyManager site and log into your ACE organisation using your issued organisation administration credentials. There you will find, added to your survey list, the LSS for the current year. This link will be in the email provided by your regional coordinator.
Connect to the Survey Manager site:
[http://acfe.bishopphillips.com/ ACFE BPC Survey Manager Website]
===1.1.2 Other clients: Accessing Survey Manager Management Website===
All other users, including BPC RiskManager users, should use the link provided by your BPC SurveyManager hosting provider. This link and any site-specific instructions will be in the email you received on activation.
BPC RiskManager users have the further option of using the BPC RiskManager client to build and manage surveys. This manual, however, covers the use of the BPC SurveyManager Management website.
==1.2 Starting The BPC Survey Manager Web Client==
===1.2.1 The BPC SurveyManager Launch Page===
There are now two ways to access the BPC Survey Manager system using a web browser:
# The Survey Manager (maintenance and management system)
# The Survey Portal
'''''The first - Survey Manager -''''' (the maintenance and management system) provides facilities for creating, editing, publishing, and maintaining surveys, as well as maintaining users and responders, viewing reports and results, etc.
'''''The second - the Survey Portal -''''' must first be enabled using the Survey Manager system, but once enabled it can be used for anonymous survey entry, class/course-based survey response recording, etc., without publishing surveys to users first. Where you do not wish to track responses by student ID, are not emailing survey invitations to responders (such as students or staff), are entering surveys from hard-copy responses, or are using class/course-based survey collection in (for example) your own computer labs, this new facility may be of interest.
The survey portal requires a password to access, but once accessed it will launch any survey assigned to it. Surveys must have their security flags set appropriately for portal use: generally either "Login Required" or "Allow Anonymous". In the former case, a responder who has not yet logged in will be presented with a login screen before proceeding to the survey. In the latter case, the survey engine generates a random identifier for the responder on each survey access. Surveys entered through the portal with anonymous response must therefore be completed in a single sitting, rather than across multiple sittings as is possible with emailed invitations or "Login Required" surveys. The survey portal with anonymous access is not appropriate where a known list of responders is entering responses remotely, as the random identifiers will prevent you from working out who has responded and who has not.
This manual covers the Survey Manager (maintenance and management system) access method first, as you will need to use it in any case, if only to enable anonymous portal use.
You can use the traditional emailed (invited) responder method and the login or anonymous portal-based methods simultaneously on the one survey, and you can mix all the methods across multiple surveys.
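The portal access rules above can be sketched in a few lines. This is a hypothetical illustration (the function, labels, and identifier format are invented for clarity, not the real BPC SurveyManager API): "Login Required" surveys prompt for a login and can be resumed, while "Allow Anonymous" surveys receive a fresh random identifier on each access and must be completed in one sitting.

```python
# Hypothetical sketch of the portal security-flag behaviour described
# above. Names and return values are illustrative only.

import secrets

def portal_access(survey_flag, logged_in):
    """Return (responder_identity, resumable) for one portal survey access."""
    if survey_flag == "Login Required":
        if not logged_in:
            # Responder must log in before proceeding to the survey.
            return ("login screen", False)
        # Known responder: the survey can be completed across sittings.
        return ("known responder", True)
    if survey_flag == "Allow Anonymous":
        # A fresh random identifier per access means the response
        # cannot be resumed later: single sitting only.
        return ("anon-" + secrets.token_hex(4), False)
    raise ValueError("survey is not flagged for portal use")
```

The single-sitting limitation follows directly from the design choice: because the identifier is regenerated on every access, a returning anonymous responder cannot be matched to an earlier partial response.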
==1.3 All Users - Start and Log into BPC Survey Manager Web Client==
===1.3.1 Starting BPC SurveyManager WC===
The BPC Survey Manager Web Client and the Survey Portal are usually launched from the same launch page. The launch page is a static page which will always be visible if the hosting web server is running. For example, the ACFE launch page:
[[IMAGE:1_ACFESureveyManagerLaunchPage.jpg]]
Select the button that launches the Survey Manager application (not the Survey Portal). In the example page above that button is the "Enter the ACE Survey System".
===1.3.2 Logging into BPC SurveyManager WC===
On the login screen you can both log in and request that your login details be sent to the email address recorded for the ID you are using. The process for requesting your login details is covered in the next section.
*Step 1: Select your organisation from the drop down list. ACFE and ACE users should select your ‘Training Organisation Identifier’ (TOID) from the drop down list. Other users should select the organisation unit advised to you with your login credentials, or any organisation to which you have subsequently been granted access.
*Step 2: Your username and password will have been provided to you by ACFE for ACFE and ACE users and by Bishop Phillips Consulting or your enterprise survey manager for other users.
# Enter your ‘User name’ (case sensitive).
# Enter your ‘Password’ (case sensitive).
# Click ‘Log In’.
[[IMAGE:2_BPCSurveyManagerWCLoginPage.jpg]]
After login you will be presented with the Survey List screen for the current organisation. From this screen you can access all the capabilities of the survey manager web client.
[[IMAGE:3_BPCSurveyManagerWCSurveyListScreenPNA.jpg]]
===1.3.3 Request your BPC SurveyManager WC login details===
On the login screen you can both log in and request that your login details be sent to the email address recorded for the ID you are using. You must already have a valid login account for this process to work.
*Step 1: Select your organisation from the drop down list. ACFE and ACE users should select your ‘Training Organisation Identifier’ (TOID) from the drop down list. Other users should select the organisation unit advised to you with your login credentials, or any organisation to which you have subsequently been granted access.
*Step 2: Your username will have been provided to you by ACFE for ACFE and ACE users and by Bishop Phillips Consulting or your enterprise survey manager for other users.
# Enter your ‘User name’ (case sensitive).
# Leave the ‘Password’ blank.
# Click ‘I Forgot My Password’.
The system will look up your user ID and send your login details to the email address recorded as belonging to that user name (user ID).
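The reminder behaviour can be sketched as a lookup-and-mail step. This is a hypothetical illustration only (the function, data shape, and mail fields are invented; they are not the real system): the user name is matched case-sensitively, and the login details are sent to the email address recorded for that account.

```python
# Hypothetical sketch of the password-reminder flow described above.
# The users dict, field names, and message format are illustrative.

def forgot_password(users, user_name):
    """Look up user_name (case sensitive) and return the outgoing mail, or None."""
    account = users.get(user_name)  # exact-case match only
    if account is None:
        return None                 # unknown account: nothing is sent
    return {
        "to": account["email"],     # the email recorded for that user ID
        "subject": "Your SurveyManager login details",
        "body": "User name: %s\nPassword: %s" % (user_name, account["password"]),
    }

# Example with an invented account record.
users = {"jsmith": {"email": "jsmith@example.org", "password": "s3cret"}}
mail = forgot_password(users, "jsmith")
```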
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
f53f0fb081d310f9d844ea9526493c00101bbd60
BPC SurveyManager - Web Client Manual
0
273
289
2010-11-16T15:08:25Z
Bishopj
1
wikitext
text/x-wiki
=BPC SurveyManager Web Client Manual=
==Introduction==
BPC SurveyManager is comprised of five logical parts:
# BPC SurveyManager Engine - this delivers the surveys and reports and performs a wide range of management functions in a stateless mode. It has no direct user interface, but is best thought of as a library of survey-capable routines and an interpreter of BPC SurveyManager "script" that dynamically constructs web pages on demand, according to the author's design.
# BPC Survey Manager Management WebClient - this application is a stateful pure browser based management solution for the BPC SurveyManager system. The web client surfaces the most commonly required capabilities of the SurveyManager system and presents them in a way intended for novice users to create, distribute, publish, manage and report surveys across multiple organisation units and regions.
# BPC SurveyManager Portal - the portal is really a function of the BPC SurveyManager Engine, allowing an organisation to selectively publish surveys to an indefinite number of portals. A portal is a page that responders can use as a fixed entry point to find and complete the surveys available to them. It is one of several channels through which a responder can respond to a survey opportunity.
# BPC SurveyManager DeskTop client - the most powerful SurveyManager management client, which enables all the capabilities of the SurveyManager system to be used (including distributed survey databases, and remote and partially connected users). It is only available as an installable client-server application. This component is not distributed with any BPC application, but is supplied on request and on acceptance of the conditions for its use.
# BPC SurveyManager N-Tier library - The library supports the N-Tier application-server structure of other BPC applications like BPC RiskManager which use it to provide an advanced survey manager management client directly in the body of an MS Windows installed application.
This manual covers ONLY the BPC SurveyManager Management Web Client application.
==Introduction for ACFE and ACE users==
In order to assist ACFE and ACE users we have included additional notes or alternative instructions where appropriate in a section marked clearly for these groups' attention.
==Contents==
* [[BPC SurveyManager Web Client Manual: Accessing]]
* [[BPC SurveyManager Web Client Manual: Home (ACFE/ACE) - Working with The LSS]]
* [[BPC SurveyManager Web Client Manual: Home - The Survey List Page]]
* [[BPC SurveyManager Web Client Manual: Creating the list of respondents]]
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
33f6591c669e7be31839c9ab7b720df94763a741
BPC SurveyManager Web Client Manual: Creating the list of respondents
0
276
295
2010-11-16T15:09:26Z
Bishopj
1
New page: [[Category:BPC SurveyManager Web Client Manual]] <noinclude>{{BackLinks}} </noinclude>
wikitext
text/x-wiki
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
3f1a5aef4adfaf5cd49fb337d211068a3381e226
BPC SurveyManager Web Client Manual: Home - The Survey List Page
0
275
293
2010-11-16T15:45:11Z
Bishopj
1
/* Creating a Survey */
wikitext
text/x-wiki
=SECTION 2B. The Survey List Page=
==Introduction - After Login==
After login you will be presented with the Survey List page. Think of this as your Survey Manager "home" page. From here you can reach everything you need. Unless you are an ACFE/ACE user, the first time you access your organisation you will generally find no surveys in your survey list. Surveys may be present because they have been deployed to your organisation from a parent organisation (such as a Region), or because they remain from previous activities: surveys that you or previous users have created for your organisation.
[[IMAGE:3_BPCSurveyManagerWCSurveyListScreenPNA.jpg]]
==The Survey List Actions==
If you have surveys present look at the surveys in your list. You will note that they have actions listed including:
# "Edit" - This allows you to edit certain presentational aspects of the survey such as the enquiry email address, the logo graphic, invitation text, help, etc. For deployed surveys (like the LSS) you cannot change the questions in the survey, but for other surveys you can add, remove or change questions, etc. Unless you wish to change the default appearance of a deployed survey, you do not need to use this action.
# "Delete" - While a survey is in draft mode, it can be deleted from your organisation. Once the survey is activated or receives its first responses, the delete action will no longer be visible.
# "Manage" - This is the main action you will use. It enables publication of the survey to responders, sending of invitations, viewing of reports, and general management of the survey. For providers using email invitations exclusively, this will be the only action in which you are interested.
# "Data Entry" - The data entry action enables the entry of survey responses collected on hardcopy or by telephone/interview. A survey administrator or data entry account holder can enter the survey responses by selecting from the list of published responders.
# "Make Template" - This action saves your current survey as a template that can then be used to build new fully editable versions of the survey. Both the original survey used for the template and surveys produced from the template remain independent.
==Creating a Survey==
Below the survey list, you will find a "Create a New Survey" button. This button allows you to create new surveys for publication to groups of responders. To learn about creating surveys go to [[yy]]
==Activating your Portal==
If you wish to use the portal (or even think you might use it) you can activate it by ticking the "Activate portal" checkbox. The system will invent a password for use with the portal, but you are free to change it.
==The Menu Options==
# Change Login - displays the login screen. Primarily for the use of admin users and others with membership of multiple organisations.
# Survey List - displays this screen, your current survey list.
# Manage Users - display, edit or create the users of this organisation. You would use this to create data entry users. While you can create users for assignment to a survey in this area, the V6 survey manager web client favours creating responders specifically for a survey. V7 simplifies the assignment of existing users to a survey.
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
aded5c2a439c08ea26e87216cd1721d5e03267f9
Main Page
0
1
509
2010-12-22T12:47:43Z
Bishopj
1
wikitext
text/x-wiki
='''The BPC RiskWiki'''=
__NOTOC__
'''''SPONSORED BY:'''''
[[Image:BPCTitle75PERC.jpg]]
{|width="100%"
|-width="100%"
|
<table align=left style="background-color:#FFEBCD;margin-right:0.9em" cellpadding="2" cellspacing="1" >
<tr>
<td>
==Quick Index==
* [[Contents]]
*'''Articles about BPC Software Systems'''
** [[BPC RiskManager Software Suite|BPC RiskManager]]
** [[BPC SurveyManager - Overview|BPC SurveyManager]]
** [[BPC RiskManager Frequently Asked Questions]]
** [[Bishop Phillips - Software Library Reference for Developers]]
*'''Articles about Governance Function Business Methods'''
** [[Internal Audit]]
** [[Risk Management]]
** [[Managing Risk in Mergers & Acquisitions]]
*'''Articles about General Management Methods'''
** [[Business Process Reengineering]]
** [[Report Writing]]
*'''Articles about Virtual Worlds'''
** [[Virtual Learning Systems]]
*'''About The RiskWiki'''
** [[About The RiskWiki]]
** [[Contributors]]
</td>
</tr>
</table>
==Introduction to the RiskWiki==
This wiki is sponsored by Bishop Phillips Consulting (http://www.bishopphillips.com/) for the education, use and enjoyment of our clients, educators, the public and professionals involved in management consulting and risk advisory, compliance, internal audit, insurance claims management, safety, governance and risk analysis industries. It provides reference articles on management, risk and risk related functions including: Risk Management, Internal Audit, Governance, Compliance, and Process Reengineering, etc.
The RiskWiki is based on the articles, methods, manuals and papers of primarily three firms: Bishop Phillips Consulting P/L, Stanton Consulting Partners and Bishop Finance P/L. These firms are contributing a large body of work amassed over many years' experience with hundreds of clients. The project to convert and upload much of our BPC software help & manuals, our extended body of consulting, risk and internal audit methods and models, and our education and research materials is a large and time-consuming one, so the RiskWiki content changes frequently and will do so for the foreseeable future.
With the exception of software documentation, and those additional documents marked otherwise, all written material on this site may be used freely by readers for any purpose including reproduction, subject only to the retention of moral rights by the authors. Some articles may include images for which additional permission may be required prior to reproduction. Software documentation may be duplicated in hard-copy for internal use by registered users of the systems with current maintenance agreements. Other uses of software systems documentation will be considered on written request.
==Things to See in The RiskWiki==
===BPC RiskManager===
*'''''Are you looking for BPC RiskManager Documentation or to learn more about the software?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:BPC_RiskManager_V6261_Main_Screen.jpg|100px|link=BPC RiskManager Software Suite]]
</div>Bishop Phillips supplies [[BPC RiskManager Software Suite|the BPC RiskManagement suite of governance software]] that provides a complete governance solution across risk management, controls management, compliance management, insurance management, claims management, incident & hazard management, audit risk management, governance document management and survey generation and management. The system can be installed in configurations ranging from single-user to very large scale enterprise configurations.
The system is particularly suited to managing and reporting on the risk and compliance management tasks of government agencies, whole of government, special project, not-for-profits, insurance providers, service industries, utilities, and tertiary education sectors. You will find an extensive body of information covering [[BPC RiskManager Software Suite|technical, administration and user level tasks here]].
If you have questions they may be answered in our [[BPC RiskManager Frequently Asked Questions|frequently asked questions]].
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: auto; padding-left:10px; padding-right:10px;" >
{|align="left" width="100%"
|- style="background-color:#FFEBCD; padding-bottom:10px;" width="100%"
|[[BPC RiskManager Frequently Asked Questions|'''Frequently asked Questions About BPC RiskManger''']]
|-width="100%"
|
<div class="didyouknow2" STYLE="height: 400px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-top:20px; padding-bottom:20px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=1000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;">
</div>
|- style="background-color:#FFEBCD; width:100%;"
|'''Featured Article...'''
|-width="100%"
|
<div class="didyouknow2" STYLE="height: 400px; border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px; padding-top:20px; padding-bottom:20px; " >
{{#dpl: includepage=*
|includemaxlength=4000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=Featured Article
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%| Read More..]],\n
}}
</div>
<div style="clear: both;">
</div>
|}
</div>
===BPC SurveyManager===
*'''''Are you looking for BPC SurveyManager Documentation or to learn more about the software?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:BPCSurveyManager_DTCV7_SurveyEdit_Screen.jpg|link=BPC SurveyManager - Overview|100px]]
</div>Bundled with the BPC RiskManager suite and also supplied in both hosted and installed forms, the BPC SurveyManager software solution is an outstandingly versatile interactive web page generation engine using a survey model as the design and data storage paradigm. While excelling at survey creation and management, the software is powerful enough to build conventional data-input web pages. The full [[BPC SurveyManager - Overview|technical and SM language programming documentation is available from here]].
===Research into Virtual Worlds in Business & Education===
*'''''Are you looking for our virtual Learning research papers?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:Second_Life_042.jpg|link=Virtual Learning Systems|100px]]
</div>Through our Virtual Worlds research group - "Waisman Learning Systems" - we do extensive work in the development of virtual learning and business spaces in SecondLife, and undertake considerable formal research into the application of Virtual Worlds to learning. You will find technical and text book material in our [[Virtual Learning Systems|Virtual World Learning Systems pages]]. There is an extensive overview of the literature and history of virtual worlds, a very large bibliography, details of our in-world networked lecture theatre control systems and lecture delivery systems, and complete documentation of an extensive academic study, undertaken by our WLS team, into the effectiveness of different approaches to delivering course material in 3D virtual worlds at achieving learning outcomes.
You will find an extensive reading list and bibliography of works covering virtual worlds and virtual reality concepts, history, ideas, related technologies, and application in learning as well as relevant papers on learning taxonomies and teaching concepts relevant to [[VirtualWorldLearningReferences|virtual world learning systems here]].
===Internal Audit and Management Science===
*'''''Are you heading up an Internal Audit Team or learning internal audit methods?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:ALSBA.png|link=Internal Audit|100px]]
</div>If yes, you will find complete enterprise level internal audit methods and manuals on this site cross linked to our other management papers. The internal audit manuals cover everything from managing the audit team through planning the audit program to the detail of designing the audit, conducting interviews and undertaking the controls analysis; to reporting the results. Everything you are likely to need to [[Internal Audit|manage and train an internal audit team is here]].
*'''''Are you a manager, management consultant or student of Management Science?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:BPRAnalyticStructure.png|link=Category:Management Science|100px]]
</div>You will find articles covering topics of general management and process management methods in the RiskWiki, including the detailed theory and practice of planning, process re-engineering, control theory and our proven theories in stakeholder network organisation modelling. The work here is generally unique to this site. All methods have been used extensively and effectively in practice. Start here with [[Business Process Reengineering|process engineering]].
*'''''Are you managing a merger or an acquisition?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[Image:MnA_WhyMerge.jpg|link=Managing Risk in Mergers & Acquisitions|100px]]
</div>Take a look [[Managing Risk in Mergers & Acquisitions|here first and learn about the risks]] in mergers and acquisitions and successful strategies for managing them from our team, who have been through it successfully from both sides of the equation multiple times.
|}
==Take A Random Look At The RiskWiki==
{|width="100%"
|- style="background-color:#FFEBCD;" width="100%"
|'''From the Vault of the BPC RiskWiki...'''
|-
|
<div class="didyouknow" width="100%" STYLE="height: 400px;
border: thin solid black; display: block; padding-left:10px; padding-right:10px; overflow: auto;" >
{{#dpl: namespace=
|includepage=*
|includemaxlength=1000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;">
</div>
|}
fe11cdf09d2919b101f866094f5422febc55b2fd
Report Writing
0
294
343
2010-12-22T12:56:59Z
Bishopj
1
wikitext
text/x-wiki
==About The Author==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2007 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==About This Document==
This paper complements the Internal Audit and Management Consulting guides and discussions throughout the RiskWiki. It presents a brief guide to issues of style and presentation in writing up findings generally, and with very few exceptions applies universally to consultant and management reports (as well as to Internal Audit Reports).
Texts used as the basis for some of the views presented in this document and worthy of further exploration include:
* The Penguin Working Words (Penguin 1993)
* Fowler's Modern English Usage 2nd Edition (Oxford University Press 1965)
* Oxford Dictionary (Oxford University Press)
* Style Manual 4th Edition (Australian Government Press Service 1988)
* Practical English Usage - Michael Swan (Oxford University Press 1980)
* The Cambridge Encyclopedia of Language - David Crystal (Cambridge University Press 1987)
* Deloitte Internal Audit Method, Volume 6 - Report Writing - J Bishop & J Crawford (DTT 1992-3)
* Stanton Consulting Partners Style Manual (J Bishop 1995)
* NAB IA Reporting Style Guide ( J Bishop -1999- & an Unknown NAB Staff Member)
* Bishop Phillips Consulting Style Manual (J Bishop 2000)
==Writing Style==
===Introduction===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="left" style="background-color:#FFFF99" >
====Bishop's Writing Rules:====
# Rule: The Passive puts people to sleep.
# Rule: Ending a sentence with a preposition is a situation up with which I will not put.
# Rule: Objects like subjects
# Rule: One point to a paragraph
# Rule: Get to the bottom line first
# Rule: Just do it - say what you mean.
# Rule: Readers don’t read
# Rule: Three sentences are company, four is a crowd
# Rule: Conjunctions can't commence (a sentence)
# Rule: Conjunction collections confuse
# Rule: Personalise people not things
# Rule: Negativity negates.
# Rule: DON'T SHOUT
# Rule: Don't plan to make a plan.
# Rule: Consistency is king
# Rule: Death is in the details.
# Rule: Pronouns need a noun
# Rule: Don't split the infinitive
# Rule: Unintroduced acronyms are antisocial
# Rule: Generalities are generally imprecise
# Rule: Let the facts carry the case.
</div>
</td>
</tr>
</table>
In written expression, a few simple rules can make the difference between clarity and confusion. Applying the rules in this section will help us both record our ideas efficiently and convey our meaning clearly.
The rules are a mix of style and traditional grammar identified over many years of reviewing and writing audit reports. We will need a rudimentary understanding of grammar to apply a number of these rules effectively.
Syntax assists semantics. Grammar defines the syntax of the language. Syntax describes the structures a sentence can follow and still be considered well formed.
Semantics is the meaning of a sentence. Syntax assists semantics by managing the flow of ideas, and distinguishing ambiguities.
Consider for a moment the classic poets' joke:
"What is this thing called love?" - The plaintive cry of a tortured heart.
"What is this thing called, love?" -The question of a curious friend on sighting a never before seen object.
One stray comma makes all the difference to the meaning of the question. In speech we use tone, rhythm, intonation and body language to convey meaning. In written expression we rely on syntax - the rules of grammar.
We cannot solve all problems of ambiguity in language with punctuation, but with a better understanding of grammar we can avoid the ambiguity in the first place. Take, for example, the sentence: "Flying saucers can be thrilling". This sentence can seemingly have a number of meanings:
# The act of flying a saucer can thrill the pilot.
# Seeing a saucer in flight can thrill the observer.
# The idea of a saucer that flies thrills.
We will see, however, that even in this situation, the judicious application of some simple rules when forming the sentence can result in clarity:
"Flying a saucer can thrill the pilot."
What has changed? We have moved from the general ("flying saucers") to the specific ("flying a saucer") (rule 20). We have also introduced a subject (the pilot) to the sentence where only the object and verb existed (rule 3) and applied plurals consistently (rule 15). Lastly applying rule 1 eliminates the problem entirely:
"A pilot can be thrilled when flying a saucer."
To understand how to do this, we need a little grammar.
Since we cannot avoid grammar if we wish to understand how best to convey our meaning, our discussion will be facilitated by first establishing the definition of a few grammatical terms. This we do in the next sub-section. Armed with a few parts of speech, we will then explore the 21 rules over the subsections thereafter.
==A Grammar Crash Course==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteCavemen.png]]
</div>
</td>
</tr>
</table>
===Subject, Verb and Object===
"When nine hundred years you reach, look as good, you will not. Strong with the Force you are…"
Remember Yoda? Among the little, wrinkly, green "Star Wars" character's more distinctive features was "Yoda Speak". To a linguist, Yoda represents an imaginary member of a very rare and select group: races whose languages use an "Object - Subject - Verb" structure.
Understanding the difference between each of these components is the first step in mastering sentence structure.
The order of subject (S) - verb (V) - object (O) (SVO) is the classic "natural" English sentence:
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td> to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td></td><td align="center" > Object </td></tr>
</table>
Things work quite well if we think of a sentence as revolving around a verb. The subject of a verb is the noun (or noun substitute) that directs the action of the verb. The object of a verb is a noun (or noun substitute) that receives the action, is affected by the action, or about which the action is concerned. In the majority of instances a noun substitute is a pronoun.
In the example, "management" directs the action and is therefore the subject, while "credit policies" are the things being "adhered to" and are therefore the object. As a rough rule of thumb, if a noun phrase starts with a preposition it is a fair bet that the noun concerned is the object. In the example sentence, "to" is the preposition.
===Prepositions===
A preposition relates a word or phrase to another part of the sentence.
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td align="center" > to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Words that are prepositions include: to, in, into, on, upon, over, before, after, of, with.
In the example the word "to" joins (or more accurately relates) the noun phrase "credit policies" to the rest of the sentence - "Management is adhering".
A note of caution - a word that is a preposition in one case can be a conjunction in another:
* The auditor arrived before [preposition] the meeting.
* The auditor arrived before [conjunction] the meeting began.
===Conjunctions===
Conjunctions are words that join two sentences, or nouns, not in a relating role as with a preposition, but either as equals or in a superior - subordinate relationship. Examples of the former include: and, but, or, nor, whereas, however. Examples of the latter include: because, when, where, if, although.
==Active and Passive Voices==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: The Passive puts people to sleep.'''
</td></tr>
</table>
Recall the earlier discussion about subjects and objects of a sentence. We observed that the "natural" order in English is Subject - Verb - Object (SVO). This is the active voice:
<table border=1 align="center" >
<tr><td >"This firm</td><td> will no longer pay </td><td align="center" > for </td><td> Overtime."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Now we will switch the subject and the object and contrast this with the same sentence expressed in the passive voice:
<table border=1 align="center" >
<tr><td >"Overtime payments</td><td> will no longer be made </td><td align="center" > by </td><td> this firm."</td></tr>
<tr><td align="center" >Object </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Subject </td></tr>
</table>
The passive voice essentially reverses the natural order from SVO to OVS.
There is nothing grammatically wrong with either construct, but even a few lines expressed in the passive voice will bore our readers to tears. This effect arises because the passive voice places the reader at a distance from the action by making the object of the sentence the primary focus rather than the subject. Consequently, things appear to come before people.
Consider the following passage (passive voice).
"Significantly more overtime than the firm average has been incurred by roboteller maintenance staff of the Antarctic Division. A number of anomalies in the time sheets including bank branches that have been closed for many years having work recorded for them by individual staff have been revealed by a detailed analysis of the time sheets. Overtime payments will no longer be made by the Antarctic Division as a consequence."
Versus the following version (active voice):
"Roboteller maintenance staff in the Antarctic Division have incurred significantly more overtime than the firm average. An analysis of the time sheets for individual staff shows a number of anomalies, including work conducted for bank branches that have been closed for a number of years. Consequently, the Antarctic Division will no longer pay for overtime."
Which one did you have to read twice? The passive voice is difficult for the reader, taken even one paragraph at a time. Try reading it for an entire report and you will be angry, frustrated and tense (assuming you are still awake by the end of it).
The active voice involves the reader; it flows better than the passive; it encourages the writer to go straight to the point rather than inserting "filler words" whose sole purpose is to make the sentence hang together; and it reduces the chance of repetition (as apparent in the passage above). The passive voice, however, is not only difficult to read, but far more difficult (and therefore slower) to write.
In the passive voice we express the idea of the sentence before we provide the context (the subject). The direct result is that our thought pattern is reversed and our ideas do not seem to flow properly. We end up adding extra words, leaving sentences hanging in mid-air (such as when we finish with a preposition) and, most importantly, failing to convince our audience of our point because they have to try too hard to understand it.
A sentence is a "word painting" of an idea. Well formed, it is a thing of beauty and, like a great painting, a joy to behold.
==Positioning of Prepositions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Ending a sentence with a preposition is a situation up with which I will not put.'''<br>
* '''Rule: Objects like Subjects.'''
</td></tr>
</table>
One of the most common errors in everyday speech is to place the preposition at the end of a sentence. Prepositions, by definition, connect and introduce a noun phrase in a sentence. After the use of the active voice, I consider this almost the single most important trick for forming logical, easily understood sentences quickly.
Given that it has become almost standard usage to let prepositions drift to the end of a sentence, why is it such a gross error?
You will recall that we defined a preposition as a word that joins and relates a noun phrase to the rest of the sentence. It literally "leads" a phrase. Without the preposition connecting the two ideas in a sentence, the sentence appears stilted (or, as in the following example, actually seems to mean something completely different):
"Management is adhering credit policies."
Consider a few examples:
<table border=1 align="center" >
<tr><th >Bad Form</th ><th >Good Form</th ></tr>
<tr><td>Where have the auditors come from?</td><td>From where have the auditors come?</td></tr>
<tr><td>Peace is worth striving for.</td><td>It is worth striving for peace.</td></tr>
<tr><td>Firm credit policies must be complied with.</td><td>Management must comply with firm credit policies.</td></tr>
</table>
The first two examples on the left-hand side are merely untidy, but the third highlights the problem with prepositions shifting to the end of a sentence. The version on the left-hand side leaves the sentence "hanging" and, most importantly, leaves out the subject. The lack of a subject in the sentence means that it is unclear who should perform the action. (ie. Objects Like Subjects)
If we use the active voice, and lead the sentence with the subject, we will be far less likely to end up with the versions on the left hand side. Since a preposition generally connects the object to the subject, it is the habit of placing the object at the start of the sentence (i.e. the passive voice) that leads to sentences with the preposition at the end.
The second example on the right-hand side is still unsatisfactory, because it does not identify who is responsible for the action, and consequently is a generalisation - which is too easy to fault. For whom is it better to strive for peace? An arms manufacturer may see things a little differently! A better rewrite would have been: "We will benefit both materially and socially if we strive for peace."
It is easy to put prepositions in the right place if we remember to use the words "which" and "whom":
This is the day for which we have been waiting. (Not: This is the day we have been waiting for.)
These are the results of which we heard. (Not: These are the results we heard of.)
The rule (attributed to Winston Churchill) "Ending a sentence with a preposition is a situation up with which I will not put" (instead of "Ending a sentence with a preposition is a situation I will not put up with.") illustrates how to arrange the words to achieve the desired outcome. It also tends to stick in one's mind and so is easily remembered.
==The Formula For A Paragraph==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: One idea to a paragraph'''
* '''Rule: Get to the bottom line first'''
* '''Rule: Three sentences are company, four is a crowd'''
* '''Rule: Just Do It - saying what we mean.'''
* '''Rule: Readers Don't Read'''
</td></tr>
</table>
The purpose of dividing a body of writing into paragraphs is to help the reader absorb the points being made, and the writer to formulate them. These five rules are each about how to put together a paragraph that works.
A couple of simple formulae describing the sequence of sentences in a paragraph can show us what to do:
# Main Point + Counter Point + Conclusion.
# Main Point + Expansion + [Expansion].
In each case we are saying a paragraph should consist of between two and three sentences. Using more or fewer sentences in a paragraph is permissible, but to be discouraged unless it is absolutely essential for the purpose of the point. This is particularly true when we are planning to use more than three sentences. (i.e. Three sentences are company, four is a crowd)
A paragraph end forms a natural break in the flow of thought. By implication, we are asking the reader to absorb the entire paragraph as a single concept before they evaluate it in their minds. The longer the paragraph, the longer the reader must store the ideas before evaluation.
We risk losing the reader's attention and comprehension if we ask him or her to temporarily store the ideas for too long a time or to store too many ideas at once. Short, punchy paragraphs built around a single central idea help minimise waffle and assist the reader to rapidly absorb our message. (i.e. One idea to a paragraph)
<table border=0 align="left" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…short, punchy paragraphs built around a single idea…'''''</font >
</p >
</td></tr>
</table>
It is a courtesy to the reader to endeavour to minimise the work they need to do in reading our work. Opening the paragraph with the main point allows the reader to skip the rest of the sentences in the paragraph if they agree with the point. In each of the two formulae we open with the main point (i.e. we get to the bottom line first).
The difference between the forms is that in the first formula we offer a counter point in the second sentence, which is then offset by the conclusion. In this case the conclusion should be consistent with the main point (rather than the second or counter point).
In the second formula we are presenting the main point supported by one or two additional arguments. Should we need six or seven sentences to support the point, these should be presented as a dot-point list, or subdivided into two or three logical groups and split across two or three paragraphs.
<table border=0 align="right" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…the most convincing expression of an idea is usually the simplest…'''''</font >
</p >
</td></tr>
</table>
The essence of these ideas is that the most convincing expression of an idea is usually the simplest. Winning a point through confusion is, at best, a Pyrrhic victory. If the issue is important, the reader will dwell on it, and form their own opinion. If they didn't understand your arguments, you will have no effective input into the formation of their position on the matter, other than to raise it in the first place.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteSectionStructure.png]]
</div>
</td>
</tr>
</table>
The essence of newspaper journalism is that most readers will not read most of the articles in a paper or magazine completely. Consequently, from the headline down to the end of the article the item is arranged as a series of progressively more detailed "summaries" of the information. There are usually three to four layers.
The first layer is the headline, which attempts to summarise the entire issue in a few words. The second layer is the first paragraph which presents a twenty to thirty word summary of the issue. The third layer is the second, third and perhaps fourth paragraphs, which provide the full story and the fourth layer provides incidental minor details.
The purpose of the structure is to allow the readers to exit at several points when they have collected sufficient information for their interest level. The approach recognises that none of us has time to read every piece of information presented to us, and when we do we tend to skim the information for issues that are relevant to us. (ie. readers don't read)
We should design our reports so that the reader does not have to read all the way to the end to "get" the issue. We can imagine this pattern as a pyramid, with the highest level summary at the top, and progressively more detail to the bottom.
==Using Conjunctions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Conjunctions can't commence (a sentence)'''
* '''Rule: Conjunction collections confuse'''
</td></tr>
</table>
<table border=0 align="right" width="400px" style="background-color:#FFFF99;margin-left:0.9em" cellpadding="2" cellspacing="10" >
<tr><td align="left">
===The Importance of Correct Punctuation===
'''''The following two passages were written by Rowland Croucher. They illustrate neatly the importance of punctuation in written expression. Only the punctuation changes between the passages….'''''
<em>Dear Thomas,
I want a man who knows what love is all about. You are generous, kind, and thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy--will you let me be yours?
Maria
----
Dear Thomas,
I want a man who knows what love is. All about you are generous, kind and thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men, I yearn; for you, I have no feelings whatsoever. When we're apart, I can be forever happy. Will you let me be?
Yours,
Maria</em>
</td></tr>
</table>
Conjunctions are important time savers and can help the flow of ideas if used correctly, but should not be used more than once in a sentence unless splitting the sentence would detract from its meaning.
One example where two conjunctions may appear in a sentence is where the sentence contains both a list and two joined or related ideas:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan and verifying that the borrower's credit history is of sufficient standing."''
In this case the passage would be harder to follow (and perhaps even misleading) if we wrote it as:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan. The credit approval process should also verify that the borrower's credit history is of sufficient standing."''
By splitting the sentence we seem to imply that the credit history is of secondary importance to the information collected about the purpose of the loan.
These situations are generally pretty clear when they arise, but they are rare. A sentence with too many conjunctions suffers from the same problems as a paragraph with too many sentences; we have lost the reader before the end.
Some years ago Professor Manning Clark gave a Boyer lecture concerning the use of English in academic papers. One of his particular annoyances was the use of conjunctions to commence a sentence. His point was simple - a conjunction joins two sentences. If it starts the sentence it is prima-facie not joining two sentences together.
While we all recognise words like "and", "or" and "but" as conjunctions, words such as "however" and "because" are more often missed. Consider the following passage:
''"Because they operate unattended, Roboteller machines are prime targets for fraud. However, if we attach cameras to them they become leading tools in the capture of the perpetrators."''
This can be rewritten to eliminate the problem:
''"Roboteller machines are prime targets for fraud because they operate unattended. If we attach cameras to them, however, the machines become leading tools in the capture of the perpetrators."''
In rewriting the passage we also (once again) moved the subject to the start of the sentences. The "however" is redundant, and the passage can be further simplified by writing it thus:
''"Roboteller machines are prime targets for fraud because they operate unattended. The machines become leading tools in the capture of the defrauders if we attach cameras to them."''
This passage demonstrates the appropriate use of "however":
''"Overall corporate / strategic planning is adequately addressed within Premium and Private, however, management attention is required concerning:…"''
==A Few Points of Style==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Personalise people not things'''
* '''Rule: Don't plan to make a plan.'''
* '''Rule: Negativity negates.'''
* '''Rule: DON'T SHOUT'''
</td></tr>
</table>
The four rules of this subsection cover common, but minor, problems of style.
A common written mistake is for a human trait such as "need" or "requiring" to be attributed to an inanimate "thing" such that it takes on the air of an inviolate law. The practice leads to broad statements without justification and hence incomplete argument of a case. Consider:
''"The credit approvals process needs to be reviewed."''
The credit approval process can not need anything. Only living creatures can experience need. It may be appropriate for the process to be updated and management or the auditors may need this to occur, but the process can't spontaneously need such improvement of itself.
Once again we find, as with so many English language errors, that the problem has arisen because of a subject / object mix-up. In the example the credit approval process, which should have been the object, has been transformed into the subject. When we rewrite it the way it should have been, we find that we are missing a significant part of the message that should have been conveyed (and is now inserted in the rewrite):
''"Management needs to review the credit approvals process focusing on the weaknesses identified in the finding."''
The new version both identifies who should perform the action and the guidelines they should follow. It also highlights another important rule (not really one of grammar but one of service quality); the recommendation as written is essentially a plan to make a plan.
Either management should make the changes identified, or they should not. If we merely request them to review the situation we are delivering no committed improvement for the current situation to the Board. We should not say "review" when we mean "implement":
''"Management should implement the identified corrections to rectify the weaknesses in the credit approvals process identified in this report."''
Finally, we briefly consider two ad-hoc matters. The first is to do with capitalisation, while the second concerns the use of negatives.
Capitalising Every Word In a Sentence or even a Random selection Of a few words does not serve to help our presentation. Excessive capitalisation is affronting to the reader. In internet terminology this is akin to SHOUTING AT THE READER. Capitals belong at the beginning of a sentence or when naming a person, place or the title of a "thing". Capitalisation is rarely appropriate in the middle of a sentence.
Secondly, sentences should be expressed in the positive rather than the negative wherever possible. It is a standard sales technique to ask a prospect a question framed in the direction one wishes the answer to go:
"Would you prefer that my quote is open ended?"
As opposed to:
"Would you prefer that my quote is fixed?"
People tend to immediately think in sympathy with the speaker (at least until he or she threatens them with capitals!). If we express our sentences as negatives, not only do we lead the reader to naturally disagree (because they have been "trained" to say no by our text), but we also create a sea of double negatives, which may or may not imply a positive.
==Carrying the Case==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Death is in the details.'''
* '''Rule: Generalities are generally imprecise'''
* '''Rule: Let the facts carry the case.'''
</td></tr>
</table>
Much of what has been written in this paper goes to the issue of precision. In consulting and audit papers, accuracy of detail can determine the credibility attached to the consultant's/auditor's findings as well as the advice offered. The best strategy is to let the facts, clearly articulated, carry the argument.
The facts should not be embellished with emotional and vague descriptive words such as "large", "most", "substantially". We should state the quanta instead - "70%", "five out of eight", etc.
Try to avoid non-specific or vague words and expressions. This is especially true of quantities and times.
'''Examples'''
<table align=center >
<tr >
<th>Non-specific or vague</th><th> </th><th>Could mean or become</th>
</tr>
<tr>
<td>increased volumes</td><td></td><td>300 or more</td>
</tr>
<tr>
<td>drop in profit</td><td></td><td>profit was 20% lower</td>
</tr>
<tr>
<td>frequently</td><td></td><td>daily/weekly/monthly</td>
</tr>
<tr>
<td>rarely</td><td></td><td>once a year/decade</td>
</tr>
<tr>
<td>recently</td><td></td><td>yesterday/last week/month</td>
</tr>
<tr>
<td>shortly</td><td></td><td>tomorrow/next week/month</td>
</tr>
</table>
In the absence of statistical support for a finding, generalisation emerges. The discussion of the matter with the client becomes sidetracked over the meaning of words like "large" or "significant", rather than focussing on the issue identified and the solution required by the adviser.
Linked to these ideas is the form of words used to convey your point. Never use a long word where a short word will do. Long words may be interpreted by the reader as a deliberate attempt to mask puerility with false grandeur, because the underlying point is decrepit or flawed. (See what I mean?)
Having said that, do not be frightened of using a long or technically correct word simply because it has more than one syllable. You can always provide a glossary of terms at the start of the document (and frequently that is a good idea for even some commonly misused terms). If your reader needs to get a little more educated to understand your work, then fine.
Writing is not about stooping to the lowest common denominator, but it is about communicating your point accurately and effectively. That is: you must actually get your point across; not merely make your reader feel inadequate. There is no point in being right, if nobody realises.
The point, then, is to use the shortest possible ''correct'' word - not merely the shortest word.
As a rule-of-thumb, if your reader has to seek out the meaning of more than two or three words in your report you have probably lost them...and they will probably resent you for it. Know your audience, prepare your audience for your language, and make sure they don't feel stupid by the end of it.
The customer for a consulting or audit report needs to be assured that adopting recommendations based upon the consultant's finding will add value to the business.
Auditors (particularly) need to go well beyond describing what is wrong. They need to explain the meaning of any finding: how it affects the organisation’s bottom line; the potential cost of not addressing a problem; the likelihood of exposure or error.
Likewise, consultants need to go well beyond simply parroting back the latest theory they discovered in the bottom of a glass of scotch or on the back of the cereal packet that morning. Consultants need to do a little more of the 'audit' thing and actually analyse what the issue really is before arguing convincingly for change.
Wherever possible in all such instances, be specific. Numerous, several, many are words lacking in specifics. If this flies in the face of other advice to be brief, so be it.
The auditor/consultant should attempt to quantify the financial impact of a finding. While it may not be possible to arrive at a figure with mathematical precision, an informed guess can help management make a decision.
To be specific, following are some examples of content.
'''Poor'''
Differences exist in the cost of processing biscuit requisitions in various regions.
'''Better'''
The cost of processing biscuit requisitions differs from region to region. Vancouver can process a cheque for AUD 8 cents while the equivalent in Australia is AUD 15 cents. Australia might save up to AUD 15 million by adopting Vancouver’s methods.
'''Poor'''
There is a lack of adequate management information to support activities and to facilitate meaningful comparisons between regional units.
'''Better'''
Management information is inadequate: staff costs are not analysed for benchmarking across various offices; calculation of product profitability does not include processing costs; and there is no allocation of fees and interest income by product type.
Finally, '''summaries''' are meant to be just that: a tight condensation of the main point or points of an issue. Be ruthless in getting rid of perhaps interesting but non-essential pieces of additional information – but retain the specifics.
==Tense, Pronouns and Infinitives==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Don't split the infinitive'''
* '''Rule: Consistency is king'''
* '''Rule: Pronouns need a noun'''
* '''Rule: Unintroduced acronyms are antisocial'''
</td></tr>
</table>
"To Boldly Go Where No Man Has Gone Before…" Perhaps one of the most recognised phrases in the English language, this bite of the Star Trek prime directive is also a prime example of atrocious English! This is a classic example of the split infinitive (not to mention the redundant preposition at the end of the sentence).
The directive should have read:
"Boldly To Go Where No Man Has Gone…" or less poetically, "To Go Boldly Where No Man Has Gone…"
Perhaps, it would be best as:
"Go boldly, where none have gone.."
The infinitive is the basic form of a verb, invariably commencing with "to". It generally has no subject, and according to luminaries on the subject should not be split. The reason is more stylistic than grammatical. The problem with split infinitives is more obvious when a few words are inserted between the "to" and its verb:
"The Roboteller machines are expected to really try hard to accurately and silently recognise the customer's identity."
Can be improved by:
"The Roboteller machines are expected to try really hard to recognise the customer's identity accurately and silently."
There are two common ways to avoid the split infinitive. Both are presented in the rewrite above. The first is simply to move the offending adverb after the verb, although sometimes this leads to a stilted speech pattern. The second is to move the adverb(s) to the end of the sentence, as above.
Pronouns are words like he, she and it that substitute for a noun like Jim, Phred or bank branch. The noun to which a pronoun relates is established by the context in which the pronoun is placed. Consequently, if too many pronouns are used together it becomes very difficult to determine for which noun an individual pronoun substitutes. As a general rule the target noun should immediately precede its related pronoun and be refreshed at least every two pronouns.
Similarly, an acronym (an abbreviation substituting for a noun or phrase) should be immediately preceded, the first time it is used, by the originating word or phrase. For example:
"The National Australia Bank (NAB) is a large and wonderful establishment. The NAB has an effective and happy audit team."
A completely unrelated matter (but grouped here for convenience) is that of consistency in the use of plurals and tense. It should be apparent to all authors that the use of the singular in a sentence should be reflected consistently throughout the rest of the sentence. It may be less obvious that the same rule applies to verb tense.
If we express a verb in one tense, such as the present continuous as in "I am having a good day", the balance of the argument should normally be presented in the same tense. This is not a strict rule, because there will be situations in which a finding will relate a historic situation in the lead sentence, while the discussion relates an assessment that is in the present tense.
It is reasonable to say that within a sentence changes in tense will generally create confusion, unless separated by a conjunction. For example:
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures were not obtained at all times."
Not
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures are not obtained at all times."
The following would, however, be OK because the first part states a continuing state, while the latter part describes a historic observation relating to the first situation:
"In Antarctic Division wire transfer requests are accepted via e-mail and customer signatures were not obtained at all times."
Agreement of subject and verb: A singular subject demands a singular verb; a plural one demands a plural verb. Many such problems are caused by long sentences overloaded with adjectives and subordinate clauses where the subject is separated from its verbs. This is another reason for keeping sentences short.
Sometimes the rule is not immediately obvious, as in the case of "none": "none were" should be "none was" (none = not one or no one).
'''Example:''' None of us is perfect.
==Confusing Words==
These words are often confused
* Affect (to impact upon, to assume) / effect (to bring about a change in)
* Object (the purpose)/ objective (the point of an exercise - usually military)
* Idol (a religious artefact, or object of worship) / Idyll (an imaginary ideal, or pastoral setting) / Idle (lazy, not in motion)
* Whom (the objective form of the relative pronoun) / who (the subjective form of the relative pronoun)
===A note about affect & effect===
A frequent source of error is confusion in the use of the similar-sounding words affect, affected, effect and effected, and of continual and continuous.
A cause for confusion is that affect is always a verb while effect can be either a noun or a verb. Both continual and continuous are adjectives.
Affect is a verb meaning to influence. Effect as a verb means to bring about; as a noun it is equivalent to the word result.
The following represent correct usage.
Examples:
* Errors in computing affected the accuracy of the result.
* The effect of errors in computing was to produce an inaccurate result.
* Smoking cigarettes may affect your lungs.
* Giving up smoking had no effect on her general health.
* I didn’t finish the report because of continual telephone interruptions.
* Lights are left on in traffic tunnels to provide continuous illumination.
===A note about "due to"===
"Due to" is often used in the sense of through, because of or owing to. Mostly those alternatives are to be preferred, but it is correct to use "due to" in the sense of being attributable to.
'''Example:''' The plane crash was due to bad visibility.
Don’t rely on your computer’s spellchecker for advice on grammar or correct spelling. Some systems are misleading. For example, you may be advised to change personal to personnel (or the other way round).
===A note about who & whom===
"Captain Kirk is the man whom the federation pays to fly the Enterprise." (Whom is the object of pays - the pronoun affected by the action of payment)
And
"Captain Kirk is the man who we think flies the Enterprise." (Who is the subject of flies, not the object of think).
==Punctuation==
Punctuation matters.
* "What is this thing called love?" (As in: Let me count the ways...)
* "What! Is this thing called love?" (As in: Let me out of here...)
* "What is this thing called, love?" (As in: OMG! You are not coming near me with that!)
===Comma===
Used when essential for clarity or to indicate a small interruption in continuity of thought. Short sentence construction reduces the need for commas.
===Semicolon===
Using a semicolon indicates a pause greater than a comma but less than a colon or full stop. Often a semicolon helps to alert the reader to an alternative or compensating thought.
'''Example:''' ''The risk of lost muffins was high; however, quick action averted this crisis.''
Semicolons should be used at the end of each line in a series of bullet points as an alternative to commas (see later).
'''Example:'''
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months;
* Schedule extra training for these and permanent staff;
* Upgrade software in the Biscuit Dispensing Machine;
* Simplify the standard form used for requisitioning for biscuits from the kitchen from ten pages to five; and
* Remove the requirement for VP Supply, VP HR, and CEO counter signing of all biscuit requisitions.
</em>
===Colon===
The colon is used to introduce a quotation, summary, conclusion or list of bullet points (as in the example above); or to introduce a list within a sentence.
'''Example:'''
''The report contains the following sections: employment, training, promotion, legal compliance, relations with other departments.''
===Full stop===
(Period in U.S. usage)
As well as indicating the end of a sentence, full stops are used in some abbreviations. It has become common for periods to be omitted from word abbreviations. We counsel against such a style: with the plethora of acronyms and technical jargon in today's language, using the period to signal that a word is an abbreviation of a possibly familiar word, rather than a technical term unknown to the reader, adds to clarity.
Where a bulleted list includes points that have more than one sentence, it is preferable to separate the points with full stops, not semi-colons as set out in the previous example.
Example:
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months. Qualifications include large appetites and general slothfulness. It is estimated that salaries will be approximately $13,000 per month each plus biscuits.
* Schedule extra training for these and permanent staff. It is anticipated the training officer will need to allocate three hours weekly to the task.
* Upgrade software . . . (etc)
</em>
Note that where a full stop is used in a dot-point list, no conjunction is used to join the last two items.
Regardless of which dot-point separator is chosen, it must be used consistently throughout the list and ideally the document.
===Hyphen===
General usage previously demanded that a hyphen be used if a prefix or suffix had the same letter as the word to which it was attached. So cooperate and coordinate generally were spelt co-operate and co-ordinate; hyphens in these instances are unnecessary. While reinforce and react are other examples where hyphens are not needed, sometimes a hyphen provides a warning that a word should not be read as a single syllable (e.g. re-use). Words formed by using the prefix non- should nearly always be hyphenated (e.g. non-compliant, non-aligned) as with some words prefixed by pre- (e.g. pre-existing).
===Apostrophe===
Used to indicate possession or the omission of letters in a contraction.
'''Examples'''
<em>
* Bill’s car was taken to the wreckers.
* Bill hasn’t had time to replace his car yet.
</em>
There is often confusion about its and it’s. The simple test is whether the construction of a sentence means it is (or it has etc). If so, it’s is a contraction and needs an apostrophe; if not, its is a pronoun and needs no apostrophe. (Warning: Don’t get fooled by some computer spellchecking systems which get this wrong.)
A rough rule of thumb: if we are using "it" in the possessive sense (as in "its red tyre"), leave out the "'".
'''Examples'''
<em>
* It’s been a long time between drinks.
* The engine was tuned but its vibration wasn’t greatly reduced.
</em>
===Ellipses===
An ellipsis indicates that words have been omitted from a quotation and is represented by three full stops separated by spaces.
'''Example'''
''Now is the time . . . to come to the aid of the party.''
===Quote marks===
These should not be used for emphasis. Use bold type or italic instead. Use quotation marks only when you are quoting or, after very long consideration of alternatives, when you are using a word or phrase you consider less than ideal for the situation.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
{{BackLinks}}
</noinclude>
==About The Author==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2007 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==About This Document==
This paper complements the Internal Audit and Management Consulting guides and discussions throughout the RiskWiki. It presents a brief guide to issues of style and presentation in writing up findings generally, and with very few exceptions applies universally to consultant and management reports (as well as to Internal Audit Reports).
Texts used as the basis for some of the views presented in this document and worthy of further exploration include:
* The Penguin Working Words (Penguin 1993)
* Fowler's Modern English Usage 2nd Edition (Oxford University Press 1965)
* Oxford Dictionary (Oxford University Press)
* Style Manual 4th Edition (Australian Government Press Service 1988)
* Practical English Usage - Michael Swan (Oxford University Press 1980)
* The Cambridge Encyclopedia of Language - David Crystal (Cambridge University Press 1987)
* Deloitte Internal Audit Method, Volume 6 - Report Writing - J Bishop & J Crawford (DTT 1992-3)
* Stanton Consulting Partners Style Manual (J Bishop 1995)
* NAB IA Reporting Style Guide ( J Bishop -1999- & an Unknown NAB Staff Member)
* Bishop Phillips Consulting Style Manual (J Bishop 2000)
==Writing Style==
===Introduction===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="left" style="background-color:#FFFF99" >
====Bishop's Writing Rules:====
# Rule: The Passive puts people to sleep.
# Rule: Ending a sentence with a preposition is a situation up with which I will not put.
# Rule: Objects like subjects
# Rule: One point to a paragraph
# Rule: Get to the bottom line first
# Rule: Just do it - say what you mean.
# Rule: Readers don’t read
# Rule: Three sentences are company, four is a crowd
# Rule: Conjunctions can't commence (a sentence)
# Rule: Conjunction collections confuse
# Rule: Personalise people not things
# Rule: Negativity negates.
# Rule: DON'T SHOUT
# Rule: Don't plan to make a plan.
# Rule: Consistency is king
# Rule: Death is in the details.
# Rule: Pronouns need a noun
# Don't split the infinitive
# Rule: Unintroduced acronyms are antisocial
# Rule: Generalities are generally imprecise
# Rule: Let the facts carry the case.
</div>
</td>
</tr>
</table>
In written expression, a few simple rules can make the difference between clarity and confusion. Applying the rules in this section will help us both record our ideas efficiently and convey our meaning clearly.
The rules are a mix of style and traditional grammar identified over many years of reviewing and writing audit reports. We will need a rudimentary understanding of grammar to apply a number of these rules effectively.
Syntax assists semantics. Grammar defines the syntax of the language: the structures a sentence can follow and still be considered well formed.
Semantics is the meaning of a sentence. Syntax assists semantics by managing the flow of ideas and resolving ambiguities.
Consider for a moment the classic poets' joke:
"What is this thing called love?" - The plaintive cry of a tortured heart.
"What is this thing called, love?" -The question of a curious friend on sighting a never before seen object.
One stray comma makes all the difference to the meaning of the question. In speech we use tone, rhythm, intonation and body language to convey meaning. In written expression we rely on syntax - the rules of grammar.
We cannot solve all problems of ambiguity in language with punctuation, but with a better understanding of grammar we can avoid the ambiguity in the first place. Take, for example, the sentence: "Flying saucers can be thrilling". This sentence can seemingly have a number of meanings:
# The act of flying a saucer can thrill the pilot.
# Seeing a saucer in flight can thrill the observer.
# The idea of a saucer that flies thrills.
We will see, however, that even in this situation, the judicious application of some simple rules when forming the sentence can result in clarity:
"Flying a saucer can thrill the pilot."
What has changed? We have moved from the general ("flying saucers") to the specific ("flying a saucer") (rule 20). We have also introduced a subject (the pilot) to the sentence where only the object and verb existed (rule 3) and applied plurals consistently (rule 15). Lastly, applying rule 1 eliminates the problem entirely:
"A pilot can be thrilled when flying a saucer."
To understand how to do this, we need a little grammar.
Since we cannot avoid grammar if we wish to understand how best to convey our meaning, our discussion will be facilitated by first establishing the definition of a few grammatical terms. This we do in the next sub-section. Armed with a few parts of speech we will then explore the 21 rules over the subsections thereafter.
==A Grammar Crash Course==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteCavemen.png]]
</div>
</td>
</tr>
</table>
===Subject, Verb and Object===
"When nine hundred years you reach, look as good, you will not. Strong with the Force you are…"
Remember Yoda? Among the little, wrinkly, green "Star Wars" character's more distinctive features was "Yoda Speak". To a linguist, Yoda represents an imaginary member of a very rare and select group: races with languages that use an "Object - Subject - Verb" structure.
Understanding the difference between each of these components is the first step in mastering sentence structure.
The order of subject (S) - verb (V) - object (O) (SVO) is the classic "natural" English sentence:
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td> to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td></td><td align="center" > Object </td></tr>
</table>
Things work quite well if we think of a sentence as revolving around a verb. The subject of a verb is the noun (or noun substitute) that directs the action of the verb. The object of a verb is a noun (or noun substitute) that receives the action, is affected by the action, or about which the action is concerned. In the majority of instances a noun substitute is a pronoun.
In the example, "management" directs the action and is therefore the subject, while "credit policies" are the things being "adhered to" and are therefore the object. As a rough rule of thumb, if a noun phrase starts with a preposition it is a fair bet that the noun concerned is the object. In the example sentence, "to" is the preposition.
===Prepositions===
A preposition relates a word or phrase to another part of the sentence.
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td align="center" > to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Words that are prepositions include: to, in, into, on, upon, over, before, after, of, with.
In the example the word "to" joins (or more accurately relates) the noun phrase "credit policies" to the rest of the sentence - "Management is adhering".
A note of caution - a word that is a preposition in one case can be a conjunction in another:
* The auditor arrived before [preposition] the meeting.
* The auditor arrived before [conjunction] the meeting began.
===Conjunctions===
Conjunctions are words that join two sentences, or nouns, not by relating one to the other as a preposition does, but either as equals or in a superior - subordinate relationship. Examples of the former include: and, but, or, nor, whereas, however. Examples of the latter include: because, when, where, if, although.
==Active and Passive Voices==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: The Passive puts people to sleep.'''
</td></tr>
</table>
Recall the earlier discussion about subjects and objects of a sentence. We observed that the "natural" order in English is Subject - Verb - Object (SVO). This is the active voice:
<table border=1 align="center" >
<tr><td >"This firm</td><td> will no longer pay </td><td align="center" > for </td><td> Overtime."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Now we will switch the subject and the object and contrast this with the same sentence expressed in the passive voice:
<table border=1 align="center" >
<tr><td >"Overtime payments</td><td> will no longer be made </td><td align="center" > by </td><td> this firm."</td></tr>
<tr><td align="center" >Object </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Subject </td></tr>
</table>
The passive voice essentially reverses the natural order from SVO to OVS.
There is nothing grammatically wrong with either construct, but even a few lines expressed in the passive voice will bore our readers to tears. This effect arises because the passive voice places the reader at a distance from the action by making the object of the sentence the primary focus rather than the subject. Consequently, things appear to come before people.
Consider the following passage (passive voice).
"Significantly more overtime than the firm average has been incurred by roboteller maintenance staff of the Antarctic Division. A number of anomalies in the time sheets including bank branches that have been closed for many years having work recorded for them by individual staff have been revealed by a detailed analysis of the time sheets. Overtime payments will no longer be made by the Antarctic Division as a consequence."
Versus the following version (active voice)
"Roboteller maintenance staff in the Antarctic Division have incurred significantly more overtime than the firm average. An analysis of the time sheets for individual staff shows a number of anomalies, including work conducted for bank branches that have been closed for a number of years. Consequently, the Antarctic Division will no longer pay for overtime."
Which one did you have to read twice? The passive voice is difficult for the reader taken even one paragraph at a time. Try reading it for an entire report and you will be angry, frustrated and tense (assuming you are still awake by the end of it).
The active voice involves the reader; it flows better than the passive; it encourages the writer to go straight to the point rather than inserting "filler words" whose sole purpose is to make the sentence hang together; and it reduces the chance of repetition (as apparent in the passage above). The passive voice is not only difficult to read, but also far more difficult (and therefore slower) to write.
In the passive voice we express the idea of the sentence before we provide the context (subject). The direct result is that our thought pattern is reversed and our ideas do not seem to flow properly. We end up adding extra words, leaving sentences hanging in mid air (such as when we finish with a preposition) and, most importantly, failing to convince our audience of our point because they have to try too hard to understand it.
A sentence is a "word painting" of an idea. Well formed it is a thing of beauty and, like a great painting, a joy to behold.
==Positioning of Prepositions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Ending a sentence with a preposition is a situation up with which I will not put.'''<br>
* '''Rule: Objects like Subjects.'''
</td></tr>
</table>
One of the most common errors in everyday speech is to place the preposition at the end of a sentence. Prepositions, by definition, connect and introduce a noun phrase in a sentence. After the use of the active voice, I consider this almost the single most important trick to forming logical, easily understood sentences quickly.
Given that it has become almost standard usage to let prepositions drift to the end of a sentence, why is it such a gross error?
You will recall that we defined a preposition as a word that joins and relates a noun phrase to the rest of the sentence. It literally "leads" a phrase. Without the preposition connecting the two ideas in a sentence, the sentence appears stilted (or, as in the following example, actually seems to mean something completely different):
"Management is adhering credit policies."
Consider a few examples:
<table border=1 align="center" >
<tr><th >Bad Form</th ><th >Good Form</th ></tr>
<tr><td>Where have the auditors come from?</td><td>From where have the auditors come?</td></tr>
<tr><td>Peace is worth striving for.</td><td>It is worth striving for peace.</td></tr>
<tr><td>Firm credit policies must be complied with.</td><td>Management must comply with firm credit policies.</td></tr>
</table>
The first two on the left-hand side are merely untidy, but the third highlights the problem with prepositions shifting to the end of a sentence. The version on the left-hand side leaves the sentence "hanging" and, most importantly, leaves out the subject. The lack of a subject in the sentence means that it is unclear who should perform the action. (i.e. objects like subjects)
If we use the active voice, and lead the sentence with the subject, we will be far less likely to end up with the versions on the left hand side. Since a preposition generally connects the object to the subject, it is the habit of placing the object at the start of the sentence (i.e. the passive voice) that leads to sentences with the preposition at the end.
The second example on the right hand side is still unsatisfactory, because it does not identify the responsibility of the action, and consequently is a generalisation - which is too easy to fault. For whom is it better to strive for peace? An arms manufacturer may see things a little differently! A better rewrite would have been: "We will benefit both materially and socially if we strive for peace."
It is easy to put prepositions in the right place if we remember to use the words "which" and "whom":
This is the day for which we have been waiting. (Not: This is the day we have been waiting for.)
These are the results of which we heard. (Not: These are the results we heard of.)
The rule (attributed to Winston Churchill) "Ending a sentence with a preposition is a situation up with which I will not put" (instead of - "Ending a sentence with a preposition is a situation I will not put up with.") illustrates how to arrange the words to achieve the desired outcome. It also tends to stick in one's mind and so is easily remembered.
==The Formula For A Paragraph==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: One idea to a paragraph'''
* '''Rule: Get to the bottom line first'''
* '''Rule: Three sentences are company, four is a crowd'''
* '''Rule: Just Do It - saying what we mean.'''
* '''Rule: Readers Don't Read'''
</td></tr>
</table>
The purpose of dividing a body of writing into paragraphs is to help the reader absorb the points being made, and the writer to formulate them. These five rules are each about how to put together a paragraph that works.
A couple of simple formulae describing the sequence of sentences in a paragraph can show us what to do:
# Main Point + Counter Point + Conclusion.
# Main Point + Expansion + [Expansion].
In each case we are saying a paragraph should consist of between two and three sentences. Using more or fewer sentences in a paragraph is permissible, but to be discouraged unless it is absolutely essential for the purpose of the point. This is particularly true when we are planning to use more than three sentences. (i.e. three sentences are company, four is a crowd)
A paragraph end forms a natural break in the flow of thought. By implication, we are asking the reader to absorb the entire paragraph as a single concept before they evaluate it in their minds. The longer the paragraph, the longer the reader must store the ideas before evaluation.
We risk losing the reader's attention and comprehension if we ask him or her to temporarily store the ideas for too long a time or to store too many ideas at once. Short, punchy paragraphs built around a single central idea help minimise waffle and assist the reader to rapidly absorb our message. (i.e. one idea to a paragraph)
<table border=0 align="left" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…short, punchy paragraphs built around a single idea…'''''</font >
</p >
</td></tr>
</table>
It is a courtesy to the reader to endeavour to minimise the work they need to do in reading our work. Opening the paragraph with the main point allows the reader to skip the rest of the sentences in the paragraph if they agree with the point. In each of the two formulae we open with the main point (i.e. we get to the bottom line first).
The difference between the forms is that in the first formula we offer a counter point in the second sentence, which is then offset by the conclusion. In this case the conclusion should be consistent with the main point (rather than the second or counter point).
In the second formula we are presenting the main point supported by one or two additional arguments. Should we need six or seven sentences to support the point, these should be presented as a dot-point list, or subdivided into two or three logical groups and split across two or three paragraphs.
<table border=0 align="right" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…the most convincing expression of an idea is usually the simplest…'''''</font >
</p >
</td></tr>
</table>
The essence of these ideas is that the most convincing expression of an idea is usually the simplest. Winning a point through confusion is, at best, a Pyrrhic victory. If the issue is important, the reader will dwell on it, and form their own opinion. If they didn't understand your arguments, you will have no effective input into the formation of their position on the matter, other than to raise it in the first place.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteSectionStructure.png]]
</div>
</td>
</tr>
</table>
The essence of newspaper journalism is that most readers will not read most of the articles in a paper or magazine completely. Consequently, from the headline down to the end of the article the item is arranged as a series of progressively more detailed "summaries" of the information. There are usually three to four layers.
The first layer is the headline, which attempts to summarise the entire issue in a few words. The second layer is the first paragraph which presents a twenty to thirty word summary of the issue. The third layer is the second, third and perhaps fourth paragraphs, which provide the full story and the fourth layer provides incidental minor details.
The purpose of the structure is to allow the readers to exit at several points when they have collected sufficient information for their interest level. The approach recognises that none of us has time to read every piece of information presented to us, and when we do we tend to skim the information for issues that are relevant to us. (i.e. readers don't read)
We should design our reports so that the reader does not have to read all the way to the end to "get" the issue. We can imagine this pattern as a pyramid, with the highest level summary at the top, and progressively more detail to the bottom.
==Using Conjunctions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Conjunctions can't commence (a sentence)'''
* '''Rule: Conjunction collections confuse'''
</td></tr>
</table>
<table border=0 align="right" width="400px" style="background-color:#FFFF99;margin-left:0.9em" cellpadding="2" cellspacing="10" >
<tr><td align="left">
===The Importance of Correct Punctuation===
'''''The following two passages were written by Rowland Croucher. They illustrate neatly the importance of punctuation in written expression. Only the punctuation changes between the passages….'''''
<em>Dear Thomas,
I want a man who knows what love is all about. You are generous, kind, and thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy--will you let me be yours?
Maria
----
Dear Thomas,
I want a man who knows what love is. All about you are generous, kind and thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men, I yearn; for you, I have no feelings whatsoever. When we're apart, I can be forever happy. Will you let me be?
Yours,
Maria</em>
</td></tr>
</table>
Conjunctions are important time savers and can help the flow of ideas if used correctly, but they should not be used more than once in a sentence unless splitting the sentence would detract from its meaning.
One example where two conjunctions may appear in a sentence is where the sentence contains both a list and two joined or related ideas:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan and verifying that the borrower's credit history is of sufficient standing."''
In this case the passage would be harder to follow (and perhaps even misleading) if we wrote it as:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan. The credit approval process should also verify that the borrower's credit history is of sufficient standing."''
By splitting the sentence we seem to imply that the credit history is of secondary importance to the information collected about the purpose of the loan.
These situations are generally pretty clear when they arise, but they are rare. A sentence with too many conjunctions suffers from the same problems as a paragraph with too many sentences; we have lost the reader before the end.
Some years ago Professor Manning Clark gave a Boyer lecture concerning the use of English in academic papers. One of his particular annoyances was the use of conjunctions to commence a sentence. His point was simple - a conjunction joins two sentences. If it starts the sentence it is prima-facie not joining two sentences together.
While we all recognise words like "and", "or" and "but" as conjunctions, words such as "however" and "because" are more often missed. Consider the following passage:
''"Because they operate unattended, Roboteller machines are prime targets for fraud. However, if we attach cameras to them they become leading tools in the capture of the perpetrators."''
This can be rewritten to eliminate the problem:
''"Roboteller machines are prime targets for fraud because they operate unattended. If we attach cameras to them, however, the machines become leading tools in the capture of the perpetrators."''
In rewriting the passage we also (once again) moved the subject to the start of the sentences. The "however" is redundant, and the passage can be further simplified by writing it thus:
''"Roboteller machines are prime targets for fraud because they operate unattended. The machines become leading tools in the capture of the defrauders if we attach cameras to them."''
This passage demonstrates the appropriate use of "however":
''"Overall corporate / strategic planning is adequately addressed within Premium and Private, however, management attention is required concerning:…"''
==A Few Points of Style==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Personalise people not things'''
* '''Rule: Don't plan to make a plan.'''
* '''Rule: Negativity negates.'''
* '''Rule: DON'T SHOUT'''
</td></tr>
</table>
The four rules of this subsection cover common, but minor, problems of style.
A common written mistake is for a human trait such as "need" or "requiring" to be attributed to an inanimate "thing" such that it takes on the air of an inviolate law. The practice leads to broad statements without justification and hence incomplete argument of a case. Consider:
''"The credit approvals process needs to be reviewed."''
The credit approval process cannot need anything. Only living creatures can experience need. It may be appropriate for the process to be updated, and management or the auditors may need this to occur, but the process cannot spontaneously need such improvement of itself.
Once again we find, as with so many English language errors, that the problem has arisen because of a subject / object mix-up. In the example, the credit approval process, which should have been the object, has been transformed into the subject. When we rewrite it the way it should have been, we find that we are missing a significant part of the message that should have been conveyed (and is now inserted in the rewrite):
''"Management needs to review the credit approvals process focusing on the weaknesses identified in the finding."''
The new version identifies both who should perform the action and the guidelines they should follow. It also highlights another important rule (not really one of grammar but one of service quality): the recommendation as written is essentially a plan to make a plan.
Either management should make the changes identified, or they should not. If we merely request them to review the situation we are delivering no committed improvement for the current situation to the Board. We should not say "review" when we mean "implement":
''"Management should implement the identified corrections to rectify the weaknesses in the credit approvals process identified in this report."''
Finally, we briefly consider two ad-hoc matters. The first is to do with capitalisation, while the second concerns the use of negatives.
Capitalising Every Word In a Sentence or even a Random selection Of a few words does not serve to help our presentation. Excessive capitalisation is an affront to the reader. In internet terminology it is akin to SHOUTING AT THE READER. Capitals belong at the beginning of a sentence or when naming a person, place or the title of a "thing". Capitalisation is rarely appropriate in the middle of a sentence.
Secondly, sentences should be expressed in the positive rather than the negative wherever possible. It is a standard sales technique to ask a prospect a question framed in the direction one wishes the answer to go:
"Would you prefer that my quote is open ended?"
As opposed to:
"Would you prefer that my quote is fixed?"
People tend to think immediately in sympathy with the speaker (at least until he or she threatens them with capitals!). If we express our sentences as negatives, not only do we lead the reader to disagree naturally (because they have been "trained" to say no by our text), but we also create a sea of double negatives, which may or may not imply a positive.
==Carrying the Case==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Death is in the details.'''
* '''Rule: Generalities are generally imprecise'''
* '''Rule: Let the facts carry the case.'''
</td></tr>
</table>
Much of what has been written in this section goes to the issue of precision. In consulting and audit papers, accuracy of detail can determine the credibility attached to the consultant's/auditor's findings as well as the advice offered. The best strategy is to let the facts, clearly articulated, carry the argument.
The facts should not be embellished with emotional and vague descriptive words such as "large", "most", "substantially". We should state the quanta instead - "70%", "five out of eight", etc.
Try to avoid non-specific or vague words and expressions. This is especially true of quantities and times.
'''Examples'''
<table align=center >
<tr >
<th>Non-specific or vague</th><th> </th><th>Could mean or become</th>
</tr>
<tr>
<td>increased volumes</td><td></td><td>300 or more</td>
</tr>
<tr>
<td>drop in profit</td><td></td><td>profit was 20% lower</td>
</tr>
<tr>
<td>frequently</td><td></td><td>daily/weekly/monthly</td>
</tr>
<tr>
<td>rarely</td><td></td><td>once a year/decade</td>
</tr>
<tr>
<td>recently</td><td></td><td>yesterday/last week/month</td>
</tr>
<tr>
<td>shortly</td><td></td><td>tomorrow/next week/month</td>
</tr>
</table>
In the absence of statistical support for a finding, generalisation emerges. The discussion of the matter with the client becomes sidetracked over the meaning of words like "large" or "significant", rather than focussing on the issue identified and the solution required by the adviser.
Linked to these ideas is the form of words used to convey your point. Never use a long word where a short word will do. Long words may be interpreted by the reader as a deliberate attempt to mask puerility with false grandeur, because the underlying point is decrepit or flawed. (See what I mean?)
Having said that, do not be frightened of using a long or technically correct word simply because it has more than one syllable. You can always provide a glossary of terms at the start of the document (and frequently that is a good idea for even some commonly misused terms). If your reader needs to get a little more educated to understand your work, then fine.
Writing is not about stooping to the lowest common denominator, but it is about communicating your point accurately and effectively. That is: you must actually get your point across; not merely make your reader feel inadequate. There is no point in being right, if nobody realises.
The point, then, is to use the shortest possible ''correct'' word - not merely the shortest word.
As a rule of thumb, if your reader has to seek out the meaning of more than two or three words in your report you have probably lost them...and they will probably resent you for it. Know your audience, prepare your audience for your language, and make sure they don't feel stupid by the end of it.
The customer for a consulting or audit report needs to be assured that adopting recommendations based upon the consultant's finding will add value to the business.
Auditors (particularly) need to go well beyond describing what is wrong. They need to explain the meaning of any finding: how it affects the organisation's bottom line; the potential cost of not addressing a problem; the likelihood of exposure or error.
Likewise, consultants need to go well beyond simply parroting back the latest theory they discovered in the bottom of a glass of scotch or on the back of the cereal packet that morning. Consultants need to do a little more of the 'audit' thing and actually analyse what the issue really is before arguing convincingly for change.
Wherever possible in all such instances, be specific. "Numerous", "several" and "many" are words lacking in specifics. If this flies in the face of other advice to be brief, so be it.
The auditor/consultant should attempt to quantify the financial impact of a finding. While it may not be possible to arrive at a figure with mathematical precision, an informed guess can help management make a decision.
To be specific, following are some examples of content.
'''Poor'''
Differences exist in the cost of processing biscuit requisitions in various regions.
'''Better'''
The cost of processing biscuit requisitions differs from region to region. Vancouver can process a cheque for AUD 8 cents while the equivalent in Australia is AUD 15 cents. Australia might save up to AUD $15 million by adopting Vancouver’s methods.
'''Poor'''
There is a lack of adequate management information to support activities and to facilitate meaningful comparisons between regional units.
'''Better'''
Management information is inadequate: staff costs are not analysed for benchmarking across various offices; calculation of product profitability does not include processing costs; and there is no allocation of fees and interest income by product type.
Finally, '''summaries''' are meant to be just that: a tight condensation of the main point or points of an issue. Be ruthless in getting rid of perhaps interesting but non-essential pieces of additional information – but retain the specifics.
==Tense, Pronouns and Infinitives==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Don't split the infinitive'''
* '''Rule: Consistency is king'''
* '''Rule: Pronouns need a noun'''
* '''Rule: Unintroduced acronyms are antisocial'''
</td></tr>
</table>
"To Boldly Go Where No Man Has Gone Before…" Perhaps one of the most recognised phrases in the English language, this snippet of the Star Trek opening narration is also a prime example of atrocious English! It is a classic split infinitive (not to mention the redundant preposition at the end of the phrase).
The line should have read:
"Boldly To Go Where No Man Has Gone…" or, less poetically, "To Go Boldly Where No Man Has Gone…"
Perhaps it would be best as:
"Go boldly, where none have gone…"
The infinitive is the basic form of the verb, invariably commencing with "to". It generally has no subject and, according to luminaries on the subject, should not be split. The reason is more stylistic than grammatical. The problem with split infinitives is more obvious when a few words are inserted between the "to" and its verb:
"The Roboteller machines are expected to really try hard to accurately and silently recognise the customer's identity."
Can be improved by:
"The Roboteller machines are expected to try really hard to recognise the customer's identity accurately and silently."
There are two common ways to avoid the split infinitive. Both are presented in the rewrite above. The first is simply to move the offending adverb after the verb, although sometimes this leads to a stilted speech pattern. The second is to move the adverb(s) to the end of the sentence, as above.
Pronouns are words like he, she, it, etc that substitute for a noun like Jim, Phred or bank branch. The noun to which a pronoun relates is established by the context in which the pronoun is placed. Consequently, if too many pronouns are used together it becomes very difficult to determine for which noun an individual pronoun substitutes. As a general rule the target noun should immediately precede it's related pronoun and be refreshed at least every two pronouns.
Similarly, an acronym (an abbreviation substituting for a noun or phrase) should be immediately preceded by the originating word or phrase the first time it is used. For example:
"The National Australia Bank (NAB) is a large and wonderful establishment. The NAB has an effective and happy audit team."
A completely unrelated matter (but grouped here for convenience) is that of consistency in the use of plurals and tense. It should be apparent to all authors that the use of the singular in a sentence should be reflected continuously throughout the rest of the sentence. It may be less obvious that the same rule applies to verb tense.
If we express a verb in one tense, such as the present continuous as in "I am having a good day", the balance of the argument should normally be presented in the same tense. This is not a strict rule, because there will be situations in which a finding will relate a historic situation in the lead sentence, while the discussion relates an assessment that is in the present tense.
It is reasonable to say that within a sentence changes in tense will generally create confusion, unless separated by a conjunction. For example:
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures were not obtained at all times."
Not
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures are not obtained at all times."
The following would, however, be ok because the first part states a continuous state, while the latter part describes an historic observation relating to the first situation:
"In Antarctic Division wire transfer requests are accepted via e-mail and customer signatures were not obtained at all times."
Agreement of subject and verb: A singular subject demands a singular verb; a plural one demands a plural verb. Many such problems are caused by long sentences overloaded with adjectives and subordinate clauses where the subject is separated from its verbs. This is another reason for keeping sentences short.
Sometimes the rule is not immediately obvious, as in the case of "none": "none were" should be "none was" (none = not one or no one).
'''Example:''' None of us is perfect.
==Confusing Words==
These words are often confused
* Affect (to impact upon, to assume) / effect (to bring about a change in)
* Object (the purpose)/ objective (the point of an exercise - usually military)
* Idol (a religious artefact, or object of worship) / Idyll (an imaginary ideal, or pastoral setting) / Idle (lazy, not in motion)
* Whom (the objective form of the relative pronoun) / who (the subjective form of the relative pronoun)
===A note about affect & effect===
A frequent source of error is confusion in the use of the similar-sounding words affect, affected, effect and effected, and of continual and continuous.
A cause for confusion is that affect is always a verb while effect can be either a noun or a verb. Both continual and continuous are adjectives.
Affect is a verb meaning to influence. Effect as a verb means to bring about; as a noun it is equivalent to the word result.
The following represent correct usage.
Examples:
* Errors in computing affected the accuracy of the result.
* The effect of errors in computing was to produce an inaccurate result.
* Smoking cigarettes may affect your lungs.
* Giving up smoking had no effect on her general health.
* I didn’t finish the report because of continual telephone interruptions.
* Lights are left on in traffic tunnels to provide continuous illumination.
===A note about "due to"===
"Due to" is often used in the sense of through, because of or owing to. Mostly those alternatives are to be preferred, but it is correct to use due to in the sense of being attributable to.
'''Example:''' The plane crash was due to bad visibility.
Don’t rely on your computer’s spellchecker for advice on grammar or correct spelling. Some systems are misleading. For example, you may be advised to change personal to personnel (or the other way round).
===A note about who & whom===
"Captain Kirk is the man whom the federation pays to fly the Enterprise." (Whom is the object of pays - the pronoun affected by the action of payment.)
And
"Captain Kirk is the man who we think flies the Enterprise." (Who is the subject of flies, not the object of think).
==Punctuation==
Punctuation matters.
* "What is this thing called love?" (As in: Let me count the ways...)
* "What! Is this thing called love?" (As in: Let me out of here...)
* "What is this thing called, love?" (As in: OMG! You are not coming near me with that!)
===Comma===
Used when essential for clarity or to indicate a small interruption in continuity of thought. Short sentence construction reduces the need for commas.
===Semicolon===
Using a semicolon indicates a pause greater than a comma but less than a colon or full stop. Often a semicolon helps to alert the reader to an alternative or compensating thought.
'''Example:''' ''The risk of lost muffins was high; however, quick action averted this crisis.''
Semicolons should be used at the end of each line in a series of bullet points as an alternative to commas (see below).
'''Example:'''
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months;
* Schedule extra training for these and permanent staff;
* Upgrade software in the Biscuit Dispensing Machine;
* Simplify the standard form used for requisitioning for biscuits from the kitchen from ten pages to five; and
* Remove the requirement for VP Supply, VP HR, and CEO counter signing of all biscuit requisitions.
</em>
===Colon===
The colon is used to introduce a quotation, summary, conclusion or list of bullet points (as in the example above); or to introduce a list within a sentence.
'''Example:'''
''The report contains the following sections: employment, training, promotion, legal compliance, relations with other departments.''
===Full stop===
(Period in U.S. usage)
As well as indicating the end of a sentence, full stops are used in some abbreviations. It has become common for periods to be omitted from word abbreviations. We counsel against that style: with the plethora of acronyms and technical jargon in today's language, using the period to signal that a word is an abbreviation of a possibly familiar word, rather than a technical term unknown to the reader, adds to clarity.
Where a bulleted list includes points that have more than one sentence, it is preferable to separate the points with full stops, not semi-colons as set out in the previous example.
Example:
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months. Qualifications include large appetites and general slothfulness. It is estimated that salaries will be approximately $13,000 per month each plus biscuits.
* Schedule extra training for these and permanent staff. It is anticipated the training officer will need to allocate three hours weekly to the task.
* Upgrade software . . . (etc)
</em>
Note that where a full stop is used in a dot-point list, no conjunction is used to join the last two items.
Regardless of which dot point separator is chosen, it MUST be used consistently throughout the list and ideally the document.
===Hyphen===
General usage previously demanded that a hyphen be used if a prefix or suffix had the same letter as the word to which it was attached. So cooperate and coordinate generally were spelt co-operate and co-ordinate; hyphens in these instances are unnecessary. While reinforce and react are other examples where hyphens are not needed, sometimes a hyphen provides a warning that a word should not be read as a single syllable (e.g. re-use). Words formed by using the prefix non- should nearly always be hyphenated (e.g. non-compliant, non-aligned) as with some words prefixed by pre- (e.g. pre-existing).
===Apostrophe===
Used to indicate possession or the omission of letters in a contraction.
'''Examples'''
<em>
* Bill’s car was taken to the wreckers.
* Bill hasn’t had time to replace his car yet.
</em>
There is often confusion about its and it’s. The simple test is whether the construction of a sentence means it is (or it has etc). If so, it’s is a contraction and needs an apostrophe; if not, its is a pronoun and needs no apostrophe. (Warning: Don’t get fooled by some computer spellchecking systems which get this wrong.)
A rough rule of thumb: if we are using "it" in the possessive sense (as in "its red tyre"), leave out the "'".
'''Examples'''
<em>
* It’s been a long time between drinks.
* The engine was tuned but its vibration wasn’t greatly reduced.
</em>
===Ellipses===
An ellipsis indicates that words have been omitted from a quotation, and is represented by three full stops separated by spaces.
'''Example'''
''Now is the time . . . to come to the aid of the party.''
===Quote marks===
These should not be used for emphasis. Use bold type or italic instead. Use quotation marks only when you are quoting or, after very long consideration of alternatives, when you are using a word or phrase you consider less than ideal for the situation.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
{{BackLinks}}
</noinclude>
==About The Author==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2007 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==About This Document==
This paper complements the Internal Audit and Management Consulting guides and discussions throughout the RiskWiki. It presents a brief guide to issues of style and presentation in writing up findings generally and, with very few exceptions, applies universally to consultant and management reports (as well as to Internal Audit Reports).
Texts used as the basis for some of the views presented in this document and worthy of further exploration include:
* The Penguin Working Words (Penguin 1993)
* Fowler's Modern English Usage 2nd Edition (Oxford University Press 1965)
* Oxford Dictionary (Oxford University Press)
* Style Manual 4th Edition (Australian Government Press Service 1988)
* Practical English Usage - Michael Swan (Oxford University Press 1980)
* The Cambridge Encyclopedia of Language - David Crystal (Cambridge University Press 1987)
* Deloitte Internal Audit Method, Volume 6 - Report Writing - J Bishop & J Crawford (DTT 1992-3)
* Stanton Consulting Partners Style Manual (J Bishop 1995)
* NAB IA Reporting Style Guide (J Bishop 1999 & an Unknown NAB Staff Member)
* Bishop Phillips Consulting Style Manual (J Bishop 2000)
==Writing Style==
===Introduction===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="left" style="background-color:#FFFF99" >
====Bishop's Writing Rules:====
# Rule: The Passive puts people to sleep.
# Rule: Ending a sentence with a preposition is a situation up with which I will not put.
# Rule: Objects like subjects
# Rule: One point to a paragraph
# Rule: Get to the bottom line first
# Rule: Just do it - say what you mean.
# Rule: Readers don’t read
# Rule: Three sentences are company, four is a crowd
# Rule: Conjunctions can't commence (a sentence)
# Rule: Conjunction collections confuse
# Rule: Personalise people not things
# Rule: Negativity negates.
# Rule: DON'T SHOUT
# Rule: Don't plan to make a plan.
# Rule: Consistency is king
# Rule: Death is in the details.
# Rule: Pronouns need a noun
# Don't split the infinitive
# Rule: Unintroduced acronyms are antisocial
# Rule: Generalities are generally imprecise
# Rule: Let the facts carry the case.
</div>
</td>
</tr>
</table>
In written expression, a few simple rules can make the difference between clarity and confusion. Applying the rules in this section will help us both record our ideas efficiently and convey our meaning clearly.
The rules are a mix of style and traditional grammar identified over many years of reviewing and writing audit reports. We will need a rudimentary understanding of grammar to apply a number of these rules effectively.
Syntax assists semantics. Grammar defines the syntax of the language: syntax describes the structures a sentence can follow and still be considered well formed.
Semantics is the meaning of a sentence. Syntax assists semantics by managing the flow of ideas and distinguishing ambiguities.
Consider for a moment the classic poets' joke:
"What is this thing called love?" - The plaintive cry of a tortured heart.
"What is this thing called, love?" - The question of a curious friend on sighting a never before seen object.
One stray comma makes all the difference to the meaning of the question. In speech we use tone, rhythm, intonation and body language to convey meaning. In written expression we rely on syntax - the rules of grammar.
We cannot solve all problems of ambiguity in language with punctuation, but with a better understanding of grammar we can avoid the ambiguity in the first place. Take, for example, the sentence: "Flying saucers can be thrilling". This sentence seemingly can have a number of meanings:
# The act of flying a saucer can thrill the pilot.
# Seeing a saucer in flight can thrill the observer.
# The idea of a saucer that flies thrills.
We will see, however, that even in this situation, the judicious application of some simple rules when forming the sentence can result in clarity:
"Flying a saucer can thrill the pilot."
What has changed? We have moved from the general ("flying saucers") to the specific ("flying a saucer") (rule 20). We have also introduced a subject (the pilot) to the sentence where only the object and verb existed (rule 3) and applied plurals consistently (rule 15). Lastly applying rule 1 eliminates the problem entirely:
"A pilot can be thrilled when flying a saucer."
To understand how to do this, we need a little grammar.
Since we cannot avoid grammar if we wish to understand how best to convey our meaning, our discussion will be facilitated by first establishing the definition of a few grammatical terms. This we do in the next sub-section. Armed with a few parts of speech we will then explore the rules over the subsections thereafter.
==A Grammar Crash Course==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteCavemen.png]]
</div>
</td>
</tr>
</table>
===Subject, Verb and Object===
"When nine hundred years you reach, look as good, you will not. Strong with the Force you are…"
Remember Yoda? Among the little, wrinkly, green "Star Wars" character's more distinctive features was "Yoda Speak". To a linguist, Yoda represents an imaginary member of a very rare and select group: races with languages that use an "Object - Subject - Verb" structure.
The understanding of the difference between each of these components is the first step in mastering sentence structure.
The order of subject (S) - verb (V) - object (O) (SVO) is the classic "natural" English sentence:
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td> to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td></td><td align="center" > Object </td></tr>
</table>
Things work quite well if we think of a sentence as revolving around a verb. The subject of a verb is the noun (or noun substitute) that directs the action of the verb. The object of a verb is a noun (or noun substitute) that receives the action, is affected by the action, or about which the action is concerned. In the majority of instances a noun substitute is a pronoun.
In the example "management" directs the action and is therefore the subject, while "credit policies" are the things being "adhered to" and therefore the object. As a rough rule of thumb, if the noun phrase starts with a preposition it is a fair bet that the noun concerned is the object. In the example sentence, "to" is the preposition.
===Prepositions===
A preposition relates a word or phrase to another part of the sentence.
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td align="center" > to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Words that are prepositions include: to, in, into, on, upon, over, before, after, of, with.
In the example, the word "to" joins (or, more accurately, relates) the noun phrase "credit policies" to the rest of the sentence - "Management is adhering".
A note of caution - a word that is a preposition in one case can be a conjunction in another:
* The auditor arrived before [preposition] the meeting.
* The auditor arrived before [conjunction] the meeting began.
===Conjunctions===
Conjunctions are words that join two sentences or nouns, not relating one to the other as a preposition does, but joining them either as equals or in a superior-subordinate relationship. Examples of the former include: and, but, or, nor, whereas, however. Examples of the latter include: because, when, where, if, although.
==Active and Passive Voices==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: The Passive puts people to sleep.'''
</td></tr>
</table>
Recall the earlier discussion about subjects and objects of a sentence. We observed that the "natural" order in English is Subject - Verb - Object (SVO). This is the active voice:
<table border=1 align="center" >
<tr><td >"This firm</td><td> will no longer pay </td><td align="center" > for </td><td> Overtime."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Now we will switch the subject and the object and contrast this with the same sentence expressed in the passive voice:
<table border=1 align="center" >
<tr><td >"Overtime payments</td><td> will no longer be made </td><td align="center" > by </td><td> this firm."</td></tr>
<tr><td align="center" >Object </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Subject </td></tr>
</table>
The passive voice essentially reverses the natural order from SVO to OVS.
There is nothing grammatically wrong with either construct, but even a few lines expressed in the passive voice will bore our readers to tears. This effect arises because the passive voice places the reader at a distance from the action by making the object of the sentence the primary focus rather than the subject. Consequently, things appear to come before people.
Consider the following passage (passive voice).
"Significantly more overtime than the firm average has been incurred by roboteller maintenance staff of the Antarctic Division. A number of anomalies in the time sheets including bank branches that have been closed for many years having work recorded for them by individual staff have been revealed by a detailed analysis of the time sheets. Overtime payments will no longer be made by the Antarctic Division as a consequence."
Versus the following version (active voice)
"Roboteller maintenance staff in the Antarctic Division have incurred significantly more overtime than the firm average. An analysis of the time sheets for individual staff shows a number of anomalies, including work conducted for bank branches that have been closed for a number of years. Consequently, the Antarctic Division will no longer pay for overtime."
Which one did you have to read twice? The passive voice is difficult for the reader, even taken one paragraph at a time. Try reading it for an entire report and you will be angry, frustrated and tense (assuming you are still awake by the end of it).
The active voice involves the reader; it flows better than the passive; it encourages the writer to go straight to the point rather than inserting "filler words" whose sole purpose is to make the sentence hang together; and it reduces the chance of repetition (as is apparent in the passage above). The passive voice is not only difficult to read, it is also far more difficult (and therefore slower) to write.
In the passive voice we express the idea of the sentence before we provide the context (subject). The direct result is that our thought pattern is reversed and our ideas do not seem to flow properly. We end up adding extra words, leaving sentences hanging in mid-air (such as when we finish with a preposition) and, most importantly, failing to convince our audience of our point because they have to try too hard to understand it.
A sentence is a "word painting" of an idea. Well formed it is a thing of beauty and, like a great painting, a joy to behold.
==Positioning of Prepositions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Ending a sentence with a preposition is a situation up with which I will not put.'''<br>
* '''Rule: Objects like Subjects.'''
</td></tr>
</table>
One of the most common errors in everyday speech is to place the preposition at the end of a sentence. Prepositions, by definition, connect and introduce a noun phrase in a sentence. After the use of the active voice, this is perhaps the single most important trick to forming logical, easily understood sentences quickly.
Given that it has become almost standard usage to let prepositions drift to the end of a sentence, why is it such a gross error?
You will recall that we defined a preposition as a word that joins and relates a noun phrase to the rest of the sentence. It literally "leads" a phrase. Without the preposition connecting the two ideas in a sentence, the sentence appears stilted (or, as in the following example, actually seems to mean something completely different):
"Management is adhering credit policies."
Consider a few examples:
<table border=1 align="center" >
<tr><th >Bad Form</th ><th >Good Form</th ></tr>
<tr><td>Where have the auditors come from?</td><td>From where have the auditors come?</td></tr>
<tr><td>Peace is worth striving for.</td><td>It is worth striving for peace.</td></tr>
<tr><td>Firm credit policies must be complied with.</td><td>Management must comply with firm credit policies.</td></tr>
</table>
The first two on the left-hand side are merely untidy, but the third highlights the problem with prepositions shifting to the end of a sentence. The version on the left-hand side leaves the sentence "hanging" and, most importantly, leaves out the subject. The lack of a subject in the sentence means that it is unclear who should perform the action. (ie. Objects Like Subjects)
If we use the active voice, and lead the sentence with the subject, we will be far less likely to end up with the versions on the left-hand side. Since a preposition generally connects the object to the subject, it is the habit of placing the object at the start of the sentence (i.e. the passive voice) that leads to sentences with the preposition at the end.
The second example on the right-hand side is still unsatisfactory, because it does not identify the responsibility for the action, and consequently is a generalisation - which is too easy to fault. For whom is it better to strive for peace? An arms manufacturer may see things a little differently! A better rewrite would have been: "We will benefit both materially and socially if we strive for peace."
It is easy to put prepositions in the right place if we remember to use the words "which" and "whom":
This is the day for which we have been waiting. (Not: This is the day we have been waiting for.)
These are the results of which we heard. (Not: These are the results we heard of.)
The rule (attributed to Winston Churchill) "Ending a sentence with a preposition is a situation up with which I will not put" (instead of "Ending a sentence with a preposition is a situation I will not put up with.") illustrates how to arrange the words to achieve the desired outcome. It also tends to stick in one's mind and so is easily remembered.
==The Formula For A Paragraph==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: One idea to a paragraph'''
* '''Rule: Get to the bottom line first'''
* '''Rule: Three sentences are company, four is a crowd'''
* '''Rule: Just Do It - saying what we mean.'''
* '''Rule: Readers Don't Read'''
</td></tr>
</table>
The purpose of dividing a body of writing into paragraphs is to help the reader absorb the points being made, and the writer to formulate them. These five rules are each about how to put together a paragraph that works.
A couple of simple formulae describing the sequence of sentences in a paragraph can show us what to do:
# Main Point + Counter Point + Conclusion.
# Main Point + Expansion + [Expansion].
In each case we are saying a paragraph should consist of between two and three sentences. Using more or fewer sentences in a paragraph is permissible, but to be discouraged unless it is absolutely essential for the purpose of the point. This is particularly true when we are planning to use more than three sentences. (ie Three sentences are company, four is a crowd)
A paragraph end forms a natural break in the flow of thought. By implication, we are asking the reader to absorb the entire paragraph as a single concept before they evaluate it in their minds. The longer the paragraph, the longer the reader must store the ideas before evaluation.
We risk losing the reader's attention and comprehension if we ask him or her to store the ideas temporarily for too long a time or to store too many ideas at once. Short, punchy paragraphs built around a single central idea help minimise waffle and assist the reader to absorb our message rapidly. (i.e. One idea to a paragraph)
<table border=0 align="left" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…short, punchy paragraphs built around a single idea…'''''</font >
</p >
</td></tr>
</table>
It is a courtesy to the reader to endeavour to minimise the work they need to do in reading our work. Opening the paragraph with the main point allows the reader to skip the rest of the sentences in the paragraph if they agree with the point. In each of the two formulae we open with the main point (ie. we get to the bottom line first).
The difference between the forms is that in the first formula we offer a counter point in the second sentence, which is then offset by the conclusion. In this case the conclusion should be consistent with the main point (rather than the second or counter point).
In the second formula we are presenting the main point supported by one or two additional arguments. Should we need six or seven sentences to support the point, these should be presented as a dot-point list, or subdivided into two or three logical groups and split across two or three paragraphs.
<table border=0 align="right" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…the most convincing expression of an idea is usually the simplest…'''''</font >
</p >
</td></tr>
</table>
The essence of these ideas is that the most convincing expression of an idea is usually the simplest. Winning a point through confusion is, at best, a Pyrrhic victory. If the issue is important, the reader will dwell on it, and form their own opinion. If they didn't understand your arguments, you will have no effective input into the formation of their position on the matter, other than to raise it in the first place.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteSectionStructure.png]]
</div>
</td>
</tr>
</table>
The essence of newspaper journalism is that most readers will not read most of the articles in a paper or magazine completely. Consequently, from the headline down to the end of the article the item is arranged as a series of progressively more detailed "summaries" of the information. There are usually three to four layers.
The first layer is the headline, which attempts to summarise the entire issue in a few words. The second layer is the first paragraph which presents a twenty to thirty word summary of the issue. The third layer is the second, third and perhaps fourth paragraphs, which provide the full story and the fourth layer provides incidental minor details.
The purpose of the structure is to allow the readers to exit at several points when they have collected sufficient information for their interest level. The approach recognises that none of us has time to read every piece of information presented to us, and when we do we tend to skim the information for issues that are relevant to us. (ie. readers don't read)
We should design our reports so that the reader does not have to read all the way to the end to "get" the issue. We can imagine this pattern as a pyramid, with the highest level summary at the top, and progressively more detail to the bottom.
==Using Conjunctions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Conjunctions can't commence (a sentence)'''
* '''Rule: Conjunction collections confuse'''
</td></tr>
</table>
<table border=0 align="right" width="400px" style="background-color:#FFFF99;margin-left:0.9em" cellpadding="2" cellspacing="10" >
<tr><td align="left">
===The Importance of Correct Punctuation===
'''''The following two passages were written by Rowland Croucher. They illustrate neatly the importance of punctuation in written expression. Only the punctuation changes between the passages….'''''
<em>Dear Thomas,
I want a man who knows what love is all about. You are generous, kind, and thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy--will you let me be yours?
Maria
----
Dear Thomas,
I want a man who knows what love is. All about you are generous, kind and thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men, I yearn; for you, I have no feelings whatsoever. When we're apart, I can be forever happy. Will you let me be?
Yours,
Maria</em>
</td></tr>
</table>
Conjunctions are important time savers and can help the flow of ideas if used correctly, but should not be used more than once in a sentence unless splitting the sentence would detract from its meaning.
One example where two conjunctions may appear in a sentence is where the sentence contains both a list and two joined or related ideas:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan and verifying that the borrower's credit history is of sufficient standing."''
In this case the passage would be harder to follow (and perhaps even misleading) if we wrote it as:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan. The credit approval process should also verify that the borrower's credit history is of sufficient standing."''
By splitting the sentence we seem to imply that the credit history is of secondary importance to the information collected about the purpose of the loan.
These situations are generally pretty clear when they arise, but they are rare. A sentence with too many conjunctions suffers from the same problems as a paragraph with too many sentences; we have lost the reader before the end.
Some years ago Professor Manning Clark gave a Boyer lecture concerning the use of English in academic papers. One of his particular annoyances was the use of conjunctions to commence a sentence. His point was simple - a conjunction joins two sentences. If it starts the sentence it is prima-facie not joining two sentences together.
While we all recognise words like "and", "or" and "but" as conjunctions, words such as "however" and "because" are more often missed. Consider the following passage:
''"Because they operate unattended, Roboteller machines are prime targets for fraud. However, if we attach cameras to them they become leading tools in the capture of the perpetrators."''
This can be rewritten to eliminate the problem:
''"Roboteller machines are prime targets for fraud because they operate unattended. If we attach cameras to them, however, the machines become leading tools in the capture of the perpetrators."''
In rewriting the passage we also (once again) moved the subject to the start of the sentences. The "however" is redundant, and the passage can be further simplified by writing it thus:
''"Roboteller machines are prime targets for fraud because they operate unattended. The machines become leading tools in the capture of the defrauders if we attach cameras to them."''
This passage demonstrates the appropriate use of "however":
''"Overall corporate / strategic planning is adequately addressed within Premium and Private, however, management attention is required concerning:…"''
==A Few Points of Style==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Personalise people not things'''
* '''Rule: Don't plan to make a plan.'''
* '''Rule: Negativity negates.'''
* '''Rule: DON'T SHOUT'''
</td></tr>
</table>
The three rules of this subsection cover common, but minor, problems of style.
A common written mistake is for a human trait such as "need" or "requiring" to be attributed to an inanimate "thing" such that it takes on the air of an inviolate law. The practice leads to broad statements without justification and hence incomplete argument of a case. Consider:
''"The credit approvals process needs to be reviewed."''
The credit approval process cannot need anything. Only living creatures can experience need. It may be appropriate for the process to be updated, and management or the auditors may need this to occur, but the process cannot spontaneously need such improvement of itself.
Once again we find, as with so many English language errors, that the problem has arisen because of a subject/object mix-up. In the example the credit approval process, which should have been the object, has been transformed into the subject. When we rewrite it the way it should have been, we find that we are missing a significant part of the message that should have been conveyed (and is now inserted in the rewrite):
''"Management needs to review the credit approvals process focusing on the weaknesses identified in the finding."''
The new version both identifies who should perform the action and the guidelines they should follow. It also highlights another important rule (not really one of grammar but one of service quality): the recommendation as written is essentially a plan to make a plan.
Either management should make the changes identified, or they should not. If we merely request them to review the situation we are delivering no committed improvement for the current situation to the Board. We should not say "review" when we mean "implement":
''"Management should implement the identified corrections to rectify the weaknesses in the credit approvals process identified in this report."''
Finally, we briefly consider two ad-hoc matters. The first is to do with capitalisation, while the second concerns the use of negatives.
Capitalising Every Word In a Sentence or even a Random selection Of a few words does not serve to help our presentation. Excessive capitalisation is affronting to the reader. In internet terminology this is akin to SHOUTING AT THE READER. Capitals belong at the beginning of a sentence or when naming a person, place or the title of a "thing". Capitalisation is rarely appropriate in the middle of a sentence.
Secondly, sentences should be expressed in the positive rather than the negative wherever possible. It is a standard sales technique to ask a prospect a question framed in the direction one wishes the answer to go:
"Would you prefer that my quote is open ended?"
As opposed to:
"Would you prefer that my quote is fixed?"
People tend to immediately think in sympathy with the speaker (at least until he or she threatens them with capitals!). If we express our sentences as negatives, not only do we lead the reader to naturally disagree (because they have been "trained" to say no by our text), but we also create a sea of double negatives, which may or may not imply a positive.
==Carrying the Case==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Death is in the details.'''
* '''Rule: Generalities are generally imprecise'''
* '''Rule: Let the facts carry the case.'''
</td></tr>
</table>
Much of what has been written in this guide goes to the issue of precision. In consulting and audit papers, accuracy of detail can determine the credibility attached to the consultant's/auditor's findings, as well as the advice offered. The best strategy is to let the facts, clearly articulated, carry the argument.
The facts should not be embellished with emotional and vague descriptive words such as "large", "most", "substantially". We should state the quanta instead - "70%", "five out of eight", etc.
Try to avoid non-specific or vague words and expressions. This is especially true of quantities and times.
'''Examples'''
<table align=center >
<tr >
<th>Non-specific or vague</th><th> </th><th>Could mean or become</th>
</tr>
<tr>
<td>increased volumes</td><td></td><td>300 or more</td>
</tr>
<tr>
<td>drop in profit</td><td></td><td>profit was 20% lower</td>
</tr>
<tr>
<td>frequently</td><td></td><td>daily/weekly/monthly</td>
</tr>
<tr>
<td>rarely</td><td></td><td>once a year/decade</td>
</tr>
<tr>
<td>recently</td><td></td><td>yesterday/last week/month</td>
</tr>
<tr>
<td>shortly</td><td></td><td>tomorrow/next week/month</td>
</tr>
</table>
In the absence of statistical support for a finding, generalisation emerges. The discussion of the matter with the client becomes sidetracked over the meaning of words like "large" or "significant", rather than focussing on the issue identified and the solution required by the adviser.
Linked to these ideas is the form of words used to convey your point. Never use a long word where a short word will do. Long words may be interpreted by the reader as a deliberate attempt to mask puerility with false grandeur, because the underlying point is decrepit or flawed. (See what I mean?)
Having said that, do not be frightened of using a long or technically correct word simply because it has more than one syllable. You can always provide a glossary of terms at the start of the document (and frequently that is a good idea, even for some commonly misused terms). If your reader needs to get a little more educated to understand your work, then fine.
Writing is not about stooping to the lowest common denominator, but it is about communicating your point accurately and effectively. That is: you must actually get your point across; not merely make your reader feel inadequate. There is no point in being right, if nobody realises.
The point, then, is to use the shortest possible ''correct'' word - not merely the shortest word.
As a rule of thumb, if your reader has to seek out the meaning of more than two or three words in your report, you have probably lost them...and they will probably resent you for it. Know your audience, prepare your audience for your language, and make sure they don't feel stupid by the end of it.
The customer for a consulting or audit report needs to be assured that adopting recommendations based upon the consultant's finding will add value to the business.
Auditors (particularly) need to go well beyond describing what is wrong. They need to explain the meaning of any finding: how it affects the organisation's bottom line; the potential cost of not addressing a problem; the likelihood of exposure or error.
Likewise, consultants need to go well beyond simply parroting back the latest theory they discovered in the bottom of a glass of scotch or on the back of the cereal packet that morning. Consultants need to do a little more of the 'audit' thing and actually analyse what the issue really is before arguing convincingly for change.
Wherever possible in all such instances, be specific. "Numerous", "several" and "many" are words lacking in specifics. If this flies in the face of other advice to be brief, so be it.
The auditor/consultant should attempt to quantify the financial impact of a finding. While it may not be possible to arrive at a figure with mathematical precision, an informed guess can help management make a decision.
To be specific, following are some examples of content.
'''Poor'''
Differences exist in the cost of processing biscuit requisitions in various regions.
'''Better'''
The cost of processing biscuit requisitions differs from region to region. Vancouver can process a requisition for AUD 8 cents while the equivalent in Australia is AUD 15 cents. Australia might save up to AUD 15 million by adopting Vancouver's methods.
'''Poor'''
There is a lack of adequate management information to support activities and to facilitate meaningful comparisons between regional units.
'''Better'''
Management information is inadequate: staff costs are not analysed for benchmarking across various offices; calculation of product profitability does not include processing costs; and there is no allocation of fees and interest income by product type.
Finally, '''summaries''' are meant to be just that: a tight condensation of the main point or points of an issue. Be ruthless in getting rid of perhaps interesting but non-essential pieces of additional information – but retain the specifics.
==Tense, Pronouns and Infinitives==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Don't split the infinitive'''
* '''Rule: Consistency is king'''
* '''Rule: Pronouns need a noun'''
* '''Rule: Unintroduced acronyms are antisocial'''
</td></tr>
</table>
"To Boldly Go Where No Man Has Gone Before…" Perhaps one of the most recognised phrases in the English language, this bite of the Star Trek prime directive is also a prime example of atrocious English! It is a classic example of the split infinitive (not to mention the redundant preposition at the end of the sentence).
The directive should have read:
"Boldly To Go Where No Man Has Gone…" or, less poetically, "To Go Boldly Where No Man Has Gone…"
Perhaps, it would be best as:
"Go boldly, where none have gone.."
The infinitive is the basic form of a verb, invariably commencing with "to". It generally has no subject and, according to luminaries on the subject, should not be split. The reason is more stylistic than grammatical. The problem with split infinitives is more obvious when a few words are inserted between the "to" and its verb:
"The Roboteller machines are expected to really try hard to accurately and silently recognise the customer's identity."
Can be improved by:
"The Roboteller machines are expected to try really hard to recognise the customer's identity accurately and silently."
There are two common ways to avoid the split infinitive. Both are presented in the rewrite above. The first is simply to move the offending adverb after the verb, although sometimes this leads to a stilted speech pattern. The second is to move the adverb(s) to the end of the sentence, as above.
Pronouns are words like he, she, it, etc. that substitute for a noun like Jim, Phred or bank branch. The noun to which a pronoun relates is established by the context in which the pronoun is placed. Consequently, if too many pronouns are used together it becomes very difficult to determine for which noun an individual pronoun substitutes. As a general rule the target noun should immediately precede its related pronoun and be refreshed at least every two pronouns.
Similarly, an acronym (an abbreviation substituting for a noun or phrase) should be immediately preceded, the first time it is used, by the originating word or phrase. For example:
"The National Australia Bank (NAB) is a large and wonderful establishment. The NAB has an effective and happy audit team."
A completely unrelated matter (but grouped here for convenience) is that of consistency in the use of plurals and tense. It should be apparent to all authors that the use of the singular in a sentence should be reflected consistently throughout the rest of the sentence. It may be less obvious that the same rule applies to verb tense.
If we express a verb in one tense, such as the present continuous as in "I am having a good day", the balance of the argument should normally be presented in the same tense. This is not a strict rule, because there will be situations in which a finding will relate a historic situation in the lead sentence, while the discussion relates an assessment that is in the present tense.
It is reasonable to say that within a sentence changes in tense will generally create confusion, unless separated by a conjunction. For example:
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures were not obtained at all times."
Not
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures are not obtained at all times."
But the following would be acceptable, because the first part states a continuous state while the latter part describes an historic observation relating to the first situation:
"In Antarctic Division wire transfer requests are accepted via e-mail and customer signatures were not obtained at all times."
Agreement of subject and verb: A singular subject demands a singular verb; a plural one demands a plural verb. Many such problems are caused by long sentences overloaded with adjectives and subordinate clauses where the subject is separated from its verbs. This is another reason for keeping sentences short.
Sometimes the rule is not immediately obvious, such as in the case of "none": "none were" should be "none was" (none = not one or no one).
Example: None of us is perfect.
==Confusing Words==
These words are often confused
* Affect (to impact upon, to assume) / effect (to bring about a change in)
* Object (the purpose)/ objective (the point of an exercise - usually military)
* Idol (a religious artefact, or object of worship) / Idyll (an imaginary ideal, or pastoral setting) / Idle (lazy, not in motion)
* Whom (the objective form of the relative pronoun) / who (the subjective form of the relative pronoun)
===A note about affect & effect===
A frequent source of error is confusion in the use of the similar-sounding words affect, affected, effect and effected, and of continual and continuous.
A cause for confusion is that affect is always a verb while effect can be either a noun or a verb. Both continual and continuous are adjectives.
Affect is a verb meaning to influence. Effect as a verb means to bring about; as a noun it is equivalent to the word result.
The following represent correct usage.
Examples:
* Errors in computing affected the accuracy of the result.
* The effect of errors in computing was to produce an inaccurate result.
* Smoking cigarettes may affect your lungs.
* Giving up smoking had no effect on her general health.
* I didn’t finish the report because of continual telephone interruptions.
* Lights are left on in traffic tunnels to provide continuous illumination.
===A note about "due to"===
"Due to" is often used in the sense of through, because of or owing to. Mostly those alternatives are to be preferred. But it is correct to use due to in the sense of being attributable to.
Example: The plane crash was due to bad visibility.
Don’t rely on your computer’s spellchecker for advice on grammar or correct spelling. Some systems are misleading. For example, you may be advised to change personal to personnel (or the other way round).
===A note about who & whom===
"Captain Kirk is the man whom the federation pays to fly the Enterprise." (Whom is the object of pays - the pronoun affected by the action of payment)
And
"Captain Kirk is the man who we think flies the Enterprise." (Who is the subject of flies, not the object of think).
==Punctuation==
Punctuation matters.
* "What is this thing called love?" (As in: Let me count the ways...)
* "What! Is this thing called love?" (As in: Let me out of here...)
* "What is this thing called, love?" (As in: OMG! You are not coming near me with that!)
===Comma===
Used when essential for clarity or to indicate a small interruption in continuity of thought. Short sentence construction reduces the need for commas.
===Semicolon===
Using a semicolon indicates a pause greater than a comma but less than a colon or full stop. Often a semicolon helps to alert the reader to an alternative or compensating thought.
'''Example:''' ''The risk of lost muffins was high; however, quick action averted this crisis.''
Semicolons should be used at the end of each line in a series of bullet points, as an alternative to commas (see later).
'''Example:'''
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months;
* Schedule extra training for these and permanent staff;
* Upgrade software in the Biscuit Dispensing Machine;
* Simplify the standard form used for requisitioning for biscuits from the kitchen from ten pages to five; and
* Remove the requirement for VP Supply, VP HR, and CEO counter signing of all biscuit requisitions.
</em>
===Colon===
The colon is used to introduce a quotation, summary, conclusion or list of bullet points (as in the example above); or to introduce a list within a sentence.
'''Example:'''
''The report contains the following sections: employment, training, promotion, legal compliance, relations with other departments.''
===Full stop===
(Period in U.S. usage)
As well as indicating the end of a sentence, full stops are used in some abbreviations. It has become common for periods to be omitted from word abbreviations. We counsel against such a style: with the plethora of acronyms and technical jargon in today's language, using the period to signal that a word is an abbreviation of a possibly familiar word, rather than a technical term unknown to the reader, adds to clarity.
Where a bulleted list includes points that have more than one sentence, it is preferable to separate the points with full stops, not semi-colons as set out in the previous example.
Example:
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months. Qualifications include large appetites and general slothfulness. It is estimated that salaries will be approximately $13,000 per month each plus biscuits.
* Schedule extra training for these and permanent staff. It is anticipated the training officer will need to allocate three hours weekly to the task.
* Upgrade software . . . (etc)
</em>
Note that where a full stop is used in a dot-point list, no conjunction is used to join the last two items.
Regardless of which dot-point separator is chosen, it MUST be used consistently throughout the list and, ideally, the document.
===Hyphen===
General usage previously demanded that a hyphen be used if a prefix or suffix had the same letter as the word to which it was attached. So cooperate and coordinate generally were spelt co-operate and co-ordinate; hyphens in these instances are unnecessary. While reinforce and react are other examples where hyphens are not needed, sometimes a hyphen provides a warning that a word should not be read as a single syllable (e.g. re-use). Words formed by using the prefix non- should nearly always be hyphenated (e.g. non-compliant, non-aligned) as with some words prefixed by pre- (e.g. pre-existing).
===Apostrophe===
Used to indicate possession or the omission of letters in a contraction.
'''Examples'''
<em>
* Bill’s car was taken to the wreckers.
* Bill hasn’t had time to replace his car yet.
</em>
There is often confusion about its and it’s. The simple test is whether the construction of a sentence means it is (or it has etc). If so, it’s is a contraction and needs an apostrophe; if not, its is a pronoun and needs no apostrophe. (Warning: Don’t get fooled by some computer spellchecking systems which get this wrong.)
A rough rule of thumb: if we are using "it" in the possessive sense (as in "its red tyre"), leave out the "'".
'''Examples'''
<em>
* It’s been a long time between drinks.
* The engine was tuned but its vibration wasn’t greatly reduced.
</em>
===Ellipses===
An ellipsis indicates that words have been omitted from a quotation and is represented by three full stops separated by spaces.
'''Example'''
''Now is the time . . . to come to the aid of the party.''
===Quote marks===
These should not be used for emphasis. Use bold type or italic instead. Use quotation marks only when you are quoting or, after very long consideration of alternatives, when you are using a word or phrase you consider less than ideal for the situation.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
{{BackLinks}}
</noinclude>
BPC SurveyManager Web Client Manual: Home (ACFE/ACE) - Working with The LSS
=SECTION 2A. Locating the current LSS=
After login you will be presented with the Survey List page. Think of this as your SurveyManager "home" page. From here you can reach everything you need. If this is the first time you have accessed your organisation, you will generally find only one survey in your survey list - the current LSS survey (similar to the image below). It is possible that you will see other surveys as well. These will be surveys that you or previous users have created for your organisation. Each year BPC removes the previous LSS from your organisation, but preserves all non-LSS surveys and data until requested to remove them.
[[IMAGE:3_BPCSurveyManagerWCSurveyListScreenPNA.jpg]]
Locate the current LSS in your list. You will note that it has actions listed including:
# "Edit" - This allows you to edit certain presentational aspects of the survey, such as the enquiry email address, the logo graphic, invitation text, help, etc. You cannot change the questions in the LSS, so you will not have this option available in the edit screen. Unless you wish to change the default appearance of the LSS, you do not need to use this action.
# "Manage" - This is the main action you will use. It enables publication of the survey to responders, sending of invitations, viewing of reports, and general management of the survey. For Providers using email invitations exclusively, this will be the only action in which you are interested.
# "Data Entry" - The data entry action enables the entry of surveys received via both hardcopy and telephone/interview. A survey administrator or data entry account holder can enter the survey responses by selecting from the list of published responders.
# "Make Template" - The make template action creates a template from the associated survey that can be transferred between organisations and used to create new, modifiable duplicate surveys. This is an alternative way to start a new survey, rather than starting from scratch. LSS coordinators will NOT need to use this action to meet the survey submission requirements.
A note on terminology: a survey is what all respondents complete. A "survey response" is what we get back when each responder enters data into the survey. You do not need to create a survey for each responder - you need just one survey and many invitations and/or many "survey responses".
Below the survey list, you will find a "Create a New Survey" button. This button allows you to create new surveys for publication to groups of responders. As the LSS is already deployed to your organisation, and therefore visible in your organisation's survey list, you do NOT need to use this button to meet the LSS survey submission requirements. If you want to know about this facility, go to [[BPC SurveyManager Web Client Manual: Home - The Survey List Page]].
==The Next Step==
If you wish to use the portal (which enables you to have class based survey collection in computer labs, for example), or you are working on a survey OTHER than the LSS, you should proceed to the next section: [[BPC SurveyManager Web Client Manual: Home - The Survey List Page]].
If you are exclusively interested in the LSS, you should proceed to section: [[BPC SurveyManager Web Client Manual: Creating the list of respondents]].
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
Business Process Reengineering - Chart Key
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to define a program reliably, simply by documenting a real world process according to the design rules below, allows an automated modelling testbed to be constructed from the chart, and then stress tested with different data loads or different error types, checked for deadlocks or bottlenecks, compared against alternate process designs, etc. Such testing and analysis can be done either manually or via automation.
There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements exist for diagrammatic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - and a few drawn symbols. The full model merely expands on these to provide a richer descriptive set and more analytic detail, with fewer individual diagrammatic elements being required to represent the idea than otherwise.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and originators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
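The three symbol classes above could be modelled in code roughly as follows. This is an illustrative sketch only; the class and attribute names are ours, not part of the charting method:

```python
# Hypothetical sketch of the three symbol classes described above.
# The names (ChartObject, Event, Connector, ident, children) are
# illustrative assumptions, not defined by the charting method itself.

class ChartObject:
    """Originates, transforms, stores or consumes data; may contain children."""
    def __init__(self, ident):
        self.ident = ident      # the rules require a globally unique identifier
        self.children = []      # objects are recursive containers of other objects

class Event:
    """A consumer and originator of event data; may excite or inhibit."""
    def __init__(self, ident):
        self.ident = ident
        self.fired = False      # an unfired event blocks its connected object

class Connector:
    """A line joining two objects/events, through which data flows."""
    def __init__(self, source, target):
        self.source = source
        self.target = target


# Example: a process containing a sub-process, gated by an event.
process = ChartObject("P1")
process.children.append(ChartObject("P1.1"))   # recursion: nested same-type member
trigger = Event("E1")
flow = Connector(process, trigger)
```

Note how containment (one object nested inside another) and connection (a line between two symbols) are kept distinct, mirroring the rule below that objects are joined either by connectors or by recursive embedding.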
===The importance of Data===
The life blood of the process diagram (or description) is "data". It is data that flows through the connectors to join event or object to event or object. Data is created when an event fires, or when a data origination object manufactures or otherwise supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand, or an electronic blip with a voltage value to excite or inhibit the recipient proportionately.
Data is infinitely divisible, immutable and transformable.
Like energy, data can neither be created nor destroyed across the entire universe of processes, but within the context of any subset of processes less than the infinite set of all possible processes, data can be originated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document data store, it results in a blank sheet displaying written or image data. In a manufactured item "data store", it results in the transformation of petrochemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors ( lines or arrows).
* All objects (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself; while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another, transforming and being transformed by the vessels in which it is stored. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item is also a data store, containing the end result of multiple processes, each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weights and fuzzifiers (mainly used for neural and Bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, but not mandated - connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of object is globally unique - and therefore has the same definition everywhere where it appears.
* All objects can be containers and as such may be "drilled through" to their content.
* A process object may be a "map" (transformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock or trigger, and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything that is not a connector or event is an object of one type or another - including the organisation itself.
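The firing rules above (a process executes asynchronously once every required inflow holds data, unless an attached event is blocking it) can be sketched in code. This is an illustrative sketch only; the class and method names below are assumptions for the example, not part of the charting method itself.

```python
# Illustrative sketch (not part of the charting standard): a process object
# fires asynchronously once every required inflow holds data, unless an
# attached event is blocking some of its functions.

class Process:
    def __init__(self, name, required_inflows):
        self.name = name
        self.inflows = {flow: None for flow in required_inflows}  # flow -> data
        self.blocked = False  # set True by an inhibiting event until it fires

    def receive(self, flow, data):
        self.inflows[flow] = data

    def ready(self):
        # All required inflows have data present and no event is blocking.
        return not self.blocked and all(v is not None for v in self.inflows.values())

    def fire(self):
        # Consume the inputs and clear them, modelling one execution.
        inputs = dict(self.inflows)
        for flow in self.inflows:
            self.inflows[flow] = None
        return {"process": self.name, "consumed": inputs}

p = Process("Approve Invoice", ["invoice", "authorisation"])
p.receive("invoice", "INV-001")
assert not p.ready()             # still waiting on the authorisation inflow
p.receive("authorisation", "OK")
assert p.ready()                 # all required inflows present -> may fire
result = p.fire()
```

Setting `blocked` from an event object would model the clock/governor behaviour described above: the process stays quiescent even with all inflows present until the event fires.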
===Object Hierarchy===
There is an implied object-as-container hierarchy (although it is not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled as containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however, better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and are therefore potentially unreliable.
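The containment rules above (every object is potentially a container, every object carries a globally unique identifier, and one object may appear in several containers while keeping a single definition) might be modelled roughly as follows. The class and identifier names are hypothetical, chosen only for the example.

```python
# Hypothetical sketch of the containment rules: objects are containers,
# carry unique identifiers, and a single object may be referenced from
# several parents while keeping one global definition.

class ChartObject:
    registry = {}  # unique id -> the one global definition of that object

    def __init__(self, uid, kind):
        if uid in ChartObject.registry:
            raise ValueError(f"identifier {uid} already in use")
        self.uid, self.kind, self.children = uid, kind, []
        ChartObject.registry[uid] = self

    def contain(self, child):
        # An object may sit in multiple containers simultaneously.
        self.children.append(child)

    def drill_through(self):
        # "Drilling through" a container exposes its content.
        return [c.uid for c in self.children]

org = ChartObject("ORG-1", "entity")
billing = ChartObject("P-10", "process")
ledger = ChartObject("DS-7", "data-store")
org.contain(billing)
billing.contain(ledger)
org.contain(ledger)  # same object, second container, same definition

assert ChartObject.registry["DS-7"] is ledger
assert org.drill_through() == ["P-10", "DS-7"]
```

Because `DS-7` is registered once and merely referenced from two containers, any change to its definition is seen everywhere it appears - the property the rule requires.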
===Entities and Entity Groups===
Notionally, every process can have a controlling entity (particularly where a person is actually doing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them) so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connecting to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, call centre staff pool, a community, etc. Each member of the entity group is interchangeable with every other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group, such as C-level executives in a company entity, or administration staff in a stakeholder community.
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
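The entity-group semantics above (members interchangeable with respect to a process, subject to individual filters that may exclude some of them from actually controlling it) can be sketched briefly. The names and the example filter are hypothetical, purely for illustration.

```python
# Hypothetical sketch: members of an entity group are interchangeable with
# respect to a process, but individual filters, conditions and constraints
# may exclude some of them from actually controlling it.

class EntityGroup:
    def __init__(self, name, members):
        self.name = name
        self.members = list(members)
        self.filters = []  # predicates an entity must satisfy to control the process

    def add_filter(self, predicate):
        self.filters.append(predicate)

    def eligible_controllers(self):
        # Any member passing every filter may control the process.
        return [m for m in self.members if all(f(m) for f in self.filters)]

pool = EntityGroup("Call Centre Staff", ["alice", "bob", "carol"])
pool.add_filter(lambda name: name != "bob")  # e.g. bob lacks an authorisation
assert pool.eligible_controllers() == ["alice", "carol"]
```

A sub-group (such as C-level executives within a company entity) would simply be another `EntityGroup` whose members are drawn from the parent group.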
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2a16cb2e0b8c5acd961534a7b1bbbfc19b9883c1
349
327
2011-01-18T04:59:48Z
Bishopj
1
wikitext
text/x-wiki
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to reliably define a program simply by documenting a real world process according to the design rules below allows an automated modelling testbed to be constructed from the chart, and then stress tested with different data loads, or different error types, or checked for deadlocks, bottle knecks or compared against alternate process designs, etc. Such testing and anlysis can be done either manually or via automation.
There are a number of different symbols and descriptive encoding rules, but in essence many of thesee enhancements are for diagramtic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - a few drawn symbols. The full model merely expands on these to provide a richer descriptive set, and more analytic detail with fewer individual diagramatic elements being required to represent the idea than otherwise.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and orginators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
===The importance of Data===
The life blood of the process diagram (or description) is "data". It is data that flows through the connectors to join event or object to event or object. Data is created when an event fires, or a data orgination object manufactures or otherwsie supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand or an electronic blib with a voltage value to excite or inhibit the recipient proportionately.
Data is infinitely divisable, imutable and transformable.
Like energy, data can neither be created or destroyed across the entire universe of processes, but within the context of any subset of processes less than the infinite set of all possible processes, data can be orginated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document datastore, it results in a blank sheet displaying written or image data. In a manufactured item "data store" it results in the transformation of petro chemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors ( lines or arrows).
* All objects are (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself, while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another transforming and being transformed by the vessels in which it is store. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item, is also a data store, containing the end result of multiple processes each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weight and fuzzyfiers (mainly used for neural and bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, but not mandated - connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of object is globally unique - and therefore has the same definition everywhere where it appears.
* All objects can be containers and as such may be "drilled through" to their content
* A process object may be a "map" (tranformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock, or trigger and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything, that is not a connector or event, is an object of one type or another - including the organisation itself.
===Object Hierarchy===
There is an implied object as container hierarchy (although not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled with containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and therefore are potentially unreliable.
===Entities and Entity Groups===
Notionally, every process, can have a controlling entity (particularly where a person is actually doing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them), so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connecting to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, call centre staff pool, a community, etc. Each member of the entity group is inter-changeable for each other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group such as C-level executives in a company entity, or administration staff in a stakeholder community.
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2a16cb2e0b8c5acd961534a7b1bbbfc19b9883c1
397
349
2011-01-18T04:59:48Z
Bishopj
1
wikitext
text/x-wiki
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to reliably define a program simply by documenting a real world process according to the design rules below allows an automated modelling testbed to be constructed from the chart, and then stress tested with different data loads, or different error types, or checked for deadlocks, bottle knecks or compared against alternate process designs, etc. Such testing and anlysis can be done either manually or via automation.
There are a number of different symbols and descriptive encoding rules, but in essence many of thesee enhancements are for diagramtic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - a few drawn symbols. The full model merely expands on these to provide a richer descriptive set, and more analytic detail with fewer individual diagramatic elements being required to represent the idea than otherwise.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and orginators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
===The importance of Data===
The life blood of the process diagram (or description) is "data". It is data that flows through the connectors to join event or object to event or object. Data is created when an event fires, or a data orgination object manufactures or otherwsie supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand or an electronic blib with a voltage value to excite or inhibit the recipient proportionately.
Data is infinitely divisable, imutable and transformable.
Like energy, data can neither be created or destroyed across the entire universe of processes, but within the context of any subset of processes less than the infinite set of all possible processes, data can be orginated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document datastore, it results in a blank sheet displaying written or image data. In a manufactured item "data store" it results in the transformation of petro chemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors ( lines or arrows).
* All objects are (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself, while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another transforming and being transformed by the vessels in which it is store. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item, is also a data store, containing the end result of multiple processes each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weight and fuzzyfiers (mainly used for neural and bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, but not mandated - connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of object is globally unique - and therefore has the same definition everywhere where it appears.
* All objects can be containers and as such may be "drilled through" to their content
* A process object may be a "map" (tranformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock, or trigger and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything, that is not a connector or event, is an object of one type or another - including the organisation itself.
===Object Hierarchy===
There is an implied object as container hierarchy (although not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled with containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and therefore are potentially unreliable.
===Entities and Entity Groups===
Notionally, every process, can have a controlling entity (particularly where a person is actually doing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them), so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connecting to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, call centre staff pool, a community, etc. Each member of the entity group is inter-changeable for each other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group such as C-level executives in a company entity, or administration staff in a stakeholder community.
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2a16cb2e0b8c5acd961534a7b1bbbfc19b9883c1
507
397
2011-01-18T04:59:48Z
Bishopj
1
wikitext
text/x-wiki
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to reliably define a program simply by documenting a real world process according to the design rules below allows an automated modelling testbed to be constructed from the chart, and then stress tested with different data loads, or different error types, or checked for deadlocks, bottle knecks or compared against alternate process designs, etc. Such testing and anlysis can be done either manually or via automation.
There are a number of different symbols and descriptive encoding rules, but in essence many of thesee enhancements are for diagramtic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - a few drawn symbols. The full model merely expands on these to provide a richer descriptive set, and more analytic detail with fewer individual diagramatic elements being required to represent the idea than otherwise.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and orginators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
===The importance of Data===
The life blood of the process diagram (or description) is "data". It is data that flows through the connectors to join event or object to event or object. Data is created when an event fires, or a data orgination object manufactures or otherwsie supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand or an electronic blib with a voltage value to excite or inhibit the recipient proportionately.
Data is infinitely divisable, imutable and transformable.
Like energy, data can neither be created or destroyed across the entire universe of processes, but within the context of any subset of processes less than the infinite set of all possible processes, data can be orginated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document datastore, it results in a blank sheet displaying written or image data. In a manufactured item "data store" it results in the transformation of petro chemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors ( lines or arrows).
* All objects are (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself, while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another transforming and being transformed by the vessels in which it is store. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item, is also a data store, containing the end result of multiple processes each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weight and fuzzyfiers (mainly used for neural and bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, but not mandated - connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of object is globally unique - and therefore has the same definition everywhere where it appears.
* All objects can be containers and as such may be "drilled through" to their content
* A process object may be a "map" (tranformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock, or trigger and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything, that is not a connector or event, is an object of one type or another - including the organisation itself.
===Object Hierarchy===
There is an implied object as container hierarchy (although not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled with containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and therefore are potentially unreliable.
===Entities and Entity Groups===
Notionally, every process, can have a controlling entity (particularly where a person is actually doing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them), so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connecting to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, a call-centre staff pool, a community, etc. Each member of the entity group is interchangeable with every other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group, such as C-level executives in a company entity, or administration staff in a stakeholder community.
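The interchangeable-members-minus-exclusions idea can be sketched as a filter over the group. The group, members and filter below are hypothetical examples, not part of the method:

```python
def eligible_controllers(group_members, exclusion_filters):
    """Members of an entity group who may control the process: every
    member is interchangeable, minus any a filter/constraint excludes."""
    return [m for m in group_members
            if not any(excludes(m) for excludes in exclusion_filters)]

typing_pool = ["alice", "bob", "carol"]   # an example entity group
on_leave = {"bob"}                        # an example constraint

print(eligible_controllers(typing_pool, [lambda m: m in on_leave]))
# -> ['alice', 'carol']
```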
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
BPC RiskManager V6 on 64 bit Windows
=Introduction=
BPC RiskManager is a 32-bit application, but it will work just fine on 64-bit Windows. In most scenarios (particularly W2008 and above, and Windows 7), the supplied BPC RiskManager auto-installer will correctly install the RiskManager system on a 64-bit computer with no manual intervention. The optional SurveyManager library will require some manual steps in IIS; consider the notes lower down this page concerning that. If you are installing on W2003 64-bit you may have to perform some manual steps.
If you wish to pursue this solution on Windows 2003 64-bit or Windows 2008 64-bit, you will need to do the following things:
*Install on the application server machine the 32-bit ADO drivers for the target database (e.g. the MDAC 2.8 driver set). The RiskManager Installer will automatically check for these and install them for you, so you can just run the installer for this step if you wish. For standard MS databases these should already be present, but you may need to download the appropriate 32-bit MDAC driver set from Microsoft. (A 64-bit DB server will still require a 32-bit driver for BPC RM to connect to it, but these should already be present.)
*Install BPC RiskManager as you would on a 32-bit operating system, accepting the defaults. The installer will automatically put the 32-bit components in the x86 directory as required.
*Run the 32-bit SocketServer, BPC RiskManager, BPC RiskManager DataServer and BPC RiskMailManager in 32-bit compatible mode, i.e. using WoW64 (Windows 32-bit on Windows 64-bit), on your server. The auto-installer will do this for you, so you should not need to do anything unless you are doing a manual install (i.e. copying and pasting the components).
*Move the 32-bit Midas.dll into the 32-bit system directory and register it manually. Again, the installer will do this automatically and you should not have to do anything unless you are doing a manual install.
*Enable IIS to run 32-bit ISAPI DLLs (if using web components like SurveyManager). This you will have to do even if using the installer.
*Move the 32-bit ISAPI libraries into the 32-bit ISAPI directory. This you may have to do even if using the installer.
If you are installing on Windows 2008 or above, or Windows 7 or above, the 32-bit and 64-bit MDAC drivers should already be present; if you are using the installer, they will be installed automatically.
So, the simple solution to setting up RiskManager on 64-bit Windows? Just run the RiskManager Installer and let it do all the work.
=Setting Up the Database drivers on WOW64=
If you are using the installer to install RiskManager, the installer will check for the MDAC (ADO) drivers and install the correct ones if missing.
There are multiple scenarios that you could be facing - all have essentially the same solution:
# Locally installed 64-bit database server: you will need the appropriate 32-bit drivers. These have probably been installed with your database installation, but you may have to download the appropriate 32-bit MDAC from Microsoft and install it. MDAC 2.8 or later is fine.
# Externally installed database server on a 64-bit OS: you will need the appropriate 32-bit drivers. You may have to download the appropriate 32-bit MDAC from Microsoft and install it. MDAC 2.8 or later is fine.
# Externally installed database server on a 32-bit OS: you will need the appropriate 32-bit drivers. You may have to download the appropriate 32-bit MDAC from Microsoft and install it. MDAC 2.8 or later is fine.
In other words, the key "gotcha" in setting up the 64-bit OS version is making sure you have the 32-bit drivers loaded and registered appropriately. Most of the time you will already have the ADO drivers available, or the RiskManager installer will have installed them for you, and you need do nothing in this step. If, however, you install and cannot connect from the app server to the database, or the installer fails to create databases when instructed, you probably have something wrong with your ADO drivers. In the early releases of 64-bit OSes the presence of the 32-bit MDAC drivers was a particular issue; from Windows 2008 onwards this no longer seems to be a problem.
The second most common issue we have noted: if you are using SQL Express then, depending on the options you chose when you installed SQL Server, your SQL instance may be the default instance (i.e. no instance name) or SQLEXPRESS. If you can't connect, check this first, then look to see whether the 32-bit drivers are present.
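The underlying point is that driver bitness must match the *process*, not the operating system: a 32-bit application on 64-bit Windows still loads 32-bit ADO/MDAC drivers. As a hypothetical illustration, this is how any process can check its own bitness (the technique is general; it is not a BPC RiskManager feature):

```python
import struct

# Pointer size (in bytes) of the current process: 4 in a 32-bit
# process (even under WoW64 on 64-bit Windows), 8 in a 64-bit one.
bits = struct.calcsize("P") * 8
print(f"This process is {bits}-bit")
```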
=Enable the application components to use WOW64=
Windows-32 on Windows-64 (WoW64) is already part of your Windows 64-bit OS. All you have to do to use it is enable the 32-bit applications to run in that mode. If you are running the RiskManager installer, it will do all these steps automatically for you.
*Install on the application server machine the 32-bit ADO drivers for the target database (e.g. the MDAC 2.8 driver set). For standard MS databases these should already be present, but you may need to download the appropriate 32-bit MDAC driver set from Microsoft. (A 64-bit DB server will still require a 32-bit driver for BPC RM to connect to it.)
*Install the RiskManager application normally ([[RM625ENT Installation Instructions|see the instructions for installing BPC RiskManager]])
*Run the application server components and the SocketServer component in W2003/W2008 32-bit compatible mode:
**Right-click the icons after installation and select Properties.
**From the Properties screen, set the executable compatibility mode to "Windows 2003 sp1".
**Open a command prompt, navigate to the "Program Files\common files\borlan\socketserver" directory and type "socketserver.exe -install" to install the socket server as a service after enabling it to run in 32-bit compatible mode.
=Register the 32 bit Midas.dll on the application server=
If you are running the RiskManager installer you will not have to do anything here.
If you are installing manually (i.e. copying and pasting the files), you must register Midas.dll yourself. Perform the following steps to enable the 32-bit MIDAS.DLL to run on 64-bit Windows:
1. Copy the midas.dll from the system32 directory (if present) or the system files directory of the BPC RiskManager install directory to:
%systemdrive%\windows\SysWOW64\
2. Open a command prompt and navigate to the %systemdrive%\windows\SysWOW64 directory.
3. Type the following command:
Regsvr32 midas.dll
4. Press ENTER.
=Enable the IIS server to run 32 bit ISAPI dlls=
Depending on your version of IIS you will need to do different things. The primary issue is to make sure that IIS sees the components as 32-bit apps.
Enable the IIS server to run 32-bit ISAPI DLLs by performing the following steps:
*To enable IIS 6.0+ to run 32-bit applications on 64-bit Windows:
1. Open a command prompt and navigate to the %systemdrive%\Inetpub\AdminScripts directory.
2. Type the following command:
cscript.exe adsutil.vbs set W3SVC/AppPools/Enable32BitAppOnWin64 "true"
3. Press ENTER.
*Copy the SurveyManager DLLs generated during configuration to the special 32-bit ISAPI directory on the IIS server:
%windir%\system32\inetsrv.
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
BPC RiskManager Software Suite
=BPC RiskManager Software Suite - Risk, Compliance and Certification=
The BPC RiskManager Software Suite is an enterprise-grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version, with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) was first released in 2008. The latest version was released in 2011. It is updated continuously throughout the year, and clients are encouraged to actively participate in setting the development direction.
The Enrima Edition of BPC RiskManager is a single-user and multi-user risk management, compliance management, financial statements certification, insurance, survey, and incidents & hazards system all in one application. You can manage multiple organisations and simultaneously view governance issues as risks, compliance obligations (legislation, processes and procedures) and compliance topics. It internally manages email-based reminders for a wide variety of user expectations.
BPC RiskManager is available in two product streams (both of which can be configured as single-user desktop or massively multi-user networked solutions). The two product streams are:
{|width=100%
|-
|
* BPC RiskManager V5 (Express)
|[[image:BPCRiskManagerExpressV5.jpg]]
|-
|
* BPC RiskManager V6 (Enrima Edition)
|[[image:BPC_RiskManager_V6261_Main_Screen.jpg|600px]]
|}
=Client Base=
BPC RiskManager clients are headquartered in Australia, Canada, the United Kingdom and the United States of America. Global clients, of course, have offices in many other countries. [http://www.bishopphillips.com Bishop Phillips Consulting] has local offices in both Australia and North America.
The system is used extensively in the education sector, with a very substantial presence in universities in both Australia and Canada and in commercial education providers and colleges in the USA. Other significant client groups include insurance providers (both primary insurers and reinsurers), central government agencies (such as federal, state/province and local government departments), and utilities such as postal, electrical and water providers.
BPC RiskManager implements and substantially extends the risk management standards "AS/NZS 4360:2004 Risk Management" and "ISO 31000", and complies with "ISO/IEC Guide 73 Risk Management – Vocabulary".
BPC RiskManager is not restricted to merely following one interpretation of the risk standards. As a consequence of its long market history, it implements a large number of divergent risk management methodologies. Any combination of one to three assessment groups, each containing ratings for likelihood, consequence and control, is possible. For example, some clients use a risk management methodology that utilises risk budgets with three rating groups - "Inherent, Residual and Target" - where inherent ratings shift with external factors, target shifts with the corporate risk appetite (i.e. a risk budget), and residual floats according to assessment ratings.
Any number of self-assessments in each group can be maintained, together with a separate family of assessments and remediations created by audit/expert users that coexists with management's risk assessments.
Whether your preferred risk methodology uses quantification (quantitative risk analysis) or qualification (qualitative risk analysis), BPC RiskManager directly supports the approach on a per-assessment basis. Terminology (including field names, purposes and screen captions) is fully customisable, so the system can directly implement the corporate risk methodology.
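As a toy illustration of the qualitative style of assessment described above, here is a minimal likelihood-by-consequence rating lookup. The scale names, thresholds and rating bands are invented for the sketch; they are not BPC RiskManager's own scales (which, as noted, are customisable):

```python
# Hypothetical five-point qualitative scales (illustrative only).
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "severe"]

def rate(likelihood, consequence):
    """Map a likelihood/consequence pair to an invented rating band."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (CONSEQUENCE.index(consequence) + 1)
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(rate("likely", "major"))  # 4 * 4 = 16 -> extreme
print(rate("rare", "minor"))    # 1 * 2 = 2  -> low
```

A quantitative methodology would replace the index product with measured frequencies and dollar impacts, but the per-assessment lookup shape stays the same.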
=Get a Fully Functional Evaluation Copy of BPC RiskManager for FREE=
You can get a free no-obligation fully functional copy of BPC RiskManager (Enrima Edition) simply by completing the request form here:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php I want to evaluate BPC RiskManager without obligation for free, please.]
It will work for 60 days, and if you need more time you can contact us and request a longer evaluation. There are no limitations in the evaluation version, and we will even give you support for free while you get it running. It is fully self-installing and will open your first risk database when the installer finishes.
If it isn't right for you, you can just uninstall after the 60 days with no further obligation to us.
=Knowledge Base=
*[[BPC RiskManager V6 Enterprise (Enrima Edition)]]
** [[BPC RiskManager V6 Enterprise (Enrima Edition)| BPC RiskManager Features]]
** [[BPC RiskManager V6.2 Network Architecture]]
** [[RM625ENT Installation Instructions|BPC RiskManager V6.2.5 Installation Instructions]]
** [[BPC RiskManager Frequently Asked Questions|BPC RiskManager - Frequently Asked Questions]]
** [[BPC RiskManager Quick Help With Common Tasks]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
** [[BPC RiskManager V6 on 64 bit Windows]]
*[[BPC SurveyManager - Overview]]
** [[BPC Surveymanager - Key Features]]
** [[BPC SurveyManager - Introduction]]
** [[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
** [[BPC SurveyManager - Creating Surveys - The Page Script]]
** [[BPC SurveyManager - Questions and Input Controls]]
** [[BPC SurveyManager - Creating Surveys - Properties]]
** [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
** [[BPC SurveyManager - The Built In Reports]]
** [[BPC SurveyManager - Advanced Database Configuration Settings]]
** [[BPC SurveyManager - Client Overview]]
** [[BPC SurveyManager - Tutorials - Survey Layouts]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
<noinclude>
[[Category:Featured Article]]
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
dc7ccfc5f7d790cb2dd0c17b50cdde25c14ee35b
275
4
2012-08-30T10:11:06Z
Bishopj
1
wikitext
text/x-wiki
=BPC RiskManager Software Suite - Risk, Compliance and Certification=
The BPC RiskManager Software suite is an Enterprise Grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) was first released in 2008. The latest version was released in 2011. It is updated continuously throughout the year and, as a client, we encourage you to actively participate in the development direction.
The Enrima edition of BPC RiskManager is a single-user and multi-user risk management, compliance management, financial statements certification, insurance, survey, incidents & hazards system all in one application. You can manage multiple organisations and simultaneously view governance issues as risks, compliance obligations (legislation, processes and procedures) and compliance topics simultaneously. It manages email based reminders for a large variety of user expectations internally.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
{|width=100%
|-
|
* BPC RiskManager V5 (Express)
|[[image:BPCRiskManagerExpressV5.jpg]]
|-
|
* BPC RiskManager V6 (Enrima Edition)
|[[image:BPC_RiskManager_V6261_Main_Screen.jpg|600]]
|}
=Client Base=
BPC RiskManager clients are head quartered in Australia, Canada, the United Kingdom and the United States of America. Global clients, of course have offices in many other countries. [http://www.bishopphillips.com| Bishop Phillips Consulting] has local offices in both Australia and North America.
The system is used extensively in the education sector with a very substantial presence in Universities in both Australia and Canada and commercial education providers and colleges in the USA. Other significant client groups include insurance providers (both primary insurers and reinsurers), central government agencies (such as federal & state/province departments and local government), utilities such as postal, electrical and water utilities.
BPC RiskManager implements and substantially extends the risk management standards "AS/NZS 4360:2004 Risk Management" and "ISO 31000", and complies with "ISO/IEC Guide 73: Risk Management – Vocabulary".
The Risk Manager is not restricted to a single interpretation of the risk standards. As a consequence of its long market history, BPC RiskManager implements a large number of divergent risk management methodologies. Any combination of one to three assessment groups, each containing ratings for likelihood, consequence and control, is possible. For example, some clients use a methodology built on risk budgets with three rating groups, "Inherent, Residual and Target", where inherent ratings shift with external factors, the target shifts with the corporate risk appetite (i.e. a risk budget), and the residual floats according to assessment ratings.
Any number of self-assessments in each group can be maintained, together with a separate family of assessments and remediations created by auditors/experts that coexists with management's risk assessments.
Whether your preferred risk methodology uses quantification (quantitative risk analysis) or qualification (qualitative risk analysis), BPC RiskManager directly supports the approach on a per-assessment basis. Terminology (including field names, field purposes and screen captions) is fully customisable, so the system can directly implement the corporate risk methodology.
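The "risk budget" methodology described above can be illustrated with a small sketch. This is purely an illustrative model of qualitative rating groups and a residual-versus-target budget check, assuming a common 5×5 likelihood/consequence matrix; it is not the BPC RiskManager data model or API, and all names in it are hypothetical.

```python
# Hypothetical sketch of a qualitative risk-rating model with three
# assessment groups ("Inherent", "Residual", "Target"). Not BPC
# RiskManager's actual implementation; a generic 5x5 matrix is assumed.

from dataclasses import dataclass

def risk_level(likelihood: int, consequence: int) -> str:
    """Map 1-5 likelihood and consequence ratings to a qualitative band."""
    score = likelihood * consequence  # 1..25
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Moderate"
    if score <= 16:
        return "High"
    return "Extreme"

@dataclass
class Assessment:
    group: str        # e.g. "Inherent", "Residual" or "Target"
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    consequence: int  # 1 (insignificant) .. 5 (catastrophic)

def within_budget(residual: Assessment, target: Assessment) -> bool:
    """Risk-budget check: the residual rating must not exceed the target."""
    return (residual.likelihood * residual.consequence
            <= target.likelihood * target.consequence)

# Inherent ratings shift with external factors; the target encodes the
# corporate risk appetite; the residual floats with assessment ratings.
inherent = Assessment("Inherent", 4, 4)
residual = Assessment("Residual", 2, 3)
target   = Assessment("Target",   2, 2)

print(risk_level(inherent.likelihood, inherent.consequence))  # High
print(risk_level(residual.likelihood, residual.consequence))  # Moderate
print(within_budget(residual, target))                        # False
```

In this toy model a residual score above the target flags the risk as over budget, which mirrors the behaviour the paragraph describes: the target moves with risk appetite while the residual moves with the current assessments.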
=Get a Fully Functional Evaluation Copy of BPC RiskManager for FREE=
You can get a free no-obligation fully functional copy of BPC RiskManager (Enrima Edition) simply by completing the request form here:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php I want to evaluate BPC RiskManager without obligation for free, please.]
It will work for 60 days, and if you need more time you can contact us and request a longer evaluation. There are no limitations in the evaluation version, and we will even give you free support while you get it running. It is fully self-installing and opens your first risk database as soon as the installer finishes.
If it isn't right for you, you can just uninstall after the 60 days with no further obligation to us.
=Knowledge Base=
*[[BPC RiskManager V6 Enterprise (Enrima Edition)]]
** [[BPC RiskManager V6 Enterprise (Enrima Edition)| BPC RiskManager Features]]
** [[BPC RiskManager V6.2 Network Architecture]]
** [[RM625ENT Installation Instructions|BPC RiskManager V6.2.5 Installation Instructions]]
** [[BPC RiskManager Frequently Asked Questions|BPC RiskManager - Frequently Asked Questions]]
** [[BPC RiskManager Quick Help With Common Tasks]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
** [[BPC RiskManager V6 on 64 bit Windows]]
*[[BPC SurveyManager - Overview]]
** [[BPC Surveymanager - Key Features]]
** [[BPC SurveyManager - Introduction]]
** [[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
** [[BPC SurveyManager - Creating Surveys - The Page Script]]
** [[BPC SurveyManager - Questions and Input Controls]]
** [[BPC SurveyManager - Creating Surveys - Properties]]
** [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
** [[BPC SurveyManager - The Built In Reports]]
** [[BPC SurveyManager - Advanced Database Configuration Settings]]
** [[BPC SurveyManager - Client Overview]]
** [[BPC SurveyManager - Tutorials - Survey Layouts]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
<noinclude>
[[Category:Featured Article]]
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
dc7ccfc5f7d790cb2dd0c17b50cdde25c14ee35b
BPC RiskManager Software Suite
0
3
339
275
2012-08-30T10:11:06Z
Bishopj
1
wikitext
text/x-wiki
BPC RiskManager V6 Enterprise (Enrima Edition)
0
2
2
2012-08-30T12:14:57Z
Bishopj
1
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software suite is an enterprise-grade risk management and governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version, with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in two product streams (both of which can be configured as single-user desktop or massively multi-user networked solutions). The two product streams are:
{|width="100%"
|-
|
* BPC RiskManager V5 (Express)
|
|-
|
* BPC RiskManager V6 (Enrima Edition)
|
|}
While there are many similarities between the systems, they are not identical and are not data compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager V6 (Enrima Edition) is maintained on a quarterly update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences lie in the depth and complexity of issues supported and in the expandability of the system. Express is designed to be extremely simple and consequently excludes both depth and breadth beyond the functions of a risk and compliance register. It is therefore able to present almost all of its risk or compliance record data on a single screen.
In the Enrima V6 series this single-screen display is not possible, as both multiple views and considerable ancillary management objects are brought into the system (such as documents, assets, assertions, insurance, claims, etc.).
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational sides of these disciplines. For example, the claims module actually manages insurance claims (not merely registering them), the document management system actually manages documents (not merely cataloguing them), and the compliance and strategy systems actually manage the remediation of each issue.
It functions best as an integrated solution, with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
# Single users or small work groups running off a single-user install switched to server mode.
# Medium-scale enterprises with risk and executive seats on an IT-group-managed (or in-cloud) server and database.
# Large-scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
# Hosting consolidators providing cloud services to many clients in different organisations, with many databases.
Every version of BPC RiskManager (from the single-user install up) is capable of operating in all of these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima) is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable options. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supports multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self ratings, for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
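The roll-up behaviour described above can be sketched in a few lines. This is an illustration of the general technique only, not RiskManager code: the names (RiskNode, rating, children) are invented, and it assumes a simple "worst rating wins" roll-up rule.

```python
# Illustrative sketch only: roll ratings up a tree of risk /
# compliance issues by taking the worst rating in each subtree.
# Names and the 1..5 scale are hypothetical, not the product schema.

class RiskNode:
    def __init__(self, name, rating, children=None):
        self.name = name          # issue title
        self.rating = rating      # e.g. 1 = low ... 5 = extreme
        self.children = children or []

def rolled_up_rating(node):
    """Worst (highest) rating of the node or any descendant."""
    return max([node.rating] + [rolled_up_rating(c) for c in node.children])

tree = RiskNode("IT", 2, [
    RiskNode("Network outage", 4),
    RiskNode("Data", 1, [RiskNode("Backup failure", 3)]),
])
print(rolled_up_rating(tree))  # 4
```

The same recursion serves compliance trees; only the aggregation rule (max, average, weighted) would differ per configuration.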
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link objectives, assertions, questions, processes, legislative/regulatory obligations, causes, risks, consequences, compliance obligations, controls / strategies, actions, risk history, incidents / hazards, people, supporting documentation, information web-sites, and more.
** Full live search-able audit trail of all changes
** Storable searches used through-out the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end user report writer and manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
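The folder behaviour described above, where one risk or compliance issue can inhabit any number of folders at once, is essentially a many-to-many relationship. A minimal sketch with invented names:

```python
# Sketch of many-to-many folder membership: one risk can sit in
# several reporting folders simultaneously. Folder paths and risk
# ids here are invented examples, not RiskManager data.
from collections import defaultdict

folder_members = defaultdict(set)   # folder path -> risk ids

def file_risk(risk_id, *folders):
    """File one risk into any number of folders at once."""
    for f in folders:
        folder_members[f].add(risk_id)

file_risk("R-17", "OHS", "Projects/Plant upgrade")
file_risk("R-09", "OHS")

# The same risk appears under every folder it was filed in:
print(sorted(folder_members["OHS"]))  # ['R-09', 'R-17']
```

Because membership is a lookup rather than a physical location, the same structure can feed searches, dashboards and roll-up reports without duplicating the underlying record.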
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios, and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL
** Spell checking using your MS Word dictionary
** Simple point-and-select search system, with an option for savable advanced query-writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts and 96dpi and 120dpi and other screen resolutions
** Works on all versions of Windows from Windows 2000 up, including Vista and Windows 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed in both general risk and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
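The automatic conversion of compliance responses to risk-equivalent ratings, so both kinds of issue can share one heat map, can be pictured as a simple lookup grid. The mapping below is an invented example showing the shape of such a translation rule, not the product's actual grid:

```python
# Sketch of a grid-style translation rule mapping a compliance
# response to a risk-equivalent rating. All values are invented
# for illustration; real grids are end-user configurable.

COMPLIANCE_TO_RISK = {
    "compliant":       "low",
    "minor breach":    "moderate",
    "material breach": "high",
    "systemic breach": "extreme",
}

def risk_equivalent(compliance_rating):
    """Translate a compliance response onto the risk rating scale."""
    return COMPLIANCE_TO_RISK[compliance_rating]

print(risk_equivalent("material breach"))  # high
```

A formula-based rule would replace the dictionary with a computed mapping; the feature list notes both styles can be mixed.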
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
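The built-in incident workflow (record, assign, review, escalate, resolve, investigate, close) behaves like a small state machine. A sketch of the idea; the allowed transitions below are an illustrative reading of the feature list, not the product's actual rules:

```python
# Sketch of an incident workflow as a transition table. The
# states come from the feature list; the specific allowed moves
# between them are invented for illustration.

TRANSITIONS = {
    "recorded":     {"assigned"},
    "assigned":     {"reviewed", "escalated"},
    "reviewed":     {"escalated", "resolved"},
    "escalated":    {"investigated", "resolved"},
    "resolved":     {"investigated", "closed"},
    "investigated": {"closed"},
    "closed":       set(),
}

def advance(state, new_state):
    """Move an incident to a new state, rejecting illegal jumps."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot move {state} -> {new_state}")
    return new_state

state = "recorded"
for step in ("assigned", "reviewed", "resolved", "closed"):
    state = advance(state, step)
print(state)  # closed
```

Configurable triggers (escalation, investigation) would then simply force particular transitions when an attribute rule fires.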
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for users flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to include additional tables including additional tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub linked off Assertions/Questions (the default for compliance screens), allowing low rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance-risk and compliance-topic philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
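A numeric trigger such as the "likelihood of exceeding x events in a year" is straightforward to compute once an event-rate model is chosen. The sketch below assumes Poisson-distributed event counts, which is one common choice; the source does not say which model RiskManager or an external modelling system would actually use:

```python
# Illustrative only: probability of exceeding x events in a year,
# assuming event counts are Poisson with a given mean annual rate.
# The Poisson assumption is ours, not stated by the product.
import math

def prob_exceeding(x, annual_rate):
    """P(N > x) for N ~ Poisson(annual_rate)."""
    p_at_most_x = sum(
        math.exp(-annual_rate) * annual_rate**k / math.factorial(k)
        for k in range(x + 1)
    )
    return 1.0 - p_at_most_x

# e.g. with 2 incidents expected per year, chance of more than 4:
print(round(prob_exceeding(4, 2.0), 3))  # 0.053
```

A result like this could feed a cause's likelihood rating programmatically via the scripting interface and additional fields mentioned above.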
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where read only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
** Store documents internally or interface to your document management system; web site links available for most objects.
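Full snapshot version control, as used for managed documents above, simply stores a complete time-stamped copy of the record on every change instead of a diff. A minimal sketch with invented field names:

```python
# Sketch of full-snapshot version control: each save appends a
# complete time-stamped copy of the record. Field names and the
# in-memory history list are invented for illustration.
import copy
import datetime

history = []

def save(document):
    history.append({
        "saved_at": datetime.datetime.now(datetime.timezone.utc),
        "snapshot": copy.deepcopy(document),   # full copy, not a delta
    })

doc = {"title": "OHS Policy", "status": "draft"}
save(doc)
doc["status"] = "issued"
save(doc)

# Earlier versions are untouched by later edits:
print(history[0]["snapshot"]["status"])  # draft
```

The trade-off of snapshotting over diffing is extra storage in exchange for trivially simple restoration of any milestone.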
*Work flow engine
** The work flow system serves two purposes: (a) documenting processes with flow charts, and (b) automating RM related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flow steps can have attachments.
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. If you need to collect data on something, BPC SurveyManager will handle it. SurveyManager can even be used to build entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP, AD, NTGroups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list of resources mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface: it does not maintain structured causes lists (though it does support unlimited "contributing factors" descriptions), allows one level of responsibility assignment for issues and actions, and does not include an end-user report writer (although it does support both mail-merge and Word / XL template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like its more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input overrides the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
81bdffb458d1875bbf1156a08c95aa2571f1e615
299
2
2012-08-30T12:14:57Z
Bishopj
1
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software Suite is an enterprise grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version, with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
* BPC RiskManager V5 (Express)
* BPC RiskManager V6 (Enrima Edition)
While there are a lot of similarities between the systems, they are not identical and not data compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager (Enrima Edition) is maintained on a quarterly (every 3 months) update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences are in the depth and complexity of issues supported and in the expandability of the system; here the two products differ significantly. Express is designed to be extremely simple and consequently omits depth and breadth beyond the functions of a risk and compliance register. It is therefore able to present almost all of its risk or compliance record data on a single screen.
In the Enrima V6 series this single screen display is not possible, as both multiple views and considerable ancillary management objects are brought into the system (such as documents, assets, assertions, insurance, claims, etc).
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits in audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational sides of these disciplines. For example, the claims module actually manages insurance claims (not merely registers them), the document management system actually manages documents (not merely catalogues them), and the compliance and strategy systems actually manage the remediation of each issue.
It functions best as an integrated solution with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
# Single users or small work groups running off a single user install switched to server mode.
# Medium scale enterprises with risk and executive seats on an IT group managed server / in-cloud and database.
# Large scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
# Hosting consolidators providing cloud services to many clients in different organisations with many databases.
Every version of BPC RiskManager (from the single user install up) can operate in all these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima) is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable options. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supports multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self ratings, for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link objectives, assertions, questions, processes, legislative/regulatory obligations, causes, risks, consequences, compliance obligations, controls / strategies, actions, risk history, incidents / hazards, people, supporting documentation, information web-sites, and more.
** Full live search-able audit trail of all changes
** Storable searches used through-out the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end user report writer and manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios, and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste to/from Word and Excel
** Powerful administrator-only import/export tool
** Search/chart driven general user export in various formats including Excel
** Spell checking using your MS Word dictionary
** Simple point and select search system but with an option for savable advanced query writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts and 96dpi and 120dpi and other screen resolutions
** Works on all versions of Windows from Windows 2000 up, including Vista and Windows 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed in both general risk and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis and audit evidence snapshots.
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis and audit evidence snapshots.
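The "mixed formula and grid assignable ratings" above can be illustrated with a short sketch. This is generic illustrative code, not BPC's shipped configuration: the 3x3 scale, the grid contents and the multiplicative formula are our own assumptions.

```python
# Two common ways to derive a risk rating from likelihood and consequence.
# Both the grid and the formula thresholds below are illustrative examples.

GRID = [  # GRID[likelihood - 1][consequence - 1], both on a 1..3 scale here
    ["Low",    "Low",    "Medium"],
    ["Low",    "Medium", "High"],
    ["Medium", "High",   "High"],
]

def grid_rating(likelihood: int, consequence: int) -> str:
    """Look the rating up directly in a user-assigned grid."""
    return GRID[likelihood - 1][consequence - 1]

def formula_rating(likelihood: int, consequence: int) -> str:
    """Compute the rating from a simple multiplicative formula instead."""
    score = likelihood * consequence
    return "High" if score >= 6 else "Medium" if score >= 3 else "Low"
```

A system supporting both simultaneously lets some rating scales be driven by a formula while others are assigned cell-by-cell in a grid.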
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for users flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to cover additional tables, including tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub linked off Assertions/Questions (the default for compliance screens) allowing low rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance risk and compliance topic philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
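The numeric risk event trigger concept above ("likelihood of exceeding x events in a year") can be made concrete with a standard frequency model. The sketch below is ours, not BPC code: it assumes a Poisson event count, which is a common but by no means the only choice for such modelling.

```python
from math import exp, factorial

def prob_exceeding(threshold: int, mean_events_per_year: float) -> float:
    """P(N > threshold) for a Poisson-distributed annual event count N."""
    # Sum the probability of 0..threshold events, then take the complement.
    cdf = sum(
        exp(-mean_events_per_year) * mean_events_per_year**k / factorial(k)
        for k in range(threshold + 1)
    )
    return 1.0 - cdf

# e.g. averaging 2 incidents a year, the chance of more than 4 in a year:
likelihood = prob_exceeding(4, 2.0)  # roughly 0.053
```

A value like this, fed in through an additional field such as a "risk trigger value", is the sort of programmatic likelihood setting the integration points above are describing.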
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where read only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
** Store documents internally or interface to your document management system; web site links are available for most objects.
*Work flow engine
** The work flow system serves two purposes: (a) documenting processes with flow charts, and (b) automating RM related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flow steps can have attachments.
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. If you can think of something you need to collect data on, BPC SurveyManager will handle it. SurveyManager can even be used to build entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP,AD, NTGroups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list of resources, mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
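The three resource domains above can be sketched as a simple classification with an access check. This is illustrative only; the domain names and the `may_log_in` helper are ours, not RiskManager identifiers.

```python
from enum import Enum

class ResourceDomain(Enum):
    """The three domains of resources described above (names illustrative)."""
    SURVEY_RESPONDER = "survey responder"  # access to specific surveys only
    KNOWN_PERSON = "known person"          # emailable, assignable, no login
    USER = "user"                          # full system access allowed

def may_log_in(domain: ResourceDomain) -> bool:
    """Only resources in the user domain may access the system itself."""
    return domain is ResourceDomain.USER
```

Separating "known persons" from "users" is what allows responsibilities to be assigned and reminders emailed to people who never log in, without granting them access.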
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface. It does not maintain structured causes lists (though it does support unlimited "contributing factors" descriptions), allows one level of responsibility assignment for issues and actions, and does not have an end-user report writer (although it does support both mail-merge and Word / Excel template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like its more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input overrides the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
81bdffb458d1875bbf1156a08c95aa2571f1e615
337
299
2012-08-30T12:14:57Z
Bishopj
1
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software suite is an Enterprise Grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
* BPC RiskManager V5 (Express)
* BPC RiskManager V6 (Enrima Edition)
While there are a lot of similarities between the systems, they are not identical and not data compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager (Enrima Edition) is maintained on a quarterly (every 3 months) update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences are in the depth and complexity of issues supported and the expandability of the system, and here the two products differ significantly. Express is designed to be extremely simple and consequently excludes both depth and breadth beyond the functions of a risk and compliance register. It is therefore able to present almost all of its risk or compliance record data on a single screen.
In the Enrima V6 series this single-screen display is not possible, as both multiple views and considerable ancillary management objects are brought into the system (such as documents, assets, assertions, insurance, claims, etc).
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits in audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational sides of these disciplines. For example, the claims module actually manages insurance claims (not merely registering them), the document management system is capable of actually managing documents (not merely cataloguing them), the compliance and strategy systems actually manage the remediation of the issue, etc.
It functions best as an integrated solution with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
# Single user or small work groups running off a single user install switched to server mode.
# Medium scale enterprises with risk and executive seats on an IT group managed (or in-cloud) server and database.
# Large scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
# Hosting consolidators providing cloud services to many clients in different organisations, with many databases.
Every version of BPC RiskManager (from the single user install up) comes capable of operating in all these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima), is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable solutions. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supports multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self ratings for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
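The multi-model rating store described above can be pictured as a per-item map from rating model to rating dimensions. The sketch below is a generic illustration under our own naming assumptions (`RiskItem`, `set_rating`); it is not RiskManager's internal schema.

```python
from dataclasses import dataclass, field

# Each rating model (inherent, residual, auditor, self, ...) rates all three.
DIMENSIONS = ("likelihood", "impact", "risk")

@dataclass
class RiskItem:
    """Hypothetical shape for one item's rating store (names illustrative)."""
    name: str
    ratings: dict = field(default_factory=dict)  # model -> {dimension: value}

    def set_rating(self, model: str, dimension: str, value: int) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        self.ratings.setdefault(model, {})[dimension] = value

item = RiskItem("Data breach")
item.set_rating("inherent", "likelihood", 4)
item.set_rating("residual", "likelihood", 2)  # models coexist on one item
item.set_rating("auditor", "risk", 3)
```

The point of the structure is that the inherent, residual, auditor and self views are all held at once, rather than one overwriting another.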
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link objectives, assertions, questions, processes, legislative/regulatory obligations, causes, risks, consequences, compliance obligations, controls/strategies, actions, risk history, incidents/hazards, people, supporting documentation, information web sites and more.
** Full live searchable audit trail of all changes
** Storable searches used throughout the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end-user report writer, with manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste to/from Word and Excel
** Powerful administrator-only import/export tool
** Search/chart driven general user export in various formats including Excel and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
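The master-child roll-up described in this section can be sketched as a recursion over a tree of issues. This is an illustration only; the data layout and the "take the worst descendant rating" rule are our assumptions, and the real roll-up rules are configurable in-product.

```python
# Minimal sketch of a master-child roll-up: a master's effective rating is
# the worst (highest) rating found on itself or anywhere beneath it.

children = {  # master -> child issues (a node may appear under many masters)
    "Enterprise": ["IT", "Finance"],
    "IT": ["Outage", "Data breach"],
    "Finance": ["Fraud"],
}
own_rating = {"Outage": 2, "Data breach": 4, "Fraud": 3}

def rolled_up(node: str) -> int:
    """Highest rating of the node itself or any of its descendants."""
    sub = [rolled_up(child) for child in children.get(node, [])]
    return max([own_rating.get(node, 0)] + sub)
```

Because folders and master-child links are separate structures, the same issue can contribute to several such roll-ups at once.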
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste to/from Word and Excel
** Powerful administrator-only import/export tool
** Search/chart driven general user export in various formats including Excel
** Spell checking using your MS Word dictionary
** Simple point and select search system but with an option for savable advanced query writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts and 96dpi and 120dpi and other screen resolutions
** Works on all versions of Windows from Windows 2000 up, including Vista and Windows 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed in both general risk and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis and audit evidence snapshots.
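The automatic conversion of compliance responses to risk-equivalent ratings can be pictured as a translation table from the compliance scale onto the risk scale used by the heat map. The table below is purely illustrative; the actual translation rules are configurable in the product, not fixed as shown here.

```python
# Hypothetical translation table; real rules are end-user configurable.
COMPLIANCE_TO_RISK = {
    "compliant": "Low",
    "partially compliant": "Medium",
    "non-compliant": "High",
    "materially non-compliant": "Extreme",
}

def risk_equivalent(compliance_response: str) -> str:
    """Map a compliance response onto the heat map's risk rating scale."""
    return COMPLIANCE_TO_RISK[compliance_response.lower()]
```

Once translated, compliance issues and risks share one rating scale, which is what lets them appear together on a single heat map and in comparative tables.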
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis and audit evidence snapshots.
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for user flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to include additional tables including additional tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub-linked off Assertions/Questions (the default for compliance screens), allowing low-rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance risk and compliance topics philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where read only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
* Store documents internally or interface to your document management system, web site links available for most objects.
*Work flow engine
** The work flow system supports two purposes (a) documenting processes with flow charts, and (b) automating RM related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flows steps can have attachments.
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full-scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. If there is something you need to collect data on, BPC SurveyManager will handle it; it can even be used to build entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP,AD, NTGroups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
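The automatic access escalation just described can be sketched roughly as below. The access levels and the "lift to update where responsible" rule are assumptions for illustration; the product's actual escalation logic is internal and configurable.

```python
# Hypothetical access ladder; the real product's levels may differ.
ACCESS_ORDER = ["none", "read", "update", "create"]

def effective_access(base_access: str, is_responsible: bool) -> str:
    """Escalate access on an individual record when the user has
    responsibility assigned but would otherwise lack (sufficient) access."""
    if is_responsible and ACCESS_ORDER.index(base_access) < ACCESS_ORDER.index("update"):
        return "update"
    return base_access

print(effective_access("none", True))   # responsible user gains access
print(effective_access("read", False))  # unrelated user keeps base rights
```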
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list of resources mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface. It does not maintain structured causes lists (though it does support unlimited "contributing factors" descriptions), allows one level of responsibility for assignment of issues and actions, and does not have an end-user report writer (although it does support both mail-merge and Word / Excel template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like its more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input overrides the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
495
337
2012-08-30T12:14:57Z
Bishopj
1
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software suite is an Enterprise Grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
{|width="100%"
|-width="100%"
|
* BPC RiskManager V5 (Express)
|
|-
|
* BPC RiskManager V6 (Enrima Edition)
|
|}
While there are a lot of similarities between the systems, they are not identical and not data compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager (Enrima Edition) is maintained on a quarterly (every 3 months) update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences are in the depth and complexity of issues supported and in the expandability of the system. Express is designed to be extremely simple and consequently excludes both depth and breadth beyond the functions of a risk and compliance register. It is therefore able to present almost all its risk or compliance record data on a single screen.
In the Enrima V6 series this single-screen display is not possible, as both multiple views and considerable ancillary management objects (such as documents, assets, assertions, insurance, claims, etc.) are brought into the system.
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits in audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational functions of these disciplines. For example, the claims module actually manages insurance claims (not merely registering them), the document management system is capable of actually managing documents (not merely cataloguing them), the compliance and strategy systems actually manage the remediation of the issue, etc.
It functions best as an integrated solution with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
#Single user or small work groups running off a single-user install switched to server mode.
#Medium-scale enterprises with risk and executive seats on an IT-group-managed server / in-cloud and database.
#Large-scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
#Hosting consolidators providing cloud services to many clients in different organisations with many databases.
Every version of BPC RiskManager (from the single user install up) comes capable of operating in all these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima), is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable solutions. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supports multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self ratings for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
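The roll-up of ratings through trees of risks and compliance issues can be sketched as follows. This is a minimal illustration only, assuming a worst-of-children roll-up rule and a four-point severity scale; RiskManager's actual roll-up formulas are end-user configurable, and all names here are hypothetical.

```python
from dataclasses import dataclass, field

# Assumed ordered severity scale; roll-up takes the worst rating in the tree.
SCALE = ["Low", "Medium", "High", "Extreme"]

@dataclass
class Risk:
    name: str
    rating: str = "Low"
    children: list = field(default_factory=list)

def rolled_up_rating(risk: Risk) -> str:
    """Worst rating across this risk and all its descendants."""
    worst = risk.rating
    for child in risk.children:
        child_rating = rolled_up_rating(child)
        if SCALE.index(child_rating) > SCALE.index(worst):
            worst = child_rating
    return worst

master = Risk("Data centre outage", "Low", [
    Risk("UPS failure", "High"),
    Risk("Cooling failure", "Medium"),
])
print(rolled_up_rating(master))  # -> High
```

A live roll-up of this kind is what lets a master risk's heat-map position change the instant a child's rating is edited.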
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risk members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and the master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link Objectives, assertions, questions, processes, legislative/regulator obligation, causes, risks, consequences, compliance obligations, controls / strategies, actions, risk history, incidents / hazards, people, supporting documentation, and information web-sites, and more.
** Full live search-able audit trail of all changes
** Storable searches used throughout the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end user report writer and manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste from / to Word and Excel
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including Excel and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
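The idea above, that a single risk or compliance issue can inhabit any number of folder trees at once, can be sketched roughly as below. The folder paths and risk identifiers are invented for illustration and do not reflect RiskManager's internal storage.

```python
from collections import defaultdict

# Each folder path maps to the set of risk IDs filed under it.
folders = defaultdict(set)

def file_risk(risk_id: str, *folder_paths: str) -> None:
    """File one risk into any number of folders simultaneously."""
    for path in folder_paths:
        folders[path].add(risk_id)

# The same risk appears in two reporting frameworks at once.
file_risk("RSK-001", "Strategic/IT", "Projects/ERP Upgrade")
file_risk("RSK-002", "Strategic/IT")

print(sorted(folders["Strategic/IT"]))       # -> ['RSK-001', 'RSK-002']
print(sorted(folders["Projects/ERP Upgrade"]))  # -> ['RSK-001']
```

Because membership is many-to-many, each tree can roll up and report the same underlying records without duplicating them.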
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios, and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste from / to Word and Excel
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including Excel
** Spell checking using your MS Word dictionary
** Simple point-and-select search system, with an option for savable advanced query-writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts, and for 96dpi, 120dpi and other screen resolutions
** Works on all versions of Windows from Windows 2000 up, including Vista and Windows 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed in both general risk and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risk members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and the master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, audit evidence snapshots.
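The automatic conversion of compliance responses to risk-equivalent ratings, so that compliance issues and risks share one heat map, can be pictured as a simple translation table. The statuses and ratings below are assumptions for illustration; in the product the mapping is defined by the configured rating rules.

```python
# Hypothetical translation table from compliance status to a
# risk-equivalent rating for combined heat maps and comparative tables.
COMPLIANCE_TO_RISK = {
    "Compliant": "Low",
    "Partially compliant": "Medium",
    "Non-compliant": "High",
    "Material breach": "Extreme",
}

def risk_equivalent(compliance_status: str) -> str:
    """Translate a compliance response into a risk-scale rating."""
    return COMPLIANCE_TO_RISK[compliance_status]

print(risk_equivalent("Non-compliant"))  # -> High
```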
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risk members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, audit evidence snapshots.
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
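The rule-driven review, escalation and investigation triggers described above can be sketched as predicate–action pairs evaluated against each incident. The rules, field names and severity scale below are hypothetical; in the product they are end-user configurable.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    incident_type: str
    severity: int        # assumed scale: 1 (minor) .. 5 (critical)
    injuries: int = 0

# Illustrative user-configured rules: (predicate, action) pairs, checked in order.
RULES = [
    (lambda i: i.injuries > 0, "investigate"),
    (lambda i: i.severity >= 4, "escalate"),
    (lambda i: i.incident_type == "near-miss", "review"),
]

def triggered_actions(incident: Incident) -> list:
    """Return every handling step the configured rules fire for this incident."""
    return [action for predicate, action in RULES if predicate(incident)]

print(triggered_actions(Incident("injury", severity=4, injuries=1)))
# -> ['investigate', 'escalate']
```

Firing a rule would, in the product, assign the corresponding workflow step (review, escalation or investigation) to the relevant participant.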
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for user flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to include additional tables including additional tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub-linked off Assertions/Questions (the default for compliance screens), allowing low-rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance risk and compliance topics philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
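The numeric risk event triggers above support concepts like the "likelihood of exceeding x events in a year". One common way to compute such a figure (an assumption here; the product leaves the model to the configured trigger or external modelling system) is a Poisson model of the annual event count:

```python
import math

def prob_exceeding(x: int, annual_rate: float) -> float:
    """P(N > x) where the annual event count N is Poisson(annual_rate)."""
    # P(N <= x) = sum_{k=0}^{x} e^{-rate} * rate^k / k!
    p_at_most_x = sum(
        math.exp(-annual_rate) * annual_rate**k / math.factorial(k)
        for k in range(x + 1)
    )
    return 1.0 - p_at_most_x

# With 2 incidents expected per year, chance of seeing more than 4:
print(round(prob_exceeding(4, 2.0), 3))  # -> 0.053
```

A likelihood rating could then be set programmatically by thresholding this probability against the rating scale's bands.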
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where read only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
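A heat map of the kind described is, at its core, a count of items per likelihood × impact cell, from which each cell drills through to its member risks. A minimal sketch, with an assumed 3-point scale and invented risk IDs:

```python
from collections import Counter

# (id, likelihood, impact) on an assumed 1..3 scale for each axis.
risks = [
    ("RSK-001", 3, 3),
    ("RSK-002", 1, 2),
    ("RSK-003", 3, 3),
]

# Count risks in each likelihood x impact cell of the grid.
heat = Counter((likelihood, impact) for _, likelihood, impact in risks)

for likelihood in (3, 2, 1):  # print highest-likelihood row first
    row = [heat.get((likelihood, impact), 0) for impact in (1, 2, 3)]
    print(likelihood, row)
```

Drill-through then amounts to listing the risks whose (likelihood, impact) pair matches the selected cell.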
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
* Store documents internally or interface to your document management system, web site links available for most objects.
*Work flow engine
** The work flow system supports two purposes (a) documenting processes with flow charts, and (b) automating RM related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flows steps can have attachments.
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. Think of something you need to collect data on the BPC SurveyManager will handle it. The SurveyManager can be used to write entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP,AD, NTGroups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list if resources mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface, does not maintain structured causes lists (but does have unlimited "contributing factors" descriptions) and allows one level of responsibility for assignment of issues and actions, and does not have an end-user report writer (although it does support both mail-merge and word / XL template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like it's more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input overrides the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
81bdffb458d1875bbf1156a08c95aa2571f1e615
3
2
2018-10-28T08:48:00Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software suite is an Enterprise Grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in two product streams (both of which can be configured as a single-user desktop or a massively multi-user networked solution). The two product streams are:
* BPC RiskManager V5 (Express)
* BPC RiskManager V6 (Enrima Edition)
While there are a lot of similarities between the systems, they are not identical and not data compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager (Enrima Edition) is maintained on a quarterly (every 3 months) update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences lie in the depth and complexity of issues supported and in the expandability of the system. Express is designed to be extremely simple and consequently excludes both depth and breadth beyond the functions of a risk and compliance register; it is therefore able to present almost all of its risk or compliance record data on a single screen.
In the Enrima V6 series this single-screen display is not possible, as both multiple views and considerable ancillary management objects (such as documents, assets, assertions, insurance, claims, etc.) are brought into the system.
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits in audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational sides of these disciplines. For example, the claims module actually manages insurance claims (not merely registering them), the document management system is capable of actually managing documents (not merely cataloguing them), the compliance and strategy systems actually manage the remediation of the issue, and so on.
It functions best as an integrated solution, with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
# Single user or small work groups running off a single user install switched to server mode.
# Medium scale enterprises with risk and executive seats on an IT group managed server / in-cloud and database.
# Large scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
# Hosting consolidators providing cloud services to many clients in different organisations, with many databases.
Every version of BPC RiskManager (from the single user install up) is capable of operating in all of these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima), is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable solutions. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supports multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self ratings, for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
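The roll-up of ratings through trees of risks and compliance issues can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the class and function names are invented, and a simple "worst child wins" convention stands in for the product's configurable roll-up rules.

```python
# Illustrative sketch only - RiskManager's internal roll-up rules are
# configurable and not published; "worst child wins" is our assumption.
from dataclasses import dataclass, field

@dataclass
class RiskNode:
    name: str
    rating: int = 0                      # e.g. 1 = low ... 5 = extreme
    children: list = field(default_factory=list)

def rolled_up_rating(node: RiskNode) -> int:
    """A folder reports the worst rating found anywhere beneath it."""
    return max([node.rating] + [rolled_up_rating(c) for c in node.children])

ops = RiskNode("Operations", 2,
               [RiskNode("IT outage", 4), RiskNode("Staffing", 1)])
print(rolled_up_rating(ops))  # 4 - the IT outage dominates the folder
```

Other conventions (weighted averages, likelihood-times-impact grids) slot into the same recursive shape.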
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risk members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and the master-child hierarchical network. Both provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as in pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link Objectives, assertions, questions, processes, legislative/regulator obligation, causes, risks, consequences, compliance obligations, controls / strategies, actions, risk history, incidents / hazards, people, supporting documentation, and information web-sites, and more.
** Full live search-able audit trail of all changes
** Storable searches used through-out the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end user report writer and manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste to / from Word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios, and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste to / from Word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL
** Spell checking using your MS Word dictionary
** Simple point and select search system but with an option for savable advanced query writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts and 96dpi and 120dpi and other screen resolutions
** Works on all versions of Windows from Windows 2000 up, including Vista and Windows 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed in both general risk and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risk members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and the master-child hierarchical network. Both provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as in pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
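The automatic translation of compliance responses into risk-equivalent ratings, so that both kinds of item plot on one heat map, can be sketched as below. The mapping table is purely hypothetical - in the product these translation rules are end-user configurable grids, not a fixed dictionary.

```python
# Hypothetical translation table - the real rules are user configurable.
COMPLIANCE_TO_RISK = {
    "compliant": 1,        # maps onto the low end of the risk scale
    "minor breach": 2,
    "partial breach": 3,
    "material breach": 4,
    "non-compliant": 5,    # plots alongside extreme risks
}

def heat_map_cell(likelihood: int, impact: int) -> tuple:
    """Risks (rated directly) and compliance issues (translated first)
    land in the same (likelihood, impact) cell."""
    return (likelihood, impact)

risk_cell = heat_map_cell(4, 3)
compliance_cell = heat_map_cell(COMPLIANCE_TO_RISK["material breach"], 3)
print(risk_cell == compliance_cell)  # True - one shared heat map cell
```

The point is only that a single comparative view becomes possible once compliance responses are expressed on the risk scale.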
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
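The risk tolerance feature listed above (tolerances driving differential reporting and automated condition reporting) can be illustrated with a minimal sketch. The field names and the simple "rating above tolerance" rule are assumptions, not the product's actual schema.

```python
# Illustrative only - field names and the comparison rule are assumptions.
def breaches_tolerance(current_rating: int, tolerance: int) -> bool:
    """Flag a risk whose current rating sits above its assigned tolerance."""
    return current_rating > tolerance

register = [
    {"risk": "Data breach", "rating": 5, "tolerance": 3},
    {"risk": "Late filing", "rating": 2, "tolerance": 2},
]
flagged = [r["risk"] for r in register
           if breaches_tolerance(r["rating"], r["tolerance"])]
print(flagged)  # ['Data breach']
```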
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
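The rule-driven escalation described in this list can be sketched as a simple attribute match. The attribute names ("severity", "injury", "site") are invented for illustration - real incident attributes are user configurable.

```python
# Sketch of rule-driven escalation; attribute names are invented examples.
def should_escalate(incident: dict, rules: list) -> bool:
    """Return True if any configured rule matches the incident's attributes."""
    return any(all(incident.get(k) == v for k, v in rule.items())
               for rule in rules)

rules = [
    {"severity": "high"},                    # any high-severity incident
    {"injury": True, "site": "warehouse"},   # any warehouse injury
]
print(should_escalate({"severity": "high", "site": "office"}, rules))  # True
print(should_escalate({"severity": "low", "injury": False}, rules))    # False
```

Triggers for review and investigation follow the same pattern against their own rule sets.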
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for user flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to include additional tables including additional tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub linked off Assertions/Questions (the default for compliance screens) allowing low rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance risk and compliance topic philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
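A worked example of the "likelihood of exceeding x events in a year" idea attached to a cause trigger: a Poisson event-count model is our assumption here, since the product itself only stores the trigger value and lets scripts or external models set the likelihood.

```python
# Assumed Poisson model for "likelihood of exceeding x events in a year";
# RiskManager stores the trigger value and delegates the modelling.
from math import exp, factorial

def prob_exceeding(x: int, rate: float) -> float:
    """P(N > x) for a Poisson event count with the given annual rate."""
    return 1.0 - sum(rate**k * exp(-rate) / factorial(k)
                     for k in range(x + 1))

# With 2 incidents expected per year, how likely are more than 4?
p = prob_exceeding(4, 2.0)
print(round(p, 3))  # 0.053
```

A script could compare this probability against band boundaries to set a likelihood rating programmatically.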
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where a read-only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
* Store documents internally or interface to your document management system, web site links available for most objects.
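The snapshot version control described for managed documents (a full time-stamped copy of the relevant records for each change) can be sketched as follows. The class shape and storage layout are assumptions for illustration, not the product's actual schema.

```python
# Minimal sketch of snapshot-style version control; layout is assumed.
import copy
import datetime

class ManagedDocument:
    def __init__(self, text: str):
        self.text = text
        self.history = []            # list of (timestamp, snapshot) pairs

    def update(self, new_text: str):
        # Keep a complete time-stamped copy of the record before each change.
        stamp = datetime.datetime.now(datetime.timezone.utc)
        self.history.append((stamp, copy.deepcopy(self.text)))
        self.text = new_text

doc = ManagedDocument("Policy v1")
doc.update("Policy v2")
doc.update("Policy v3")
print(len(doc.history))   # 2 snapshots, one per change
print(doc.history[0][1])  # 'Policy v1'
```

Full snapshots trade storage for trivially simple restoration - any historical copy is complete in itself.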
*Work flow engine
** The work flow system serves two purposes: (a) documenting processes with flow charts, and (b) automating RM-related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flow steps can have attachments.
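An executed work flow with per-step assignees and attachments, as described above, might look like the data shape below. All step names, assignee identifiers and file names are invented for illustration; the real schema is not shown here.

```python
# Hypothetical data shape for an executed work flow (all names invented).
workflow = {
    "name": "Quarterly control review",
    "steps": [
        {"action": "open RM control screen", "assignee": "risk.officer",
         "attachments": []},
        {"action": "collect evidence", "assignee": "line.manager",
         "attachments": ["evidence.xlsx"]},
        {"action": "sign off", "assignee": "audit.lead",
         "attachments": []},
    ],
}

# Different individuals participate in different steps of one execution.
assignees = {step["assignee"] for step in workflow["steps"]}
print(len(assignees))  # 3
```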
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full-scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. If you can think of something you need to collect data on, BPC SurveyManager will handle it. SurveyManager can even be used to build entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP, AD, NT Groups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
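The automatic access escalation described above (a user gains access to individual records for which they hold responsibility) can be sketched as below. The access ladder and level names are assumptions - the real model spans risk types, business groups and business units.

```python
# Sketch of "responsibility escalates access"; the ladder is an assumption.
LEVELS = ["none", "read", "update", "create", "admin"]

def effective_access(granted: str, is_responsible: bool) -> str:
    """A viewer below update rights gains update on records they own;
    explicit grants at or above that level are untouched."""
    if is_responsible and LEVELS.index(granted) < LEVELS.index("update"):
        return "update"
    return granted

print(effective_access("read", is_responsible=True))   # 'update'
print(effective_access("admin", is_responsible=True))  # 'admin'
print(effective_access("read", is_responsible=False))  # 'read'
```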
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list of resources, mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface. It does not maintain structured causes lists (though it does have unlimited "contributing factors" descriptions), allows one level of responsibility for assignment of issues and actions, and does not have an end-user report writer (although it does support both mail-merge and Word / XL template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like its more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input outweighs the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
Business Process Reengineering - Introduction
2012-08-30T13:36:46Z
Bishopj
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this article. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, the style and the detail provided, as the original was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time. As the charting method is a fairly involved, we will also be providing examples of systems charted using the method. This chapter is the introduction chapter, which provides a reasonably good overview of the approach.
</noinclude>
==Definition, Purposes & Outcomes==
=== Definition ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BusinessComponentObjectives.png]]
</div>
</td>
</tr>
</table>
Business Process Reengineering (BPR) ''is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective.'' Purpose and objective differ in that the purpose describes the reason for the process, while the objective is the reason for the reengineering of the process. The objective is generally the optimisation of the quality-cost relationship, but may be any other objective(s) defined by the stakeholders of the processes revised.
Hammer, a popular author of reengineering texts, defines reengineering as: ''the fundamental rethinking and radical redesign of business processes to bring about dramatic improvements in performance.'' Essentially, he argues that BPR is about major change in an organisation, yet perhaps this reflects a rather naive preoccupation with “big-is-better”. BPR can be about constrained, well focussed, small scale redesign as much as about monolithic reconstruction.
BPR is not new, although many consultants in the field try to claim otherwise. It is simply one more evolutionary step in a long stream of management change processes that includes Statistical Quality Control, TQM, Internal Audit, Work & Job Redesign, Goal Focussed Management, Workflow Management, Systems Analysis, etc. The theoretical foundation of BPR is quite old, and can be seen particularly in the work in Systems Analysis undertaken at the University of Lancaster since 1969. What is new about BPR is its holistic view of the organisation and its attempt to capture the management philosophies that preceded it into a single integrated method.
Perhaps due in part to its conglomerate nature there is little standardisation among BPR approaches nor agreement on what is, or is not, BPR. With a few notable exceptions, the literature tends to be long on promises and case studies claiming stratospheric success but short on detail. This manual attempts to provide both a definition of BPR and an integrated strategy of analytic methods for performing it.
Although significantly different in approach from the work in systems analysis of the University of Lancaster, the development of our method owes a fundamental debt to the conceptual insight of that team. We have borrowed concepts, however, from a wide domain of disciplines ranging from accounting to computer science, and from psychology to marketing. It is not intended that the analytic tools of the method be cast in stone by this manual. No approach is perfect, and if this method is not seen to embrace its own continuous improvement then it will be as flawed as the business systems it purports to improve.
=== Purpose of BPR ===
In a BPR exercise we consider all aspects of managerial responsibility - from the organisation design through to the procedures and practices adopted. The BPR project does not attempt to define the purpose or the objectives of the organisation's systems; rather, once these are defined, it provides the machine to deliver that purpose and those objectives.
The method used in the reengineering process must deliver a complete description of that machine. This includes the organisational structure, the behavioural paradigm, duties, controls, performance indicators, policies, procedures, data management, continuous improvement procedures, computer systems, etc.
It is easy to confuse the activity of BPR with that of computer systems implementation, since many of the forces driving a BPR exercise beg computerisation as the easiest way to achieve apparently dramatic improvement. This is a mistake. Implementing computerised solutions is not the purpose of BPR, although a computerised solution is one of the tools a reengineer may use to implement some part of the reengineered processes.
Nor should we rely on computer solutions in all cases. While it is often true that the computerisation of a process will deliver significant improvement in the ratio of output volume and quality to human effort (input), when viewed from a holistic perspective (which includes infrastructure, investment, opportunity cost, and solution responsiveness to change) the computerised solution may not always be as attractive as first thought. Notwithstanding these comments, a planned change in information systems provides a common and sensible catalyst for the BPR programme.
Essentially, the purpose of BPR is to build business systems able to deliver the organisation’s mission while optimising some given combination of objectives. In building the system, we must apply appropriate analytic techniques and appropriate implementation strategies. The weaker the constraints on the process applied by management - ie the wider the range of options left on the table for consideration - the more successful (in terms of optimising the objectives) the outcome is likely to be. The purpose of the system either will or will not be satisfied by the system design options made available - the quality of that delivery is measured by the objectives.
=== Outcomes ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BPR Components.png]]
</div>
</td>
</tr>
</table>
The result of the BPR project is a working system tuned to optimise some combination of objectives in delivering the stakeholder’s purpose. It is defined by a set of system descriptions, or views of the system, which consider, categorise and structure the matter from a number of angles.
Illustrated in the figure are the key components of a system description produced by the BPR method detailed in this manual. There are many differences between the approach presented here and the conventional literature on BPR, both in method and outcome. Henceforth we shall refer to this approach as the Bishop BPR (or BBPR) method.
The method produces a process and organisational rework that is naturally integrated with risk and compliance governance systems and (in its detailed delivery) uses a unique charting system which blends computational and human processes together in a common structured and testable form.
We have used and progressively improved the method detailed in this text since the late 1980's, and it has been applied in the delivery of consultancies to several hundred organisations covering the non-profit, government and corporate sectors. It has been applied in its pure form as a process reengineering system, in reduced forms as an internal audit systems audit process and a business systems design model (for design and development of business computing systems), and with various strategy enhancements as a business strategy planning tool. While this author has brought it to each consulting organisation with which he has worked or which he has led over the years, it has benefitted from the ideas and contributions of many colleagues.
We shall explore the BBPR method throughout this text and provide the tools and techniques necessary to deliver the BBPR system description. Here we provide a brief introduction to the ten key descriptive outputs in the figure:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Descriptive Output </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Key Performance Indicators & Benchmarks / Targets
</td>
<td>
Performance management - how we directly manage and monitor the achievement of the system’s purposes
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Internal System Integrity - how we directly monitor and manage the achievement of the system’s objectives.
</td>
</tr>
<tr>
<td>
Organisation Design
</td>
<td>
The objects/entities and their roles with their managerial, behavioural and reporting relationships identified.
</td>
</tr>
<tr>
<td>
Decision Tree
</td>
<td>
The tree (or Information Map) charts the decisions required by entities in the system, the relationships between the decisions and their information needs
</td>
</tr>
<tr>
<td>
Process & Workflow Charts
</td>
<td>
The sequence of activities making up the functional components of a system.
</td>
</tr>
<tr>
<td>
Event Calendar
</td>
<td>
The timing of events and their cycles and the processes they trigger
</td>
</tr>
<tr>
<td>
Client Provider Service Agreements
</td>
<td>
The objects/entities comprising the system seen as pairs of clients and providers (of services, data, goods, etc) emphasising their respective duties. The approach establishes notional contracts or service agreements which outline each entity’s responsibilities in the client provider relationship.
</td>
</tr>
<tr>
<td>
Data Management
</td>
<td>
The data stores in the system, what the data represents and how this data is managed
</td>
</tr>
<tr>
<td>
Continuous Improvement System
</td>
<td>
The strategy for delivering system improvement on a continuous basis.
</td>
</tr>
<tr>
<td>
Implementation and Change Strategy
</td>
<td>
The approach to managing the implementation of the reengineered system in the organisation and particularly managing people through the change process.
</td>
</tr>
</table>
The system description is only the ‘record’ of the real outcome of the BBPR approach - that of business performance improvement through better business processes. The BBPR method produces a system designed to optimise certain predefined objectives (such as cost of inputs to quality) while the system description attempts to formalise that system and provide the mechanisms for monitoring performance, and maintaining and tuning that system.
In the model organisation, the approach starts with the strategic plan of the organisation (or unit) being reviewed and uses that plan’s components (vision, mission, key result areas, critical success factors, strategies, key performance indicators, targets and timeframe) to focus the design effort with purpose and objectives. In the real organisation, planning is generally something less than perfect, so we must employ a wider net in defining the focus of the BPR exercise. Once armed with a focus, a wide variety of sources and analytic tools are employed to build a business system which will best achieve management’s plans.
==The Analytic Method & Its Tools==
===The Structure===
At the heart of the BBPR method is a set of ‘analytic tools’ (methods) that help define views of a system that highlight the particular properties in which we are interested. The key components are illustrated in Figure 1.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1"
width="100%" >
<tr>
<td>
<div class="center">
[[Image:BPRAnalyticStructure.png]]
</div>
</td>
<td>
<table >
<tr>
<td>
The analytic method is based on a simple premise:
A System is comprised of Recursive Objects only. Any system can be described by four types of Objects: Entities, Data Stores, Maps (Processes), and Quality Managers (Control/Performance Criteria).
The simple dataflow diagram of Figure 3 shows a basic system. Entity A provides data to Entity B via a single process (under the control of Entity C) which maps the data from one data store to another. The performance of the mapping process is managed by the quality control process under the control of Entity D. The quality control process is approximately equivalent to an engineering feedback loop.
</td>
</tr>
<tr>
<td>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:BPC4KeyChartObj.png]]
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
Mapping is a computational and mathematical term which describes the mechanism by which data is transformed from one state or form to another. In a business process that transformation might be as simple as the act of transcribing an invoice from its physical (eg. paper based) state to an electronic record in an accounts payable system through the process of data entry. The data in its input state may be said to have been mapped to another state through some process of transformation.
The computer engineering reader will recognise the similarity of the diagram to a dataflow model.
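As a minimal sketch of the four-object model described above, the invoice transcription example might look like the following. The supplier names, field layout and integrity rule are all invented for illustration and are not drawn from any BPC product:

```python
# Sketch: a Map transforms data between two Data Stores, while a
# Quality Manager acts as the feedback loop checking each result.
# All names and data here are illustrative only.

paper_tray = [  # Data Store: physical in-tray of paper invoices
    {"supplier": "Acme Pty Ltd", "amount": "1,250.00"},
    {"supplier": "Widgets Inc", "amount": "80.00"},
]
payables_db = []  # Data Store: electronic accounts payable records


def data_entry_map(invoice):
    """Map: transcribe a paper invoice into an electronic record."""
    return {
        "supplier": invoice["supplier"].strip(),
        "amount_cents": int(round(float(invoice["amount"].replace(",", "")) * 100)),
    }


def quality_manager(record):
    """Quality Manager: verify the mapped record meets integrity standards."""
    return bool(record["supplier"]) and record["amount_cents"] > 0


for invoice in paper_tray:       # data flows from one store, through the map...
    record = data_entry_map(invoice)
    if quality_manager(record):  # ...past the quality check, into the other store
        payables_db.append(record)

print(len(payables_db))  # 2: both invoices pass the quality check
```

The quality manager here is deliberately trivial; the point is only that it monitors the map's output rather than performing the transformation itself.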
The logical starting point of a BPR exercise may seem to be the Performance Criteria definition (assuming that the overall purpose of the system being improved is already known), but it is important to note that each of the four definition activities should continue concurrently throughout the project. It is not unusual for the Performance Assumptions to change as a result of the other BPR activities, and it is virtually certain where the project is a Strategic Planning exercise.
This mixing of strategy planning and BPR may at first seem a little unusual, but the impact of the BPR analysis can be to cause a fundamental rethink of the business strategy itself. Where the focus is merely to re-design a specific, targeted transactional process such a strategic impact is, perhaps, less likely, but where the targeted business process is the core of the business, such an impact is surprisingly common.
In particular the KPI definition both commences and completes a project. The table lists these key analytic tools and provides an overview of the activity. These tool classes are typical of those employed, but not necessarily the only ones appropriate to any given project.
===The Modelling Tools===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Class </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
KPI Assessment
</td>
<td>
In an ideal organisation the planning documents establish the focus for all activity. Our search for the focus of the BPR must therefore begin with the planning and policy role of management. Where available, sources to be reviewed include:
<ul>
<li> Statement of System Objectives
<li> Corporate Plan
<li> Budgets
<li> Benchmarks
</ul>
Organisations are rarely ideal and other techniques will need to be applied, depending on the culture of the organisation being reviewed. Such techniques may include SWOTC (Strengths, Weaknesses, Opportunities, Threats, and Constraints) analysis, benchmarking, corporate goal setting, interview, etc and may need to be undertaken to establish the purpose and key objectives of the system being reviewed.
Armed with this information the first view of Key Performance Indicators (KPIs) appropriate to the system should be definable. In a sense, the KPIs are like the gauges and alarms of an airplane, car or any other mechanical device. They alert the system’s ‘pilot’ to the status of the machinery, and allow rapid identification and adjustment of the system if anything ‘goes wrong’. In this sense the selection of the correct KPIs is critical: if there is no gauge for a problem occurring, it may not be detected until the problem is obvious without the help of a gauge - and possibly too late to be repaired.
In this first, top level assessment the KPIs will generally be whole-of-system measures. As other components of the BBPR are resolved (such as the Process Mapping and the Client Provider Analysis) the process detail level will emerge, which becomes the organisation’s operational ‘alarm system’. The BBPR has a specific design paradigm called Active Control Management to implement this KPI based control system in a cost efficient manner.
</td>
</tr>
<tr>
<td>
Client-Provider Analysis
</td>
<td>
A technique adopted from TQM which classifies the entities creating, managing and consuming data in the system as clients (data recipients) or providers (data suppliers) of one another. In performing the analysis we turn to information sources such as:
<ul>
<li> External Clients & Providers
<li> Internal Clients & Providers
<li> Organisation Structure
<li> Roles & Duty Statements
<li> Implied Contracts
</ul>
While it is important to understand the organisational structure as it stands - because, among other things, it dictates the client-provider relationships - it should not necessarily bind the designer. An organisational model reflects legislative, cultural and historic traditions that may be critical to retain, as well as (possibly) many years of legitimate experience among the management team in the industry and market in which you are working. It must not simply be disregarded in the BPR process in favour of radical change.
Indeed, the author generally advises against too ambitious an organisational change, unless change is part of the culture or intended management strategy. In some organisations, frequent re-organisation is part of the management ethos, and such an approach is as legitimate and successful a management model as any other. One must, nevertheless, be careful in taking the existing structure (or management ethos!) as a given - particularly where the organisation is seeking a competitive edge beyond mere marginal improvement in efficiency or quality.
The BBPR method uses its own method of analysing organisational structures called The Organisational Community Network Model (which is one of the reasons that the BPR method frequently impacts organisational design). This approach is appropriate even where the organisation will substantially retain its original shape after the BPR project, as it leads to a highly efficient and focussed "desktop" test process architecture, and where the option for organisational redesign is on the table, it can lead to a very radical outcome.
</td>
</tr>
<tr>
<td>
Stakeholder Analysis
</td>
<td>
The direct stakeholders are addressed in the Client Provider analysis, while the indirect stakeholders are addressed here - in the Stakeholder Analysis.
Essentially the indirect stakeholders provide the organisation with drivers & constraints. Typical sources include:
<ul>
<li> Legislative Obligations
<li> Cultural Expectations
<li> Reporting Obligations
</ul>
</td>
</tr>
<tr>
<td>
Data Store Catalogue
</td>
<td>
The catalogue is the BPR equivalent to a data base administrator’s data dictionary. It describes all the data stored by the system, and the data stores themselves. It specifies the access rights, custodianship rules, data integrity standards and the static relationships between data stores.
Data stores include all the data managed by the system and methods of temporary or permanent storage. Data stores include electronic (abstract) and physical storage such as documents, files, filing cabinets, in trays, bins, etc.
Data Integrity Standards must be established system wide to which data stores adhere. The standards should be consistent with those applied by quality managers.
</td>
</tr>
<tr>
<td>
Process Mapping
</td>
<td>
Perhaps the most involved of all the activities of the BPR exercise. Process mapping is a general name for a variety of procedural analysis and design activities. The information sources include:
<ul>
<li> Functional Description
<li> Cradle to Grave Tracing - System Walkthrough
<li> Manuals
<li> List of Data Sources & Destinations
<li> Client / Provider Mapping
<li> Data Load Analysis (transaction volumes, processing rates, etc)
</ul>
The key activity during process mapping is the production of the Data flow diagrams and supporting documentation. This is done in two streams simultaneously:
<ol>
<li> Existing systems
<li> Redesigned Systems
</ol>
The data flow charts form the basis to the reengineering. They combine all aspects of the other analytic tools and describe the algorithm of the system.
In process mapping we treat all processes of a system as operating concurrently and control their timing and behaviour through messages, which take the form of either data or events.
The process map is not complete until the system data loading has been assessed for each process. The data load analysis involves examining data volumes and processing times, throughput assessment, reliability rates, etc.
</td>
</tr>
<tr>
<td>
Decision Tree / Information Mapping
</td>
<td>
The system handles not just data but information. Data becomes information when it exhibits certain quality characteristics: it must be appropriate to its purpose and reliable (where reliability implies standards of timeliness, accuracy, completeness, etc). Information mapping involves matching the data managed by a system to the decisions that must be made in operating that system. It requires, in part, the construction of a detailed decision tree spanning the entities in the system over time.
Necessarily, it also implies the existence of an events calendar which should link into the data flow diagrams. The information map includes the information needs of the quality managers, and may be expressed in whole or in part through the Active Control Management design paradigm detailed later in this text.
The information map will require consideration of issues including:
<ul>
<li> Information Requirements
<li> Event Calendar
<li> Reporting Obligations
<li> Performance Control Management System (eg ACM)
</ul>
</td>
</tr>
</table>
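The ‘gauges and alarms’ role that the KPI Assessment row above assigns to KPIs can be sketched as a simple threshold monitor. The indicator names, targets and tolerances below are invented for illustration; a real Active Control Management implementation would be considerably richer:

```python
# Hypothetical KPI monitor: compare measured values against targets and
# raise an 'alarm' when variance from plan exceeds a tolerance.

KPI_TARGETS = {
    # indicator: (target, tolerance as a fraction of the target)
    "invoices_processed_per_day": (500, 0.10),
    "data_entry_error_rate": (0.01, 0.50),
}


def kpi_alarms(measurements):
    """Return the indicators whose variance from target exceeds tolerance."""
    alarms = []
    for name, measured in measurements.items():
        target, tolerance = KPI_TARGETS[name]
        variance = abs(measured - target) / target
        if variance > tolerance:
            alarms.append(name)
    return alarms


# Throughput is 16% below plan (outside the 10% tolerance), so it alarms;
# the error rate is within its tolerance band, so it does not.
print(kpi_alarms({"invoices_processed_per_day": 420,
                  "data_entry_error_rate": 0.012}))
```

In terms of the model, `kpi_alarms` plays the quality manager role: it does not transform the system's data, it only measures variance from plan so a ‘pilot’ can act on the escalation.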
===Organisational Representation (Introduction)===
When we think of organisational representation, we traditionally think of the hierarchical organisational chart. Resembling an inverted tree, the organisational chart provided by almost all charting packages represents a cross between a representation of physical or geographic position and reporting lines - and tells us very little about how a business organisation is really organised. At best it leads to a bureaucratic, semi-accurate organisational view, and at worst it is wildly incorrect, such as in Matrix organisations.
As with many traditional diagramming systems, it is horrendously inadequate for all but the grossest simplification of an organisation.
In the BBPR, we use a Community Network model which provides far richer analysis and directly represents the positioning of an organisation within its market and community.
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
Community domains can be defined as required for the purpose of the analysis, but in the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of eight top-level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is at once the source of greatest opportunity, the least organisable community, and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
You can read more about the Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis [[The Stakeholder Community Network Model|Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis here]].
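As a minimal sketch, the eight communities can be captured as an enumeration. Only the community names come from the SCNM03 model described above; the government agency example at the end is hypothetical:

```python
from enum import Enum


class Community(Enum):
    """The eight top-level stakeholder communities of the SCNM03 model."""
    CLIENTS = "clients"        # she who receives the service
    CUSTOMERS = "customers"    # she who pays for the service
    SUPPLIERS = "suppliers"
    PARTNERS = "partners"      # split from suppliers in this model
    CUSTODIANS = "custodians"  # multi-year wealth/service mandate (IT, Treasury, ...)
    WORKFORCE = "workforce"    # includes contractors
    GOVERNANCE = "governance"  # board, auditors, regulators, shareholders, ...
    PUBLIC = "public"          # residual, least organisable community


# Hypothetical government agency where payer and recipient differ,
# illustrating the client/customer split the model insists on:
service_recipient = Community.CLIENTS   # a citizen receiving a benefit
service_payer = Community.CUSTOMERS     # the funding department

assert service_recipient is not service_payer
print(len(Community))  # 8
```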
===The Process Representation (Introduction)===
The full process charting model forms a language that can be represented either diagrammatically or descriptively. There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements are for diagrammatic efficiency. The core of the charting system revolves around only a few symbols, and the full model merely expands on these to provide a richer descriptive set and more analytic detail with fewer diagrammatic elements. The full model is described in [[Business Process Reengineering - Process Charting|advanced charting]].
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataFlow.png]]
</div>
</td>
</tr>
</table>
In the figure, '''''data flows''''' along, and in the direction of, the arrows between the entities, data stores and maps, while control data flows principally into, and out of, the quality manager. The crossed-rectangular shapes are entities, while the open ended rectangular shapes are (file) data stores. The maps and quality managers are shown by circles.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Entity.png]]
</div>
</td>
</tr>
</table>
'''''Entities''''' are equivalent to people, machines, or processes external to the system being examined. In a sense they are givens in the system analysis, in that their functioning is assumed to be of a fixed standard and excluded from redesign. Those aspects of behaviour that can be redesigned are represented by the other three object types.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataStore.png]]
</div>
</td>
</tr>
</table>
'''''Data Stores''''' are objects in which data resides from time to time. The stores are not the actual data itself, merely a representation of it. In the ‘object oriented analysis’ world, data exists in the form of messages between objects - for example, two people (entities) talking to each other (exchanging messages). Messages are essentially transient, so for data to be available for any length of time it must be stored. Data Stores include documents, files, database records, desk in-trays, etc.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Map.png]]
</div>
</td>
</tr>
</table>
'''''Maps''''' are objects which perform an operation on data other than storing it. They transport data, change data, analyse data, update a database record, produce a report, authorise a transaction, etc. The term ‘map’ means ‘mapping data from one state to another’. Maps perform the transformations of a system, but they are concerned with data. For data to become information it must have the added dimension of quality.
'''''Quality Managers''''' are objects which administer the performance of the system. The quality manager does not transform the data handled by the system, but rather manages the system itself. Quality managers rely on the KPIs of the system and its component parts measuring variance from plan and performing the appropriate remedial action such as tuning Map parameters or escalating the problem.
In one sense the '''''Quality Manager''''' is a kind of process, but its responsibility is to modify the behaviour of the system in accordance with the purpose and objectives of the system and is therefore fundamentally different from a Map which represents the embodiment of that purpose. In another sense the Quality Manager is a kind of reactive data store - it both stores data and responds to it. The quality manager deals principally with control data, although this is by no means exclusive or necessary.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:RecursiveShapes.png]]
</div>
</td>
</tr>
</table>
'''''Objects are recursive''''', and therefore may contain more objects of the same or different type. For example, a file contains documents (both data stores), a document contains fields (more data stores), an organisation may contain people (both entities), an organisation (entity) may contain functions (maps), while a business cycle such as Purchasing (a map) may contain an entire system of roles (entities), procedures (maps), KPI measures (quality managers) and documents (data stores).
Processes (maps & quality managers) are concurrent. This means that, unless restrained by a lack of input (data to process) or awaiting an event, each process is trying to operate at the same time as every other process. This reflects reality - people do not follow a neat sequential order when interacting with one another unless explicitly constrained to do so. Instead, they operate simultaneously, at different speeds to one another, and in self chosen patterns. To model the world correctly we must also model this behaviour.
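The recursion described above can be sketched with a single composite class: every object may contain further objects of any of the four kinds. The Purchasing example and all object names below are invented for illustration:

```python
# Sketch of the recursive object model: one composite class, four kinds.

class SystemObject:
    def __init__(self, kind, name, children=()):
        assert kind in {"entity", "data_store", "map", "quality_manager"}
        self.kind, self.name = kind, name
        self.children = list(children)  # recursion: objects contain objects

    def count(self, kind):
        """Recursively count this object and its descendants of a given kind."""
        return (self.kind == kind) + sum(c.count(kind) for c in self.children)


# A Purchasing cycle (a map) containing roles, procedures, measures, documents:
purchasing = SystemObject("map", "Purchasing", [
    SystemObject("entity", "Purchasing Officer"),
    SystemObject("map", "Raise Order", [
        SystemObject("data_store", "Order Form"),
    ]),
    SystemObject("quality_manager", "Order Approval KPI"),
    SystemObject("data_store", "Supplier File"),
])

print(purchasing.count("data_store"))  # 2: Order Form and Supplier File
print(purchasing.count("map"))         # 2: Purchasing itself and Raise Order
```

The sketch deliberately leaves out concurrency; in the full model each map and quality manager would run as an independent process constrained only by the availability of its input data and events.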
You can read more about the process charting method in [[Business Process Reengineering - Process Charting]].
===The Analysis Tools===
The designed system will be documented with data flow charts, client-provider “performance agreements”, ACM control checklists, a decision-to-data-source matrix, and task schedule sheets cross-referenced to the data-flow diagrams. These facilities can be provided either electronically or on paper, as desired by the client. The degree to which the processes and documentation can be automated is restricted only by the client’s computer system capabilities and software.
====Process Representation Using Software====
There are a number of practical charting tools that can be used. For 2D representation, we recommend either ABC Flowcharter or Visio, while for 3D client walkthrough of a designed system we recommend an MMORG such as SecondLife (http://SecondLife.com), or TrueSpace (http://www.caligari.com/).
With respect to the 2D tools, both suggested products have their strengths and weaknesses. Visio has excellent Microsoft desktop application integration, and is directly supported by a number of finance and business applications as a business process modelling environment. ABC Flowcharter has (in our view) a shorter learning curve, an excellent interface, and good integration with MS documentation tools.
In choosing a 2D tool you should consider:
The tool should support diagrams:
* consisting of many linked pages
* with recursive (self referential) structures
* graphic object drill-through (ie. you can select an object such as a process which summarises many sub-processes and link to one or more pages that represent the steps in the process)
* containing graphic objects with unique id's, text descriptions, and other user defined data attributes that can be stored with them (eg transaction volumes, costs, probabilities, risk assessment, etc)
* editable splines for connecting shapes (bendable curved lines)
* with point and click editing
* with user defined shapes and image import
* that represent the Bishop Phillips Process Modelling shapes.
* containing URL links at least at the graphical object (including lines) level (ie. linking an object to an internet/intranet page)
* that can be imported into text documentation and presentation tools (MS Word / MS PowerPoint, etc) compatible with your business environment (standard desktop)
* that ideally can be scripted with a scripting language that allows active simulation or calculations of events and transactions occurring (optional - but a good idea)
* that can be generated directly from an electronic drafting whiteboard (optional, but saves you a lot of time).
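One way to apply the checklist above is a simple weighted scoring matrix. The criteria keys, weights and 0-5 scores below are invented for illustration only; they are not an assessment of Visio, ABC Flowcharter or any other real product:

```python
def weighted_score(weights, scores):
    """Weighted sum used to compare candidate charting tools."""
    return sum(weights[c] * scores.get(c, 0) for c in weights)

# Criteria drawn from the checklist above; optional items get lower weights.
weights = {"multi-page": 1.0, "drill-through": 0.9, "object-data": 0.8,
           "url-links": 0.7, "doc-import": 0.8, "scripting": 0.4}

# Hypothetical candidate tools scored 0-5 per criterion.
tool_a = {"multi-page": 5, "drill-through": 4, "object-data": 5,
          "url-links": 4, "doc-import": 5, "scripting": 3}
tool_b = {"multi-page": 4, "drill-through": 5, "object-data": 4,
          "url-links": 5, "doc-import": 4, "scripting": 5}

for name, s in (("Tool A", tool_a), ("Tool B", tool_b)):
    print(name, round(weighted_score(weights, s), 2))
```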
3D tools are a much newer approach. The biggest advantage of a 3D modelling tool is that you can 'walk' the client through the business process. Possibly the only practical & right-priced ones available at the moment are SecondLife and Caligari TrueSpace. Over the years we have tried a number of approaches to this idea; until the advent of SecondLife, we built our 3D models in TrueSpace. TrueSpace is a serious 3D modelling environment and, while simple to learn as 3D graphical modelling environments go, it is not a tool for novices. Although it produces spectacular 3D models, it is less suited to walking the client through the model than to presenting a canned 3D visualisation of the business model. Recently it has gained a MORG add-on/representation, and linked with one of a number of games engines it can be used quite successfully as a walk-through environment.
With the advent of SecondLife (and the growing number of similar MORG systems that are either appearing now or soon to appear on the market), a more practical and faster solution is available (albeit less visually stunning in production quality). A SecondLife-based model allows you and your client to literally enter the model as people and walk or fly around the components of your system, watching transactions flow visually through the process, events occur, control systems filter errors, and output being produced at varying transaction rates. The building interface is fast and simple to learn, and the scripting environment allows you to rapidly simulate many different scenarios.
With such an approach you can literally have your client see the transactions flow through a virtual representation of a system (a bit like the movie 'Tron'), or build a representation of their physical environment (such as a building, or office floor) and simulate the behaviour of the people and the control system operating. The world-wide scale of MORG users means you can contract the development work to inexpensive professional builders, instead of building it yourself.
The great weakness of these environments is that they are not yet real time in terms of construction (whereas a 2D chart can (almost) be built in real time as your client describes their processes), and documentation in conventional 2D media is not a natural consequence of a 3D simulation (whereas 2D charts can be included in text-based documentation with ease).
In choosing a 3D tool you should consider:
* speed of construction of 3D elements (ideally you will need a 'primitive' rather than 'mesh' or 'nurbs' based building solution for speed)
* scripting language and particle system support (essential)
* ability to script primitives (objects) concurrently on a massive scale
* message passing support
* ability to create avatars (or primitars) that can interact with the model (ie. walk around inside it)
* availability of low cost developers/builders
* ease of installation of appropriate client software
* ownership and permanence of the 3D models built
* support for importation of textures (graphic images), sounds, animations, 3D objects, movies, etc.
* real time in-world multi participant speech support
* simplicity of visitor navigation (i.e. how hard is it for a first-time user to just walk around in the 3D environment)
* URL (web page) linking
* URL web server data sending and receiving (eg. can you request and receive data from an off-system database?)
* web page display on objects (not commonly available)
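The concurrent-scripting and message-passing criteria above can be illustrated with a minimal actor-style sketch, in which each scripted primitive owns a mailbox and reacts to messages independently. The 'door' and 'sensor' objects are hypothetical:

```python
import queue
import threading

class Primitive:
    """A scripted 3D object: runs its own script, reacts to messages."""
    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()

    def send(self, other, msg):
        # Message passing: deliver into the other primitive's mailbox.
        other.mailbox.put((self.name, msg))

    def run(self, log):
        # Each primitive's script runs concurrently in its own thread.
        while True:
            sender, msg = self.mailbox.get()
            if msg == "stop":
                break
            log.append(f"{self.name} received {msg!r} from {sender}")

log = []
door = Primitive("door")
sensor = Primitive("sensor")
t = threading.Thread(target=door.run, args=(log,))
t.start()
sensor.send(door, "open")    # visitor approaches: the sensor scripts the door
sensor.send(door, "stop")
t.join()
print(log)
```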
====Analysis Support====
A number of analytic tools or design paradigms are incorporated into the ABPR. A few of these are introduced in the table:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Or Design Paradigm
</th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Data Flow analysis
</td>
<td>
A method of charting systems enhanced by BPC with concepts drawn from process mapping, predicate calculus, TQM, CPM (Operations Research), Entity-Relationship modelling, and a number of other analytic methods. This method excels at depicting complex data flows and process interactions simply. It traps control issues, timing constraints, events and information flows.
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Although not critical to the process, ACM provides significant advantages in process efficiency. ACM is a BPC-specific control design philosophy based on experiences in the areas of Corporate Governance and organisations adopting control devolution and/or multi-skilling. ACM represents a significant shift from the control paradigm of periodic audit review with heavy transaction-based testing conventionally adopted by Internal Audit, and from traditional views of control system design relying on segregation of duties.
To build an ACM control system, we begin by expanding the definition of controls beyond accuracy, authorisation, completeness (etc.) to include process timeliness, achievement of business plan targets and other business objectives. Next we identify the controls appropriate for monitoring and collect all the associated control data into a common recording format (and ideally an automated storage system - such as MS-Access). Lastly we build a reporting framework for system performance monitoring, built on the quality managers.
ACM produces control compliance information in a steady stream for the senior executive and board, rather than the intermittent or cyclic audit reviews often used. The compliance component of any Internal Audit unit is re-focussed to ensuring the ongoing reliability of the control compliance reports. The control system is integrated into the business processes using the Client-Provider model developed at the start of the project. ACM reporting can be automated, if desired.
</td>
</tr>
<tr>
<td>
Network Organisation Reduction
</td>
<td>
The process of defining the organisation into the community network structure forces the reduction of many diverse strategies and procedures into a clearly identifiable set of activities required for one of 11 broad service communities. The networks imply the stakeholders in an enumerable set of collective Client Provider Service Agreements.
</td>
</tr>
<tr>
<td>
Process Dictionary
</td>
<td>
Used to assist in the identification of opportunities for streamlining cross- and intra-organisation systems, the Process Dictionary catalogues and describes each process within any business function in accordance with an agreed selection of descriptive terms.
In this way, it assists in highlighting common processes and in assessing whether it is possible and appropriate for these to be combined or shared in some suitable form.
</td>
</tr>
</table>
==Summary: Characteristics of the BBPR Method==
Business Process Reengineering (BPR) is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective. This chapter has provided an introduction to the concept of BPR and an overview of the ABPR method. Both of these will be developed throughout the text.
Essentially BPR represents the focussing of an enormous body of theory and expertise underpinning management science into a single, all-powerful redesign strategy. Such a panacea does not exist, and we must be careful to use BPR where the fundamental organisational characteristics are present. These might include:
<ul>
<li> A discernible consistent set of purpose(s) and objective(s) exist
<li> Design options are not restricted out of the solution set (ie. an acceptable solution is achievable despite imposed constraints)
<li> Senior management authorise and staff support the project and the process
<li> The analytic tools match the problem set
<li> The BPR consultant has credibility with the staff
</ul>
The BPR process is best seen as a framework encompassing a wide array of analytic tools and organisation/management design paradigms. Many of these tools and paradigms can be expected to change over time as management theory is revised, while some are central to the BBPR framework. The central tools and paradigms include:
<ul>
<li> KPI’s & Quality Management
<li> Data Flow Analysis
<li> Object Oriented Process Engineering
<li> Client Provider Analysis
<li> Information Mapping
<li> Data Cataloguing
</ul>
As an extremely simplified explanation, the BBPR method uses KPI’s to focus the system, and classifies the participants in the system as clients and/or providers of data (etc) to one another. The client/provider relationships are revised using a separate information (decision) map reflecting the information needs of the direct and indirect stakeholders. With the revised client/provider relationships defined and the data and information needs catalogued, process maps can be defined which reflect only what is needed to implement the system.
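The cataloguing step just described can be sketched as a comparison of the data flows the client/provider relationships supply against the data the decision map actually needs. All names, flows and decisions here are hypothetical, invented purely to illustrate the idea:

```python
# Client/provider catalogue: (provider, client, data provided).
relationships = [
    ("Supplier", "Purchasing", "invoice"),
    ("Supplier", "Purchasing", "marketing brochure"),
    ("Purchasing", "Accounts Payable", "approved purchase order"),
    ("Accounts Payable", "Management", "spend report"),
]

# Information (decision) map: the data each stakeholder decision needs.
decisions = {
    "approve payment": {"invoice", "approved purchase order"},
    "review spend": {"spend report"},
}

# Process maps need only implement flows that some decision consumes.
provided = {data for _, _, data in relationships}
needed = set().union(*decisions.values())
print("flows no decision consumes:", provided - needed)
print("data the redesign must supply:", needed - provided)
```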
For the sake of clarity, in this introductory chapter, we have excluded many of the more complex issues facing BPR. One of these is the positioning of organisation design in a BPR exercise. It is a significant issue as it is inextricably linked to the culture of the organisation being reengineered. It is usually included to some extent in the design options, but rarely is the organisation design entirely at the discretion of the reengineer. Accordingly we must treat it as both a given structural component of the client provider analysis and an output of the process mapping (design phase).
Clearly the process mapping will impact the organisation structure, which will in turn affect the client provider relationships, while the client provider relationships affect the process mapping, and so on. It is this reason, and a number of similar circular relationships among the analytic components, that necessitates the simultaneous analysis & design activity of the ABPR method.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this article. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, the style and the detail provided, as the original was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time. As the charting method is fairly involved, we will also be providing examples of systems charted using the method. This chapter is the introduction chapter, which provides a reasonably good overview of the approach.
</noinclude>
==Definition, Purposes & Outcomes==
=== Definition ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BusinessComponentObjectives.png]]
</div>
</td>
</tr>
</table>
Business Process Reengineering (BPR) ''is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective.'' Purpose and Objective differ in that the purpose describes the reason for the process while the objective is the reason for the reengineering of the process. The objective is generally the optimisation of the quality - cost relationship, but may be any other objective(s) defined by the stakeholders of the processes revised.
Hammer, a popular author of reengineering texts, defines reengineering as: the fundamental rethinking and radical redesign of business processes to bring about dramatic improvements in performance. Essentially, he argues that BPR is about major change in an organisation, yet perhaps this reflects a rather naive preoccupation with “big-is-better”. BPR can be about constrained, well-focussed, small-scale redesign as much as about monolithic reconstruction.
BPR is not new, although many consultants in the field try to claim otherwise. It is simply one more evolutionary step in a long stream of management change processes that includes Statistical Quality Control, TQM, Internal Audit, Work & Job Redesign, Goal Focussed Management, Workflow Management, Systems Analysis, etc. The theoretical foundation in BPR is quite old and can be seen particularly in the work in Systems Analysis undertaken at the University of Lancaster since 1969. What is new about BPR is its holistic view of the organisation and its attempt to capture the management philosophies that preceded it into a single integrated method.
Perhaps due in part to its conglomerate nature there is little standardisation among BPR approaches nor agreement on what is, or is not, BPR. With a few notable exceptions, the literature tends to be long on promises and case studies claiming stratospheric success but short on detail. This manual attempts to provide both a definition of BPR and an integrated strategy of analytic methods for performing it.
Although significantly different in approach from the work in systems analysis of the University of Lancaster, the development of our method owes a fundamental debt to the conceptual insight of that team. We have borrowed concepts, however, from a wide domain of disciplines ranging from accounting to computer science, and from psychology to marketing. It is not intended that the analytic tools of the method be cast in stone by this manual. No approach is perfect, and if this method is not seen to embrace its own continuous improvement then it will be as flawed as the business systems it purports to improve.
=== Purpose of BPR ===
In a BPR exercise we consider all aspects of managerial responsibility - from the organisation design through to the procedures and practices adopted. The BPR project does not attempt to define the purpose or the objectives of the organisation's systems; rather, once these are defined, it provides the machine to deliver that purpose and those objective(s).
The method used in the reengineering process must deliver a complete description of that machine. This includes the organisational structure, the behavioural paradigm, duties, controls, performance indicators, policies, procedures, data management, continuous improvement procedures, computer systems, etc.
It is easy to confuse the activity of BPR with that of computer systems implementation, since many of the forces driving a BPR exercise beg computerisation as the easiest way to achieve apparently dramatic improvement. This is a mistake. Implementing computerised solutions is not the purpose of BPR, although a computerised solution is one of the tools a reengineer may use to implement some component of the reengineered processes.
Nor should we rely on computer solutions in all cases. While it is often true that the computerisation of a process will deliver significant improvement in the ratio of output volume and quality to human effort (input), when viewed from a holistic perspective (which includes infrastructure, investment, opportunity cost, and solution responsiveness to change) the computerised solution may not always be as attractive as first thought. Notwithstanding these comments, a planned change in information systems provides a common and sensible catalyst for the BPR programme.
Essentially, the purpose of BPR is to build business systems able to deliver the organisation’s mission while optimising some given combination of objectives. In building the system, we must apply appropriate analytic techniques and appropriate implementation strategies. The weaker the constraints on the process applied by management - ie the wider the range of options left on the table for consideration - the more successful (in terms of optimising the objectives) the outcome is likely to be. The purpose of the system either will or will not be satisfied by the system design options made available - the quality of that delivery is measured by the objectives.
=== Outcomes ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BPR Components.png]]
</div>
</td>
</tr>
</table>
The result of the BPR project is a working system tuned to optimise some combination of objectives in delivering the stakeholder’s purpose. It is defined by a set of system descriptions, or views of the system, which consider, categorise and structure the matter from a number of angles.
Illustrated in Figure are the key components of a system description produced by the BPR method detailed in this manual. There are many differences between the approach presented here and the conventional literature on BPR, both in method and outcome. Henceforth we shall refer to this approach as the Bishop BPR (or BBPR) method.
The method produces a process and organisational rework that is naturally integrated with risk and compliance governance systems and (in its detailed delivery) uses a unique charting system which blends computational and human processes together in a common structured and testable form.
We have used and progressively improved the method detailed in this text since the late 1980's, and it has been applied in the delivery of consultancies to several hundred organisations covering the non-profit, government and corporate sectors. It has been applied in its pure form as a process reengineering system, in reduced forms as an internal audit systems audit process and a business systems design model (for the design and development of business computing systems), and with various strategy enhancements as a business strategy planning tool. While this author has brought it to each consulting organisation with which he has worked or which he has led over the years, it has benefitted from the ideas and contributions of many colleagues.
We shall explore the BBPR method throughout this text and provide the tools and techniques necessary to deliver the BBPR system description. Here we provide a brief introduction to the ten key descriptive outputs in the figure:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Descriptive Output </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Key Performance Indicators & Benchmarks / Targets
</td>
<td>
Performance management - how we directly manage and monitor the achievement of the system’s purposes
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Internal System Integrity - how we directly monitor and manage the achievement of the system’s objectives.
</td>
</tr>
<tr>
<td>
Organisation Design
</td>
<td>
The objects/entities and their roles with their managerial, behavioural and reporting relationships identified.
</td>
</tr>
<tr>
<td>
Decision Tree
</td>
<td>
The tree (or Information Map) charts the decisions required by entities in the system, the relationships between the decisions and their information needs
</td>
</tr>
<tr>
<td>
Process & Workflow Charts
</td>
<td>
The sequence of activities making up the functional components of a system.
</td>
</tr>
<tr>
<td>
Event Calendar
</td>
<td>
The timing of events and their cycles and the processes they trigger
</td>
</tr>
<tr>
<td>
Client Provider Service Agreements
</td>
<td>
The objects/entities comprising the system seen as pairs of clients and providers (of services, data, goods, etc) emphasising their respective duties. The approach establishes notional contracts or service agreements which outline each entity’s responsibilities in the client provider relationship.
</td>
</tr>
<tr>
<td>
Data Management
</td>
<td>
The data stores in the system, what the data represents and how this data is managed
</td>
</tr>
<tr>
<td>
Continuous Improvement System
</td>
<td>
The strategy for delivering system improvement on a continuous basis.
</td>
</tr>
<tr>
<td>
Implementation and Change Strategy
</td>
<td>
The approach to managing the implementation of the reengineered system in the organisation and particularly managing people through the change process.
</td>
</tr>
</table>
The system description is only the ‘record’ of the real outcome of the BBPR approach - that of business performance improvement through better business processes. The ABPR method produces a system designed to optimise certain predefined objectives (such as cost of inputs to quality) while the system description attempts to formalise that system and provide the mechanisms for monitoring performance, and maintaining and tuning that system.
In the model organisation, the approach starts with the strategic plan of the organisation (or unit) being reviewed and uses that plan’s components (vision, mission, key result areas, critical success factors, strategies, key performance indicators, targets and timeframe) to focus the design effort with purpose and objectives. In the real organisation, planning is generally something less than perfect, so we must employ a wider net in defining the focus of the BPR exercise. Once armed with a focus, a wide variety of sources and analytic tools are employed to build a business system which will best achieve management’s plans.
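The idea of KPIs carrying targets drawn from the plan can be sketched as a simple variance check that flags where remedial attention is needed. The KPI names, targets and tolerances below are invented figures, not drawn from any real engagement:

```python
# kpi: (actual, target, tolerance as a fraction of target)
kpis = {
    "invoice processing days": (6.5, 5.0, 0.10),
    "error rate %":            (1.1, 1.2, 0.25),
}

def variance_alarms(kpis):
    """Return the KPIs whose variance from plan exceeds tolerance."""
    alarms = []
    for name, (actual, target, tol) in kpis.items():
        if abs(actual - target) > tol * target:
            alarms.append((name, actual, target))
    return alarms

for name, actual, target in variance_alarms(kpis):
    print(f"ALARM: {name} at {actual} vs target {target}")
```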
==The Analytic Method & Its Tools==
===The Structure===
At the heart of the ABPR method is a set of ‘analytic tools’ (methods) that help define views of a system that highlight the particular properties in which we are interested. The key components are illustrated in Figure 1.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1"
width="100%" >
<tr>
<td>
<div class="center">
[[Image:BPRAnalyticStructure.png]]
</div>
</td>
<td>
<table >
<tr>
<td>
The analytic method is based on a simple premise:
A System is comprised of Recursive Objects only. Any system can be described by four types of Objects: Entities, Data Stores, Maps (Processes), and Quality Managers (Control/Performance Criteria).
The simple dataflow diagram of Figure 3 shows a basic system. Entity A provides data to Entity B via a single process (under the control of Entity C) which maps the data from one data store to another. The performance of the mapping process is managed by the quality control process under the control of Entity D. The quality control process is approximately equivalent to an engineering feedback loop.
</td>
</tr>
<tr>
<td>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:BPC4KeyChartObj.png]]
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
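The quality control process described above, being approximately equivalent to an engineering feedback loop, can be illustrated with a simple proportional controller: the quality manager measures variance from plan and tunes the mapping process toward the target. This is an illustrative analogy with invented figures, not a formal part of the method:

```python
def run_feedback_loop(target, rate, gain=0.5, steps=8):
    """Quality manager repeatedly tunes a map's throughput toward target."""
    history = []
    for _ in range(steps):
        error = target - rate        # KPI variance from plan
        rate += gain * error         # remedial action: tune the map parameter
        history.append(round(rate, 3))
    return history

# Throughput converges toward the target of 100 transactions/hour.
print(run_feedback_loop(target=100.0, rate=60.0))
```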
Mapping is a computational and mathematical term which describes the mechanism by which data is transformed from one state or form to another. In a business process that transformation might be as simple as the act of transcribing an invoice from its physical (eg. paper based) state to an electronic record in an accounts payable system through the process of data entry. The data in its input state may be said to have been mapped to another state through some process of transformation.
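That transcription example can be written as a literal mapping function from one data state (the transcribed paper fields) to another (an accounts payable record). The field names are hypothetical:

```python
def map_invoice(paper):
    """Map a transcribed paper invoice to an accounts payable record."""
    return {
        "vendor": paper["supplier name"].strip(),
        "amount_cents": round(float(paper["total"]) * 100),
        "state": "entered",          # the data, mapped to its new state
    }

record = map_invoice({"supplier name": " Acme Ltd ", "total": "149.95"})
print(record)
```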
The computer engineering reader will recognise the similarity of the diagram to a dataflow model.
The logical starting point of a BPR exercise may seem to be the Performance Criteria definition (assuming that the overall purpose of the system being improved is already known), but it is important to note that each of the four definition activities should continue concurrently throughout the project. It is not unusual for the Performance Assumptions to change as a result of the other BPR activities, and it is virtually certain where the project is a Strategic Planning exercise.
This mixing of strategy planning and BPR may at first seem a little unusual, but the impact of the BPR analysis can be to cause a fundamental rethink of the business strategy itself. Where the focus is merely to re-design a specific, targeted transactional process such a strategic impact is, perhaps, less likely, but where the targeted business process is the core of the business, such an impact is surprisingly common.
In particular the KPI definition both commences and completes a project. The table lists these key analytic tools and provides an overview of the activity. These tool classes are typical of those employed, but not necessarily the only ones appropriate to any given project.
===The Modelling Tools===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Class </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
KPI Assessment
</td>
<td>
In an ideal organisation the planning documents establish the focus for all activity. Our search for the focus of the BPR must therefore begin with the planning and policy role of management. Where available, sources to be reviewed include:
<ul>
<li> Statement of System Objectives
<li> Corporate Plan
<li> Budgets
<li> Benchmarks
</ul>
Organisations are rarely ideal, however, and other techniques will need to be applied, depending on the culture of the organisation being reviewed. Such techniques - SWOTC (Strengths, Weaknesses, Opportunities, Threats, and Constraints) analysis, benchmarking, corporate goal setting, interviews, etc - may need to be undertaken to establish the purpose and key objectives of the system being reviewed.
Armed with this information the first view of Key Performance Indicators (KPI) appropriate to the system should be definable. In a sense, the KPI’s are like the gauges and alarms of an airplane, car or any other mechanical device. They alert the system’s ‘pilot’ to the status of the machinery, and allow rapid identification and adjustment of the system if anything ‘goes wrong’. In this sense the selection of the correct KPI’s is critical: if there is no gauge for a given problem, the problem may not be detected until it is obvious even without a gauge - and possibly too late to be repaired.
In this first, top level assessment the KPIs will generally be whole-of-system measures. As other components of the ABPR are resolved (such as the Process Mapping and the Client Provider Analysis), the process-level detail will emerge which becomes the organisation’s operational ‘alarm system’. The ABPR has a specific design paradigm called Active Control Management to implement this KPI-based control system in a cost-efficient manner.
</td>
</tr>
<tr>
<td>
Client-Provider Analysis
</td>
<td>
A technique adopted from TQM which classifies the entities creating, managing and consuming data in the system as clients (data recipients) or providers (data suppliers) of one another. In performing the analysis we turn to information sources such as:
<ul>
<li> External Clients & Providers
<li> Internal Clients & Providers
<li> Organisation Structure
<li> Roles & Duty Statements
<li> Implied Contracts
</ul>
While it is important to understand the organisational structure as it stands - because, among other things, it dictates the client-provider relationships - it should not necessarily bind the designer. An organisational model reflects legislative, cultural and historic traditions that may be critical to retain, as well as (possibly) many years of legitimate experience among the management team in the industry and market in which you are working. It must not simply be disregarded in the BPR process in favour of radical change.
Indeed, the author generally advises against too ambitious an organisational change, unless change is part of the culture or intended management strategy. In some organisations, frequent re-organisation is part of the management ethos, and such an approach is as legitimate and successful a management model as any other. One must, nevertheless, be careful in taking the existing structure (or management ethos!) as a given - particularly where the organisation is seeking a competitive edge beyond mere marginal improvement in efficiency or quality.
The BBPR method uses its own method of analysing organisational structures called The Organisational Community Network Model (which is one of the reasons that the BPR method frequently impacts organisational design). This approach is appropriate even where the organisation will substantially retain its original shape after the BPR project, as it leads to a highly efficient and focussed "desktop" test process architecture, and where the option for organisational redesign is on the table, it can lead to a very radical outcome.
</td>
</tr>
<tr>
<td>
Stakeholder Analysis
</td>
<td>
The direct stakeholders are addressed in the Client Provider analysis, while the indirect stakeholders are addressed here - in the Stakeholder Analysis.
Essentially the indirect stakeholders provide the organisation with drivers & constraints. Typical sources include:
<ul>
<li> Legislative Obligations
<li> Cultural Expectations
<li> Reporting Obligations
</ul>
</td>
</tr>
<tr>
<td>
Data Store Catalogue
</td>
<td>
The catalogue is the BPR equivalent of a database administrator’s data dictionary. It describes all the data stored by the system, and the data stores themselves. It specifies the access rights, custodianship rules, data integrity standards and the static relationships between data stores.
Data stores include all the data managed by the system and methods of temporary or permanent storage. Data stores include electronic (abstract) and physical storage such as documents, files, filing cabinets, in trays, bins, etc.
Data integrity standards must be established system-wide, and all data stores must adhere to them. The standards should be consistent with those applied by quality managers.
</td>
</tr>
<tr>
<td>
Process Mapping
</td>
<td>
Process mapping is perhaps the most involved of all the activities of the BPR exercise. It is a general name for a variety of procedural analysis and design activities. The information sources include:
<ul>
<li> Functional Description
<li> Cradle to Grave Tracing - System Walkthrough
<li> Manuals
<li> List of Data Sources & Destinations
<li> Client / Provider Mapping
<li> Data Load Analysis (transaction volumes, processing rates, etc)
</ul>
The key activity during process mapping is the production of the Data flow diagrams and supporting documentation. This is done in two streams simultaneously:
<ol>
<li> Existing systems
<li> Redesigned Systems
</ol>
The data flow charts form the basis of the reengineering. They combine all aspects of the other analytic tools and describe the algorithm of the system.
In process mapping we treat all processes of a system as operating concurrently and control their timing and behaviour through messages, which take the form of either data or events.
The process map is not complete until the system data loading has been assessed for each process. The data load analysis involves examining data volumes and processing times, throughput assessment, reliability rates, etc.
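The data load analysis above can be sketched in a few lines of code. This is an illustrative sketch only - the process names, volumes and processing rates are invented, and a real analysis would draw these figures from the transaction volume measurements gathered during process mapping:

```python
# Hypothetical data-load analysis; all names and figures are invented examples.
processes = {
    "receive_invoice": {"volume_per_day": 400, "rate_per_hour": 60},
    "approve_invoice": {"volume_per_day": 400, "rate_per_hour": 25},
    "post_payment":    {"volume_per_day": 380, "rate_per_hour": 50},
}

HOURS_PER_DAY = 7.5  # assumed working day

def utilisation(volume_per_day, rate_per_hour):
    """Fraction of the working day the process needs to clear its daily load."""
    return (volume_per_day / rate_per_hour) / HOURS_PER_DAY

for name, p in processes.items():
    u = utilisation(p["volume_per_day"], p["rate_per_hour"])
    flag = "BOTTLENECK" if u > 1.0 else "ok"
    print(f"{name:18s} utilisation={u:5.2f} {flag}")
```

A utilisation above 1.0 signals a process that cannot clear its daily load within the working day, marking it as a redesign candidate.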
</td>
</tr>
<tr>
<td>
Decision Tree / Information Mapping
</td>
<td>
The system handles not just data but information. Data becomes information when it exhibits certain quality characteristics. Information must be appropriate to its purpose and reliable (where reliability implies standards of timeliness, accuracy, completeness, etc). Information mapping involves matching the data managed by a system to the decisions that must be made in operating that system. It requires, in part, the construction of a detailed decision tree spanning the entities in the system over time.
Necessarily, it also implies the existence of an events calendar which should link into the data flow diagrams. The information map includes the information needs of the quality managers, and may be expressed in whole or in part through the Active Control Management design paradigm detailed later in this text.
The information map will require consideration of issues including:
<ul>
<li> Information Requirements
<li> Event Calendar
<li> Reporting Obligations
<li> Performance Control Management System (eg ACM)
</ul>
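The matching of decisions to data sources described above can be illustrated with a small sketch. The decision and data store names here are invented for illustration; in practice they come from the decision tree and the Data Store Catalogue:

```python
# Hypothetical decision-to-data-source matrix; all names are invented examples.
decisions = {
    "approve_purchase":  {"supplier_file", "budget_ledger"},
    "schedule_delivery": {"order_book", "warehouse_stock"},
    "escalate_overdue":  {"order_book", "events_calendar"},
}

# The set of data stores is derived from the decisions that consume them.
stores = sorted(set().union(*decisions.values()))

print("columns:", stores)
for decision, needs in decisions.items():
    marks = ["X" if s in needs else "." for s in stores]
    print(f"{decision:20s}", " ".join(marks))
```

A store that no decision depends on, or a decision with no supporting store, shows up immediately as an empty row or column in such a matrix.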
</td>
</tr>
</table>
===Organisational Representation (Introduction)===
When we think of organisational representation, we traditionally think of the hierarchical organisational chart. Resembling an inverted tree, the organisational chart provided by almost all charting packages represents a cross between a representation of physical or geographic position and reporting lines - and tells us very little about how a business organisation is really organised. At best it leads to a bureaucratic, semi-accurate organisational view; at worst, it is wildly incorrect, such as in matrix organisations.
As with many traditional diagramming systems it is horrendously inadequate for all but the grossest simplification of an organisation.
In the BBPR, we use a Community Network model which provides far richer analysis and directly represents the positioning of an organisation within its market and community.
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
Community domains can be defined as required for the purpose of the analysis, but in the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community comprises a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity, while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
You can read more about the Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis [[The Stakeholder Community Network Model|Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis here]].
===The Process Representation (Introduction)===
The full process charting model forms a language that can be represented either diagrammatically or descriptively. There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements are for diagrammatic efficiency. The core of the charting system revolves around only a few symbols, and the full model merely expands on these to provide a richer descriptive set and more analytic detail with fewer diagrammatic elements. The full model is described in [[Business Process Reengineering - Process Charting|advanced charting]].
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataFlow.png]]
</div>
</td>
</tr>
</table>
In the figure, '''''data flows''''' along, and in the direction of, the arrows between the entities, data stores and maps, while control data flows principally into, and out of, the quality manager. The crossed-rectangular shapes are entities, while the open-ended rectangular shapes are (file) data stores. The maps and quality managers are shown by circles.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Entity.png]]
</div>
</td>
</tr>
</table>
'''''Entities''''' are equivalent to people, machines, or processes external to the system being examined. In a sense they are givens in the system analysis, in that their functioning is assumed to be of a fixed standard and excluded from redesign. Those aspects of behaviour that can be redesigned are represented by the other three object types.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataStore.png]]
</div>
</td>
</tr>
</table>
'''''Data Stores''''' are objects in which data resides from time to time. The stores are not the actual data itself, merely a representation of it. In the ‘object oriented analysis world’, data exists in the form of messages between objects - for example, two people (entities) talking to each other (exchanging messages). Messages are essentially transient, so for data to be available for any length of time it must be stored. Data stores include documents, files, database records, desk in-trays, etc.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Map.png]]
</div>
</td>
</tr>
</table>
'''''Maps''''' are objects which perform an operation on data other than storing it. They transport data, change data, analyse data, update a database record, produce a report, authorise a transaction, etc. The term ‘map’ means ‘mapping data from one state to another’. Maps perform the transformations of a system, but they are concerned only with data. For data to become information it must have the added dimension of quality.
'''''Quality Managers''''' are objects which administer the performance of the system. The quality manager does not transform the data handled by the system, but rather manages the system itself. Quality managers rely on the KPIs of the system and its component parts measuring variance from plan and performing the appropriate remedial action such as tuning Map parameters or escalating the problem.
In one sense the '''''Quality Manager''''' is a kind of process, but its responsibility is to modify the behaviour of the system in accordance with the purpose and objectives of the system and is therefore fundamentally different from a Map which represents the embodiment of that purpose. In another sense the Quality Manager is a kind of reactive data store - it both stores data and responds to it. The quality manager deals principally with control data, although this is by no means exclusive or necessary.
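A quality manager of this kind can be sketched in a few lines. This is a minimal illustration, not part of the BBPR specification - the KPI names, tolerance and escalation behaviour are all assumptions made for the example:

```python
# Minimal quality-manager sketch; KPI names and thresholds are invented.
class QualityManager:
    """Monitors KPI actuals against plan and reacts to variance."""

    def __init__(self, tolerance=0.10):
        self.tolerance = tolerance   # acceptable fractional variance from plan
        self.escalations = []

    def observe(self, kpi, planned, actual):
        variance = (actual - planned) / planned
        if abs(variance) > self.tolerance:
            # In a real system this might retune a Map's parameters
            # or escalate the problem to the governance community.
            self.escalations.append((kpi, round(variance, 3)))
            return "escalate"
        return "within plan"

qm = QualityManager(tolerance=0.10)
print(qm.observe("invoices_processed_per_day", planned=400, actual=395))
print(qm.observe("error_rate", planned=0.02, actual=0.05))
```

Note that the quality manager transforms nothing: it only records performance data and triggers remedial behaviour, which is exactly the distinction from a Map drawn above.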
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:RecursiveShapes.png]]
</div>
</td>
</tr>
</table>
'''''Objects are recursive''''', and therefore may contain more objects of the same or different type. For example, a file contains documents (both data stores), a document contains fields (more data stores), an organisation may contain people (both entities), an organisation (entity) may contain functions (maps), while a business cycle such as Purchasing (a map) may contain an entire system of roles (entities), procedures (maps), KPI measures (quality managers) and documents (data stores).
Processes (maps & quality managers) are concurrent. This means that, unless restrained by a lack of input (data to process) or awaiting an event, each process is trying to operate at the same time as every other process. This reflects reality - people do not follow a neat sequential order when interacting with one another unless explicitly constrained to do so. Instead, they operate simultaneously, at different speeds to one another, and in self-chosen patterns. To model the world correctly we must also model this behaviour.
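This concurrent, message-driven view of processes can be illustrated with a small sketch using threads and queues. The process names are invented; the point is only that the 'clerk' Map runs at its own pace, constrained solely by the messages (data and events) arriving in its in-tray:

```python
# Sketch of a concurrent Map driven by messages; names are illustrative only.
import queue
import threading

in_tray = queue.Queue()    # a data store between the two processes
out_tray = queue.Queue()

def clerk():
    """A Map: transforms raw orders into validated orders."""
    while True:
        order = in_tray.get()
        if order is None:          # an event message: end-of-day shutdown
            out_tray.put(None)
            return
        out_tray.put(f"validated:{order}")

worker = threading.Thread(target=clerk)
worker.start()

for order in ["order-1", "order-2"]:
    in_tray.put(order)             # data messages
in_tray.put(None)                  # control/event message
worker.join()

results = []
while not out_tray.empty():
    item = out_tray.get()
    if item is not None:
        results.append(item)
print(results)  # ['validated:order-1', 'validated:order-2']
```

The clerk blocks when its in-tray is empty - the "restrained by a lack of input" condition above - and otherwise runs concurrently with the process feeding it.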
You can read more about the process charting method in [[Business Process Reengineering - Process Charting]].
===The Analysis Tools===
The designed system will be documented with data flow charts, client-provider “performance agreements”, ACM control checklists, a decision-to-data-source matrix, and task schedule sheets cross-referenced to the data flow diagrams. These facilities can be provided either electronically or on paper, as desired by the client. The degree to which the processes and documentation can be automated is restricted only by the client’s computer system capabilities and software.
====Process Representation Using Software====
There are a number of practical charting tools that can be used. For 2D representation, we recommend either ABC Flowcharter or Visio, while for 3D client walkthrough of a designed system we recommend a MMORG such as SecondLife (http://SecondLife.com), or TrueSpace (http://www.caligari.com/).
With respect to the 2D tools, both suggested tools have their strengths and weaknesses. Visio has excellent Microsoft desktop application integration, and is directly supported by a number of finance and business applications as a business process modelling environment. ABC Flowcharter has (in our view) a shorter learning curve and an excellent interface, and good integration into MS documentation tools.
In choosing a 2D tool you should consider whether the tool supports diagrams:
* consisting of many linked pages
* with recursive (self referential) structures
* graphic object drill-through (ie. you can select an object, such as a process which summarises many sub-processes, and link to one or more pages that represent the steps in the process)
* containing graphic objects with unique id's, text descriptions, and other user defined data attributes that can be stored with them (eg transaction volumes, costs, probabilities, risk assessment, etc)
* editable splines for connecting shapes (bendable curved lines)
* with point and click editing
* with user defined shapes and image import
* that represent the Bishop Phillips Process Modelling shapes.
* containing URL links at least at the graphical object (including lines) level (ie. linking an object to an internet/intranet page)
* that can be imported into text documentation and presentation tools (MS Word / MS PowerPoint, etc) compatible with your business environment (standard desktop)
* that ideally can be scripted with a scripting language that allows active simulation or calculation of events and transactions occurring (optional - but a good idea)
* that can be generated directly from an electronic drafting whiteboard (optional, but saves you a lot of time).
3D tools are a much newer approach. The biggest advantage of a 3D modelling tool is that you can 'walk' the client through the business process. Possibly the only practical and right-priced tools available at the moment are SecondLife and Caligari TrueSpace. Over the years we have tried a number of approaches to this idea; until the advent of SecondLife, we built our 3D models in TrueSpace. TrueSpace is a serious 3D modelling environment and, while simple to learn as 3D graphical modelling environments go, it is not a tool for novices. Although it produces spectacular 3D models, it is better suited to presenting a canned 3D visualisation of the business model than to walking the client through the model. Recently it has gained a MORG add-on/representation, and linked with one of a number of games engines it can be used quite successfully as a walk-through environment.
With the advent of SecondLife (and the growing number of similar MORG systems that are either appearing now or soon to appear on the market), a more practical and faster solution is available (albeit less visually stunning in production quality). A SecondLife-based model allows you and your client to literally enter the model as people and walk or fly around the components of your system, watching transactions visually flow through the process, events occur, control systems filter errors, and output being produced at varying transaction rates. The building interface is fast and simple to learn, and the scripting environment allows you to rapidly simulate many different scenarios.
With such an approach you can literally have your client see the transactions flow through a virtual representation of a system (a bit like the movie 'Tron'), or build a representation of their physical environment (such as a building, or office floor) and simulate the behaviour of the people and the control system operating. The world-wide scale of MORG users means you can contract the development work to inexpensive professional builders, instead of building it yourself.
The great weakness of these environments is that they are not yet real-time in terms of construction (whereas a 2D chart can (almost) be built in real time as your client describes their processes), and documentation in conventional 2D media is not a natural consequence of a 3D simulation (whereas 2D charts can be included in text-based documentation with ease).
In choosing a 3D tool you should consider:
* speed of construction of 3D elements (ideally you will need a 'primitive' rather than 'mesh' or 'nurbs' based building solution for speed)
* scripting language and particle system support (essential)
* ability to script primitives (objects) concurrently on a massive scale
* message passing support
* ability to create avatars (or primitars) that can interact with the model (ie. walk around inside it)
* availability of low cost developers/builders
* ease of installation of appropriate client software
* ownership and permanence of the 3D models built
* support for importation of textures (graphic images), sounds, animations, 3D objects, movies, etc.
* real time in-world multi participant speech support
* simplicity of visitor navigation (i.e. how hard is it for a first-time user to just walk around in the 3D environment)
* URL (web page) linking
* URL data sending and receiving to a web server (eg can you request and receive data from an off-system database?)
* web page display on objects (not commonly available)
====Analysis Support====
A number of analytic tools or design paradigms are incorporated into the ABPR. A few of these are introduced in the table:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Or Design Paradigm
</th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Data Flow analysis
</td>
<td>
A method of charting systems, enhanced by BPC with concepts drawn from process mapping, predicate calculus, TQM, CPM (Operations Research), Entity-Relationship modelling, and a number of other analytic methods. This method excels at depicting complex data flows and process interactions simply. It captures control issues, timing constraints, events and information flows.
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Although not critical to the process, ACM provides significant advantages in process efficiency. It is a BPC-specific control design philosophy based on experience in the areas of Corporate Governance and organisations adopting control devolution and/or multi-skilling. ACM represents a significant shift from the control paradigm of periodic audit review with heavy transaction-based testing conventionally adopted by Internal Audit, and from traditional views of control system design relying on segregation of duties.
To build an ACM control system, we begin by expanding the definition of controls beyond accuracy, authorisation, completeness (etc) to include process timeliness, achievement of business plan targets and other business objectives. Next we identify the controls appropriate for monitoring and collect all the associated control data into a common recording format (and ideally an automated storage system - such as MS-Access). Lastly we build a reporting framework for system performance monitoring built on the quality managers.
ACM produces control compliance information in a steady stream for the senior executive and board, rather than the intermittent or cyclic audit reviews often used. The compliance component of any Internal Audit unit is re-focussed to ensuring the ongoing reliability of the control compliance reports. The control system is integrated into the business processes using the Client-Provider model developed at the start of the project. ACM reporting can be automated, if desired.
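The common recording format and steady compliance stream described above can be sketched as follows. The control names, record fields and pass/fail figures are invented for illustration; a real ACM implementation would draw its records from the automated storage system:

```python
# Hypothetical ACM control records in a common format; all values are invented.
from collections import defaultdict
from datetime import date

control_log = [
    {"control": "invoice_authorised", "date": date(2012, 8, 1), "passed": True},
    {"control": "invoice_authorised", "date": date(2012, 8, 2), "passed": False},
    {"control": "plan_target_met",    "date": date(2012, 8, 2), "passed": True},
]

def compliance_report(records):
    """Continuous compliance summary, per control, for the executive and board."""
    totals = defaultdict(lambda: {"tested": 0, "passed": 0})
    for r in records:
        t = totals[r["control"]]
        t["tested"] += 1
        t["passed"] += int(r["passed"])
    return {c: t["passed"] / t["tested"] for c, t in totals.items()}

print(compliance_report(control_log))
```

Because every control writes to the same record shape, the report can be regenerated continuously as new records arrive, rather than waiting for a periodic audit cycle.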
</td>
</tr>
<tr>
<td>
Network Organisation Reduction
</td>
<td>
The process of defining the organisation into the community network structure forces the reduction of many diverse strategies and procedures into a clearly identifiable set of activities required for one of 11 broad service communities. The networks imply, for the stakeholders, an enumerable set of collective Client Provider Service Agreements.
</td>
</tr>
<tr>
<td>
Process Dictionary
</td>
<td>
Used to assist in the identification of opportunities for streamlining cross- and intra-organisation systems, the Process Dictionary catalogues and describes each process within any business function in accordance with an agreed selection of descriptive terms.
In this way, it assists in highlighting common processes and in assessing whether it is possible and appropriate for these to be combined or shared in some suitable form.
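The idea of cataloguing processes by agreed descriptive terms, and then spotting sharing candidates, can be sketched as follows. The functions, processes and terms are invented, and matching on an exact term set is a deliberate simplification of the judgement a reengineer would apply:

```python
# Hypothetical process dictionary; functions, processes and terms are invented.
from collections import defaultdict

catalogue = {
    ("Sales",      "record_customer"): {"data entry", "customer", "crm"},
    ("Service",    "record_customer"): {"data entry", "customer", "crm"},
    ("Purchasing", "approve_spend"):   {"authorisation", "finance"},
}

# Group processes whose descriptive terms match exactly - candidates for sharing.
shared = defaultdict(list)
for (function, process), terms in catalogue.items():
    shared[frozenset(terms)].append((function, process))

for terms, owners in shared.items():
    if len(owners) > 1:
        print(f"candidate shared process: {sorted(owners)} terms={sorted(terms)}")
```

Here the Sales and Service functions both describe a `record_customer` process with identical terms, flagging it as a candidate for a combined or shared service.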
</td>
</tr>
</table>
==Summary: Characteristics of the BBPR Method==
Business Process Reengineering (BPR) is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective. This chapter has provided an introduction to the concept of BPR and an overview of the ABPR method. Both of these will be developed throughout the text.
Essentially BPR represents the focussing of an enormous body of theory and expertise underpinning management science into a single, all-powerful redesign strategy. Such a panacea does not exist, and we must be careful to use BPR where the fundamental organisational characteristics are present. These might include:
<ul>
<li> A discernible consistent set of purpose(s) and objective(s) exist
<li> Design options are not restricted out of the solution set (ie. an acceptable solution is achievable despite imposed constraints)
<li> Senior management authorise and staff support the project and the process
<li> The analytic tools match the problem set
<li> BPR Consultant has credibility with the staff
</ul>
The BPR process is best seen as a framework encompassing a wide array of analytic tools and organisation/management design paradigms. Many of these tools and paradigms can be expected to change over time as management theory is revised, while some are central to the BBPR framework. The central tools and paradigms include:
<ul>
<li> KPI’s & Quality Management
<li> Data Flow Analysis
<li> Object Oriented Process Engineering
<li> Client Provider Analysis
<li> Information Mapping
<li> Data Cataloguing
</ul>
As an extremely simplified explanation: the BBPR method uses KPI’s to focus the system, and classifies the proponents in the system as clients and/or providers of data (etc) to one another. The client/provider relationships are revised using a separate information (decision) map reflecting the information needs of the direct and indirect stakeholders. With the revised client/provider relationships defined and the data and information needs catalogued, process maps can be defined which reflect only what is needed to implement the system.
For the sake of clarity, in this introductory chapter we have excluded many of the more complex issues facing BPR. One of these is the positioning of organisation design in a BPR exercise. It is a significant issue, as it is inextricably linked to the culture of the organisation being reengineered. It is usually included to some extent in the design options, but rarely is the organisation design entirely at the discretion of the reengineer. Accordingly we must treat it as both a given structural component of the client provider analysis and an output of the process mapping (design phase).
Clearly the process mapping will impact the organisation structure, which will in turn affect the client provider relationships, while the client provider relationships affect the process mapping, and so on. It is this reason, and a number of similar circular relationships among analytic components, that necessitates the simultaneous analysis and design activity of the ABPR method.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
c126b1fba94204e4a7ae2559e26d3d67bbc90f9e
373
331
2012-08-30T13:36:46Z
Bishopj
1
wikitext
text/x-wiki
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this article. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible, and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, the style and the detail provided, as the original was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time. As the charting method is fairly involved, we will also be providing examples of systems charted using the method. This chapter is the introduction chapter, which provides a reasonably good overview of the approach.
</noinclude>
==Definition, Purposes & Outcomes==
=== Definition ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BusinessComponentObjectives.png]]
</div>
</td>
</tr>
</table>
Business Process Reengineering (BPR) ''is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective.'' Purpose and Objective differ in that the purpose describes the reason for the process while the objective is the reason for the reengineering of the process. The objective is generally the optimisation of the quality - cost relationship, but may be any other objective(s) defined by the stakeholders of the processes revised.
Hammer, a popular author of reengineering texts, defines reengineering as: “the fundamental rethinking and radical redesign of business processes to bring about dramatic improvements in performance”. Essentially, he argues that BPR is about major change in an organisation, yet perhaps this reflects a rather naive preoccupation with “big-is-better”. BPR can be about constrained, well-focussed, small-scale redesign as much as about monolithic reconstruction.
BPR is not new, although many consultants in the field try to claim otherwise. It is simply one more evolutionary step in a long stream of management change processes that includes Statistical Quality Control, TQM, Internal Audit, Work & Job Redesign, Goal Focussed Management, Workflow Management, Systems Analysis, etc. The theoretical foundation in BPR is quite old and can be seen particularly in the work in Systems Analysis undertaken at the University of Lancaster since 1969. What is new about BPR is its holistic view of the organisation and its attempt to capture the management philosophies that preceded it into a single integrated method.
Perhaps due in part to its conglomerate nature there is little standardisation among BPR approaches nor agreement on what is, or is not, BPR. With a few notable exceptions, the literature tends to be long on promises and case studies claiming stratospheric success but short on detail. This manual attempts to provide both a definition of BPR and an integrated strategy of analytic methods for performing it.
Although significantly different in approach from the work in systems analysis of the University of Lancaster, the development of our method owes a fundamental debt to the conceptual insight of that team. We have borrowed concepts, however, from a wide domain of disciplines ranging from accounting to computer science, and from psychology to marketing. It is not intended that the analytic tools of the method be cast in stone by this manual. No approach is perfect, and if this method is not seen to embrace its own continuous improvement then it will be as flawed as the business systems it purports to improve.
=== Purpose of BPR ===
In a BPR exercise we consider all aspects of managerial responsibility - from the organisation design through to the procedures and practices adopted. The BPR project does not attempt to define the purpose or the objectives of the systems of the organisation; rather, once they are defined, it provides the machine to deliver that purpose and those objective(s).
The method used in the reengineering process must deliver a complete description of that machine. This includes the organisational structure, the behavioural paradigm, duties, controls, performance indicators, policies, procedures, data management, continuous improvement procedures, computer systems, etc.
It is easy to confuse the activity of BPR with that of computer systems implementation, since many of the forces driving a BPR exercise beg computerisation as the easiest way to achieve apparently dramatic improvement. This is a mistake. Implementing computerised solutions is not the purpose of BPR, although a computerised solution is one of the tools a reengineer may use to implement some part of the reengineered processes.
Nor should we rely on computer solutions in all cases. While it is often true that the computerisation of a process will deliver significant improvement in the ratio of output volume and quality to human effort (input), when viewed from a holistic perspective (which includes infrastructure, investment, opportunity cost, and solution responsiveness to change) the computerised solution may not always be as attractive as first thought. Notwithstanding these comments, a planned change in information systems provides a common and sensible catalyst for the BPR programme.
Essentially, the purpose of BPR is to build business systems able to deliver the organisation’s mission while optimising some given combination of objectives. In building the system, we must apply appropriate analytic techniques and appropriate implementation strategies. The weaker the constraints on the process applied by management - ie the wider the range of options left on the table for consideration - the more successful (in terms of optimising the objectives) the outcome is likely to be. The purpose of the system either will or will not be satisfied by the system design options made available - the quality of that delivery is measured by the objectives.
=== Outcomes ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BPR Components.png]]
</div>
</td>
</tr>
</table>
The result of the BPR project is a working system tuned to optimise some combination of objectives in delivering the stakeholder’s purpose. It is defined by a set of system descriptions, or views of the system, which consider, categorise and structure the matter from a number of angles.
Illustrated in the figure are the key components of a system description produced by the BPR method detailed in this manual. There are many differences, in both method and outcome, between the approach presented here and the conventional literature on BPR. Henceforth we shall refer to this approach as the Bishop BPR (or BBPR) method.
The method produces a process and organisational rework that is naturally integrated with risk and compliance governance systems and (in its detailed delivery) uses a unique charting system which blends computational and human processes together in a common structured and testable form.
We have used and progressively improved the method detailed in this text since the late 1980s, and it has been applied in the delivery of consultancies to several hundred organisations covering the non-profit, government and corporate sectors. It has been applied in its pure form as a process reengineering system, in reduced forms as an internal audit systems review process and a business systems design model (for the design and development of business computing systems), and with various strategy enhancements as a business strategy planning tool. While this author has brought it to each consulting organisation with which he has worked or which he has led over the years, it has benefited from the ideas and contributions of many colleagues.
We shall explore the BBPR method throughout this text and provide the tools and techniques necessary to deliver the BBPR system description. Here we provide a brief introduction to the ten key descriptive outputs in the figure:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Descriptive Output </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Key Performance Indicators & Benchmarks / Targets
</td>
<td>
Performance management - how we directly manage and monitor the achievement of the system’s purposes
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Internal System Integrity - how we directly monitor and manage the achievement of the system’s objectives.
</td>
</tr>
<tr>
<td>
Organisation Design
</td>
<td>
The objects/entities and their roles with their managerial, behavioural and reporting relationships identified.
</td>
</tr>
<tr>
<td>
Decision Tree
</td>
<td>
The tree (or Information Map) charts the decisions required by entities in the system, the relationships between the decisions and their information needs
</td>
</tr>
<tr>
<td>
Process & Workflow Charts
</td>
<td>
The sequence of activities making up the functional components of a system.
</td>
</tr>
<tr>
<td>
Event Calendar
</td>
<td>
The timing of events and their cycles and the processes they trigger
</td>
</tr>
<tr>
<td>
Client Provider Service Agreements
</td>
<td>
The objects/entities comprising the system seen as pairs of clients and providers (of services, data, goods, etc) emphasising their respective duties. The approach establishes notional contracts or service agreements which outline each entity’s responsibilities in the client provider relationship.
</td>
</tr>
<tr>
<td>
Data Management
</td>
<td>
The data stores in the system, what the data represents and how this data is managed
</td>
</tr>
<tr>
<td>
Continuous Improvement System
</td>
<td>
The strategy for delivering system improvement on a continuous basis.
</td>
</tr>
<tr>
<td>
Implementation and Change Strategy
</td>
<td>
The approach to managing the implementation of the reengineered system in the organisation and particularly managing people through the change process.
</td>
</tr>
</table>
The system description is only the ‘record’ of the real outcome of the BBPR approach - that of business performance improvement through better business processes. The BBPR method produces a system designed to optimise certain predefined objectives (such as the cost of inputs relative to quality) while the system description formalises that system and provides the mechanisms for monitoring performance, and for maintaining and tuning that system.
In the model organisation, the approach starts with the strategic plan of the organisation (or unit) being reviewed and uses that plan’s components (vision, mission, key result areas, critical success factors, strategies, key performance indicators, targets and timeframe) to focus the design effort with purpose and objectives. In the real organisation, planning is generally something less than perfect, so we must cast a wider net in defining the focus of the BPR exercise. Once armed with a focus, a wide variety of sources and analytic tools are employed to build a business system which will best achieve management’s plans.
==The Analytic Method & Its Tools==
===The Structure===
At the heart of the BBPR method is a set of ‘analytic tools’ (methods) that help define views of a system highlighting the particular properties in which we are interested. The key components are illustrated in Figure 1.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1"
width="100%" >
<tr>
<td>
<div class="center">
[[Image:BPRAnalyticStructure.png]]
</div>
</td>
<td>
<table >
<tr>
<td>
The analytic method is based on a simple premise:
A system is composed of recursive objects only. Any system can be described by four types of object: Entities, Data Stores, Maps (Processes), and Quality Managers (Control/Performance Criteria).
The simple dataflow diagram of Figure 3 shows a basic system. Entity A provides data to Entity B via a single process (under the control of Entity C) which maps the data from one data store to another. The performance of the mapping process is managed by the quality control process under the control of Entity D. The quality control process is approximately equivalent to an engineering feedback loop.
</td>
</tr>
<tr>
<td>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:BPC4KeyChartObj.png]]
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
Mapping is a computational and mathematical term which describes the mechanism by which data is transformed from one state or form to another. In a business process that transformation might be as simple as the act of transcribing an invoice from its physical (eg. paper based) state to an electronic record in an accounts payable system through the process of data entry. The data in its input state may be said to have been mapped to another state through some process of transformation.
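As a minimal sketch of the invoice transcription mapping described above - the field names and record layout here are illustrative assumptions, not part of the BBPR specification - a map might look like:

```python
# A sketch of a "map" in the BBPR sense: transforming an invoice from its
# transcribed paper state into an accounts payable record.
# Field names and the AP record layout are illustrative assumptions.

def map_invoice_to_ap_record(paper_invoice: dict) -> dict:
    """Map invoice data from its input state to an accounts payable record."""
    return {
        "supplier": paper_invoice["supplier_name"].strip().upper(),
        "amount_cents": round(float(paper_invoice["amount"]) * 100),
        "due_date": paper_invoice["due_date"],  # carried through unchanged
    }

record = map_invoice_to_ap_record(
    {"supplier_name": " Acme Pty Ltd ", "amount": "125.50", "due_date": "2010-09-01"}
)
print(record)
```

The input state (a loosely transcribed paper form) is mapped to a normalised output state; the transformation itself, not the storage, is the map.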
The computer engineering reader will recognise the similarity of the diagram to a dataflow model.
The logical starting point of a BPR exercise may seem to be the Performance Criteria definition (assuming that the overall purpose of the system being improved is already known), but it is important to note that each of the four definition activities should continue concurrently throughout the project. It is not unusual for the Performance Assumptions to change as a result of the other BPR activities, and virtually certain where the project is a Strategic Planning exercise.
This mixing of strategy planning and BPR may at first seem a little unusual, but the impact of the BPR analysis can be to cause a fundamental rethink of the business strategy itself. Where the focus is merely to redesign a specific, targeted transactional process such a strategic impact is, perhaps, less likely, but where the targeted business process is the core of the business, such an impact is surprisingly common.
In particular the KPI definition both commences and completes a project. The table lists these key analytic tools and provides an overview of the activity. These tool classes are typical of those employed, but not necessarily the only ones appropriate to any given project.
===The Modelling Tools===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Class </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
KPI Assessment
</td>
<td>
In an ideal organisation the planning documents establish the focus for all activity. Our search for the focus of the BPR must therefore begin with the planning and policy role of management. Where available, sources to be reviewed include:
<ul>
<li> Statement of System Objectives
<li> Corporate Plan
<li> Budgets
<li> Benchmarks
</ul>
Organisations are rarely ideal, and other techniques, depending on the culture of the organisation being reviewed, may need to be applied to establish the purpose and key objectives of the system being reviewed. Such techniques include SWOTC (Strengths, Weaknesses, Opportunities, Threats, and Constraints) analysis, benchmarking, corporate goal setting, interviews, etc.
Armed with this information, the first view of the Key Performance Indicators (KPIs) appropriate to the system should be definable. In a sense, the KPIs are like the gauges and alarms of an airplane, car or any other mechanical device. They alert the system’s ‘pilot’ to the status of the machinery, and allow rapid identification and adjustment of the system if anything ‘goes wrong’. In this sense the selection of the correct KPIs is critical: if there is no gauge for a particular problem, it may not be detected until it is obvious even without a gauge - and possibly too late to be repaired.
In this first, top level assessment the KPIs will generally be whole-of-system measures. As other components of the BBPR are resolved (such as the Process Mapping and the Client Provider Analysis) the process-level detail will emerge which becomes the organisation’s operational ‘alarm system’. The BBPR has a specific design paradigm called Active Control Management to implement this KPI-based control system in a cost efficient manner.
</td>
</tr>
<tr>
<td>
Client-Provider Analysis
</td>
<td>
A technique adopted from TQM which classifies the entities creating, managing and consuming data in the system as clients (data recipients) or providers (data suppliers) of one another. In performing the analysis we turn to information sources such as:
<ul>
<li> External Clients & Providers
<li> Internal Clients & Providers
<li> Organisation Structure
<li> Roles & Duty Statements
<li> Implied Contracts
</ul>
While it is important to understand the organisational structure as it stands - because, among other things, it dictates the client-provider relationships - it should not necessarily bind the designer. An organisational model reflects legislative, cultural and historic traditions that may be critical to retain, as well as (possibly) many years of legitimate experience among the management team in the industry and market in which you are working. It must not simply be disregarded in the BPR process in favour of radical change.
Indeed, the author generally advises against too ambitious an organisational change, unless change is part of the culture or intended management strategy. In some organisations, frequent re-organisation is part of the management ethos, and such an approach is as legitimate and successful a management model as any other. One must, nevertheless, be careful in taking the existing structure (or management ethos!) as a given - particularly where the organisation is seeking a competitive edge beyond mere marginal improvement in efficiency or quality.
The BBPR method uses its own method of analysing organisational structures called the Organisational Community Network Model (which is one of the reasons that the BPR method frequently impacts organisational design). This approach is appropriate even where the organisation will substantially retain its original shape after the BPR project, as it leads to a highly efficient and focussed "desktop" test process architecture, and, where the option for organisational redesign is on the table, can lead to a very radical outcome.
</td>
</tr>
<tr>
<td>
Stakeholder Analysis
</td>
<td>
The direct stakeholders are addressed in the Client Provider analysis, while the indirect stakeholders are addressed here - in the Stakeholder Analysis.
Essentially the indirect stakeholders provide the organisation with drivers & constraints. Typical sources include:
<ul>
<li> Legislative Obligations
<li> Cultural Expectations
<li> Reporting Obligations
</ul>
</td>
</tr>
<tr>
<td>
Data Store Catalogue
</td>
<td>
The catalogue is the BPR equivalent of a database administrator’s data dictionary. It describes all the data stored by the system, and the data stores themselves. It specifies the access rights, custodianship rules, data integrity standards and the static relationships between data stores.
Data stores include all the data managed by the system and methods of temporary or permanent storage. Data stores include electronic (abstract) and physical storage such as documents, files, filing cabinets, in trays, bins, etc.
Data Integrity Standards must be established system wide to which data stores adhere. The standards should be consistent with those applied by quality managers.
</td>
</tr>
<tr>
<td>
Process Mapping
</td>
<td>
Perhaps the most involved of all the activities of the BPR exercise, process mapping is a general name for a variety of procedural analysis and design activities. The information sources include:
<ul>
<li> Functional Description
<li> Cradle to Grave Tracing - System Walkthrough
<li> Manuals
<li> List of Data Sources & Destinations
<li> Client / Provider Mapping
<li> Data Load Analysis (transaction volumes, processing rates, etc)
</ul>
The key activity during process mapping is the production of the Data flow diagrams and supporting documentation. This is done in two streams simultaneously:
<ol>
<li> Existing systems
<li> Redesigned Systems
</ol>
The data flow charts form the basis of the reengineering. They combine all aspects of the other analytic tools and describe the algorithm of the system.
In process mapping we treat all processes of a system as operating concurrently and control their timing and behaviour through messages, which take the form of either data or events.
The process map is not complete until the system data loading has been assessed for each process. The data load analysis involves examining data volumes and processing times, throughput assessment, reliability rates, etc.
</td>
</tr>
<tr>
<td>
Decision Tree / Information Mapping
</td>
<td>
The system handles not just data but information. Data becomes information when it exhibits certain quality characteristics: it must be appropriate to its purpose and reliable (where reliability implies standards of timeliness, accuracy, completeness, etc). Information mapping involves matching the data managed by a system to the decisions that must be made in operating that system. It requires, in part, the construction of a detailed decision tree spanning the entities in the system over time.
Necessarily, it also implies the existence of an events calendar which should link into the data flow diagrams. The information map includes the information needs of the quality managers, and may be expressed in whole or in part through the Active Control Management design paradigm detailed later in this text.
The information map will require consideration of issues including:
<ul>
<li> Information Requirements
<li> Event Calendar
<li> Reporting Obligations
<li> Performance Control Management System (eg ACM)
</ul>
</td>
</tr>
</table>
===Organisational Representation (Introduction)===
When we think of organisational representation, we traditionally think of the hierarchical organisational chart. Resembling an inverted tree, the organisational chart provided by almost all charting packages represents a cross between a representation of physical or geographic position and reporting lines - and tells us very little about how a business organisation is really organised. At best it leads to a bureaucratic, semi-accurate organisational view; at worst it is wildly incorrect, as in matrix organisations.
As with many traditional diagramming systems it is horrendously inadequate for all but the grossest simplification of an organisation.
In the BBPR, we use a Community Network model which provides far richer analysis and directly represents the positioning of an organisation within its market and community.
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
Community domains can be defined as required for the purpose of the analysis, but in the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of eight top-level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community comprises a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and, specifically, the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
You can read more about the Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis [[The Stakeholder Community Network Model|Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis here]].
===The Process Representation (Introduction)===
The full process charting model forms a language that can be represented either diagrammatically or descriptively. There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements are for diagrammatic efficiency. The core of the charting system revolves around only a few symbols, and the full model merely expands on these to provide a richer descriptive set and more analytic detail with fewer diagrammatic elements. The full model is described in [[Business Process Reengineering - Process Charting|advanced charting]].
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataFlow.png]]
</div>
</td>
</tr>
</table>
In the figure, '''''data flows''''' along, and in the direction of, the arrows between the entities, data stores and maps, while control data flows principally into, and out of, the quality manager. The crossed-rectangular shapes are entities, while the open-ended rectangular shapes are (file) data stores. The maps and quality managers are shown by circles.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Entity.png]]
</div>
</td>
</tr>
</table>
'''''Entities''''' are equivalent to people, machines, or processes external to the system being examined. In a sense they are givens in the system analysis, in that their functioning is assumed to be of a fixed standard and is excluded from redesign. Those aspects of behaviour that can be redesigned are represented by the other three object types.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataStore.png]]
</div>
</td>
</tr>
</table>
'''''Data Stores''''' are objects in which data resides from time to time. The stores are not the actual data itself, merely a representation of it. In the ‘object oriented analysis’ world, data exists in the form of messages between objects - for example, two people (entities) talking to each other (exchanging messages). Messages are essentially transient, so for data to be available for any length of time, it must be stored. Data stores include documents, files, database records, desk in-trays, etc.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Map.png]]
</div>
</td>
</tr>
</table>
'''''Maps''''' are objects which perform an operation on data other than storing it. They transport data, change data, analyse data, update a database record, produce a report, authorise a transaction, etc. The term ‘map’ means ‘mapping data from one state to another’. Maps perform the transformations of a system, but they are concerned only with data. For data to become information it must have the added dimension of quality.
'''''Quality Managers''''' are objects which administer the performance of the system. The quality manager does not transform the data handled by the system, but rather manages the system itself. Quality managers rely on the KPIs of the system and its component parts measuring variance from plan and performing the appropriate remedial action such as tuning Map parameters or escalating the problem.
In one sense the '''''Quality Manager''''' is a kind of process, but its responsibility is to modify the behaviour of the system in accordance with the purpose and objectives of the system and is therefore fundamentally different from a Map which represents the embodiment of that purpose. In another sense the Quality Manager is a kind of reactive data store - it both stores data and responds to it. The quality manager deals principally with control data, although this is by no means exclusive or necessary.
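The feedback role described above - measuring KPI variance from plan and either tuning map parameters or escalating - can be sketched as follows. The KPI values, tolerance band and actions are invented for the illustration; they are not prescribed by the BBPR method.

```python
# Sketch of a quality manager as a feedback loop: it does not transform
# business data, it measures KPI variance from plan and reacts.
# The tolerance band and the three actions are illustrative assumptions.

def quality_manager(kpi_actual: float, kpi_target: float, tolerance: float) -> str:
    """Return the remedial action for a given KPI variance from target."""
    variance = (kpi_actual - kpi_target) / kpi_target
    if abs(variance) <= tolerance:
        return "no action"            # system performing to plan
    if abs(variance) <= 2 * tolerance:
        return "tune map parameters"  # small drift: adjust the process
    return "escalate"                 # large variance: refer upwards

print(quality_manager(kpi_actual=95.0, kpi_target=100.0, tolerance=0.02))
```

The quality manager reads control data (the KPI), stores the plan (the target), and responds to variance - the "reactive data store" behaviour noted above.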
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:RecursiveShapes.png]]
</div>
</td>
</tr>
</table>
'''''Objects are recursive''''', and therefore may contain more objects of the same or different type. For example, a file contains documents (both data stores), a document contains fields (more data stores), an organisation may contain people (both entities), an organisation (entity) may contain functions (maps), while a business cycle such as Purchasing (a map) may contain an entire system of roles (entities), procedures (maps), KPI measures (quality managers) and documents (data stores).
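This recursion can be sketched with a simple composite structure. The object kinds come from the four-object model above; the Purchasing example and its member names are illustrative.

```python
# Sketch of recursive objects: any object may contain further objects
# of the same or different kind. Names and the example tree are illustrative.

class BPRObject:
    def __init__(self, kind: str, name: str, children=None):
        self.kind = kind      # "entity", "data_store", "map" or "quality_manager"
        self.name = name
        self.children = children or []

    def count(self) -> int:
        """Total objects contained in this object, itself included."""
        return 1 + sum(child.count() for child in self.children)

# A Purchasing cycle (a map) containing an entire sub-system of objects.
purchasing = BPRObject("map", "Purchasing", [
    BPRObject("entity", "Buyer"),
    BPRObject("map", "Raise Order", [BPRObject("data_store", "Order Form")]),
    BPRObject("quality_manager", "Order KPI Monitor"),
])
print(purchasing.count())  # → 5
```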
Processes (maps & quality managers) are concurrent. This means that, unless restrained by a lack of input (data to process) or awaiting an event, each process is trying to operate at the same time as every other process. This reflects reality - people do not follow a neat sequential order when interacting with one another unless explicitly constrained to do so. Instead, they operate simultaneously, at different speeds, and in self-chosen patterns. To model the world correctly we must also model this behaviour.
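Concurrency of this kind can be sketched with two processes constrained only by the availability of input. The process and queue names are illustrative; the point is that the second process runs whenever data arrives, not in a fixed sequence.

```python
# Sketch of concurrent processes: each runs whenever input is available,
# constrained by data (the shared queue) rather than by a fixed sequence.
import queue
import threading

inbox = queue.Queue()    # data store between the two processes
outbox = queue.Queue()

def transcribe():
    """First process: places raw items into the inbox as it produces them."""
    for item in ["invoice-1", "invoice-2", "invoice-3"]:
        inbox.put(item)
    inbox.put(None)  # sentinel: no more input

def approve():
    """Second process: consumes from the inbox as soon as items arrive."""
    while True:
        item = inbox.get()
        if item is None:
            break
        outbox.put(f"approved:{item}")

workers = [threading.Thread(target=transcribe), threading.Thread(target=approve)]
for w in workers:
    w.start()
for w in workers:
    w.join()

results = [outbox.get() for _ in range(outbox.qsize())]
print(results)
```

Neither process dictates the other's pace; the queue (a data store) mediates them, exactly as messages mediate concurrent maps in the charting model.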
You can read more about the process charting method in [[Business Process Reengineering - Process Charting]]
===The Analysis Tools===
The designed system will be documented with data flow charts, client-provider “performance agreements”, ACM control checklists, a decision-to-data-source matrix, and task schedule sheets cross-referenced to the data flow diagrams. These facilities can be provided electronically or on paper, as desired by the client. The degree to which the processes and documentation can be automated is restricted only by the client’s computer system capabilities and software.
====Process Representation Using Software====
There are a number of practical charting tools that can be used. For 2D representation, we recommend either ABC Flowcharter or Visio, while for a 3D client walkthrough of a designed system we recommend an MMORG such as SecondLife (http://SecondLife.com), or TrueSpace (http://www.caligari.com/).
With respect to the 2D tools, both have their strengths and weaknesses. Visio integrates well with Microsoft desktop applications, and is directly supported by a number of finance and business applications as a business process modelling environment. ABC Flowcharter has (in our view) a shorter learning curve, an excellent interface, and good integration with MS documentation tools.
In choosing a 2D tool, you should consider whether it supports diagrams:
* consisting of many linked pages
* with recursive (self referential) structures
* graphic object drill-through (ie. you can select an object, such as a process which summarises many sub-processes, and link to one or more pages that represent the steps in the process)
* containing graphic objects with unique id's, text descriptions, and other user defined data attributes that can be stored with them (eg transaction volumes, costs, probabilities, risk assessment, etc)
* editable splines for connecting shapes (bendable curved lines)
* with point and click editing
* with user defined shapes and image import
* that represent the Bishop Phillips Process Modelling shapes.
* containing URL links at least at the graphical object (including lines) level (ie. linking an object to an internet/intranet page)
* that can be imported into text documentation and presentation tools (MS Word / MS PowerPoint, etc) compatible with your business environment (standard desktop)
* that ideally can be scripted with a scripting language that allows active simulation or calculations of events and transactions occurring (optional - but a good idea)
* that can be generated directly from an electronic drafting whiteboard (optional, but saves you a lot of time).
3D tools are a much newer approach. The biggest advantage of a 3D modelling tool is that you can 'walk' the client through the business process. Possibly the only practical and right-priced ones available at the moment are SecondLife and Caligari TrueSpace. Over the years we have tried a number of approaches to this idea; until the advent of SecondLife, we built our 3D models in TrueSpace. TrueSpace is a serious 3D modelling environment and, while simple to learn as 3D graphical modelling environments go, it is not a tool for novices. Although it produces spectacular 3D models, it is less suited to walking the client through the model than to presenting a canned 3D visualisation of the business model. Recently it has gained a MORG add-on/representation, and linked with one of a number of games engines it can be used quite successfully as a walk-through environment.
With the advent of SecondLife (and the growing number of similar MORG systems that are either appearing now or soon to appear on the market), a more practical and faster solution is available (albeit less visually stunning in production quality). A SecondLife based model allows you and your client to literally enter the model as people and walk or fly around the components of your system, watching transactions visibly flow through the process, events occur, control systems filter errors, and output being produced at varying transaction rates. The building interface is fast and simple to learn, and the scripting environment allows you to rapidly simulate many different scenarios.
With such an approach you can literally have your client see the transactions flow through a virtual representation of a system (a bit like the movie 'Tron'), or build a representation of their physical environment (such as a building, or office floor) and simulate the behaviour of the people and the control system operating. The world-wide scale of MORG users means you can contract the development work to inexpensive professional builders, instead of building it yourself.
The great weakness of these environments is that they are not yet real time in terms of construction (whereas a 2D chart can (almost) be built in real time as your client describes their processes), and documentation in conventional 2D media is not a natural consequence of a 3D simulation (whereas 2D charts can be included in text based documentation with ease).
In choosing a 3D tool you should consider:
* speed of construction of 3D elements (ideally you will need a 'primitive' rather than a 'mesh' or 'nurbs' based building solution for speed)
* scripting language and particle system support (essential)
* ability to script primitives (objects) concurrently on a massive scale
* message passing support
* ability to create avatars (or primitars) that can interact with the model (ie. walk around inside it)
* availability of low cost developers/builders
* ease of installation of appropriate client software
* ownership and permanence of the 3D models built
* support for importation of textures (graphic images), sounds, animations, 3D objects, movies, etc.
* real time in-world multi participant speech support
* simplicity of visitor navigation (i.e. how hard is it for a first-time user to just walk around in the 3D environment)
* URL (web page) linking
* URL (web server data sending and receiving - eg Can you request and receive data from an off system database).
* web page display on objects (not commonly available)
====Analysis Support====
A number of analytic tools or design paradigms are incorporated into the BBPR. A few of these are introduced in the table:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Or Design Paradigm
</th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Data Flow analysis
</td>
<td>
A method of charting systems enhanced by BPC with concepts drawn from process mapping, predicate calculus, TQM, CPM (Operations Research), Entity-Relationship modelling, and a number of other analytic methods. This method excels at depicting complex data flows and process interactions simply. It captures control issues, timing constraints, events and information flows.
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Although not critical to the process, ACM provides significant advantages in process efficiency. A BPC-specific control design philosophy based on experience in the areas of corporate governance and of organisations adopting control devolution and/or multi-skilling, ACM represents a significant shift from the control paradigm of periodic audit review with heavy transaction based testing conventionally adopted by Internal Audit, and from traditional views of control system design relying on segregation of duties.
To build an ACM control system, we begin by expanding the definition of controls beyond accuracy, authorisation, completeness (etc.) to include process timeliness, achievement of business plan targets and other business objectives. Next we identify the controls appropriate for monitoring, and we collect all the associated control data into a common recording format (and ideally an automated storage system - such as MS-Access). Lastly we build a reporting framework for system performance monitoring built on the quality managers.
ACM produces control compliance information in a steady stream for the senior executive and board rather than intermittent or cyclic audit reviews often used. The compliance component of any Internal Audit unit is re-focussed to ensuring the ongoing reliability of the control compliance reports. The control system is integrated into the business processes using the Client-Provider model developed at the start of the project. ACM reporting can be automated, if desired.
</td>
</tr>
<tr>
<td>
Network Organisation Reduction
</td>
<td>
The process of defining the organisation as a community network structure forces the reduction of many diverse strategies and procedures into a clearly identifiable set of activities required for one of 11 broad service communities. The networks implicate the stakeholders in an enumerable set of collective Client Provider Service Agreements.
</td>
</tr>
<tr>
<td>
Process Dictionary
</td>
<td>
Used to assist in the identification of opportunities for streamlining cross- and intra-organisation systems, a Process Dictionary catalogues and describes each process within any business function in accordance with an agreed selection of descriptive terms.
In this way, it assists in highlighting common processes and in assessing whether it is possible and appropriate for these to be combined or shared in some suitable form.
</td>
</tr>
</table>
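The ACM approach introduced in the table above - collecting control observations into a common recording format and rolling them up into a steady compliance feed - can be sketched as follows. This is an illustrative sketch only: the `ControlEvent` schema and `compliance_summary` function are hypothetical names invented here, not part of any BPC tooling.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical common recording format for control observations.
# Field names are illustrative; ACM itself prescribes no fixed schema.
@dataclass
class ControlEvent:
    process: str   # business process being monitored
    control: str   # e.g. "authorisation", "timeliness"
    passed: bool   # did the observation satisfy the control?

def compliance_summary(events):
    """Roll a stream of control events up into per-process pass rates,
    giving the continuous compliance feed ACM favours over cyclic audits."""
    totals = defaultdict(lambda: [0, 0])  # process -> [passed, observed]
    for e in events:
        totals[e.process][1] += 1
        if e.passed:
            totals[e.process][0] += 1
    return {p: passed / seen for p, (passed, seen) in totals.items()}

events = [
    ControlEvent("invoicing", "authorisation", True),
    ControlEvent("invoicing", "timeliness", False),
    ControlEvent("payroll", "completeness", True),
]
print(compliance_summary(events))  # {'invoicing': 0.5, 'payroll': 1.0}
```

In this shape the senior executive report is just another consumer of the event stream, which is why ACM reporting can be automated once the recording format is agreed.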
==Summary: Characteristics of the ABPR Method==
Business Process Reengineering (BPR) is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective. This chapter has provided an introduction to the concept of BPR and an overview of the ABPR method. Both of these will be developed throughout the text.
Essentially BPR represents the focussing of an enormous body of theory and expertise underpinning management science into a single, all-powerful redesign strategy. Such a panacea does not exist, and we must be careful to use BPR where the fundamental organisational characteristics are present. These might include:
<ul>
<li> A discernible, consistent set of purpose(s) and objective(s) exists
<li> Design options are not restricted out of the solution set (i.e. an acceptable solution is achievable despite imposed constraints)
<li> Senior management authorise and staff support the project and the process
<li> The analytic tools match the problem set
<li> The BPR consultant has credibility with the staff
</ul>
The BPR process is best seen as a framework encompassing a wide array of analytic tools and organisation/management design paradigms. Many of these tools and paradigms can be expected to change over time as management theory is revised, while some are central to the ABPR framework. The central tools and paradigms include:
<ul>
<li> KPIs & Quality Management
<li> Data Flow Analysis
<li> Object Oriented Process Engineering
<li> Client Provider Analysis
<li> Information Mapping
<li> Data Cataloguing
</ul>
As an extremely simplified explanation, the ABPR method uses KPIs to focus the system, and classifies the participants in the system as clients and/or providers of data (etc.) to one another. The client/provider relationships are revised using a separate information (decision) map reflecting the information needs of the direct and indirect stakeholders. With the revised client/provider relationships defined and the data and information needs catalogued, process maps can be defined which reflect only what is needed to implement the system.
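The simplified flow just described - classify participants as clients/providers of data, catalogue the information needs, then keep only the flows the system actually needs - can be sketched as below. The data structures and the `process_map` function are hypothetical illustrations, not BPC's actual notation.

```python
# Illustrative sketch of the simplified ABPR flow: client/provider
# relationships are filtered against the information catalogue so the
# resulting process map reflects only what is needed.

def process_map(relationships, catalogue):
    """Keep only the client/provider flows whose data item appears in
    the information catalogue - i.e. only what the system needs."""
    return [(provider, client, item)
            for provider, client, item in relationships
            if item in catalogue]

# (provider, client, data item) triples - invented example values
relationships = [
    ("warehouse", "sales", "stock levels"),
    ("sales", "finance", "weekly orders"),
    ("sales", "finance", "legacy fax log"),  # no identified stakeholder need
]
catalogue = {"stock levels", "weekly orders"}  # from information mapping

print(process_map(relationships, catalogue))
```

The filtering step is the point: flows with no catalogued stakeholder need (the "legacy fax log" here) simply never reach the process map.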
For the sake of clarity, in this introductory chapter we have excluded many of the more complex issues facing BPR. One of these is the positioning of organisation design in a BPR exercise. It is a significant issue as it is inextricably linked to the culture of the organisation being reengineered. It is usually included to some extent in the design options, but rarely is the organisation design entirely at the discretion of the reengineer. Accordingly we must treat it as both a given structural component of the client provider analysis and an output of the process mapping (design phase).
Clearly the process mapping will impact the organisation structure, which will in turn affect the client provider relationships, while the client provider relationships affect the process mapping, and so on. It is this, and a number of similar circular relationships among the analytic components, that necessitates the simultaneous analysis and design activity of the ABPR method.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
c126b1fba94204e4a7ae2559e26d3d67bbc90f9e
Business Process Reengineering - Process Charting
0
289
325
2012-08-30T13:39:34Z
Bishopj
1
wikitext
text/x-wiki
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its business processes, including mapping of data flows, documenting process steps, designing automated and hybrid systems, defining intra- and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagrammatic representation of a set of processes that models the enveloping organisation as if it were a machine with a functional domain that encompasses the diagrammed processes.
From a computational perspective, a business process chart is a diagrammatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
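The digraph view can be made concrete with a minimal sketch: nodes stand for real-world objects (people, processes, data stores) and directed edges for the flows between them. The `ProcessChart` class and the example node names are invented for illustration; they are not part of the BPC charting language itself.

```python
# A process chart viewed computationally: a digraph whose nodes are
# real-world objects and whose directed edges are flows between them.

class ProcessChart:
    def __init__(self):
        self.edges = {}  # node -> set of downstream nodes

    def flow(self, source, target):
        """Record a directed flow from source to target."""
        self.edges.setdefault(source, set()).add(target)
        self.edges.setdefault(target, set())

    def downstream(self, node):
        """Nodes receiving flows from this node, in sorted order."""
        return sorted(self.edges.get(node, ()))

chart = ProcessChart()
chart.flow("customer", "order entry")
chart.flow("order entry", "order file")
chart.flow("order entry", "despatch")

print(chart.downstream("order entry"))  # ['despatch', 'order file']
```

Treating the chart as a data structure in this way is what later makes automated modelling directly from the chart plausible.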
===What are the Characteristics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seamlessly represent both automated and non-automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of a well-designed computer programming language - albeit a visual one. These properties include grammatical (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, the rules for "legal" combinations of these symbols, and the meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
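The parallel drawn above - chart symbols as tokens, charting rules as a grammar - can be sketched as a tiny "legality checker". The symbol kinds and the rule set below are invented for illustration; they are not the actual BPC grammar.

```python
# Sketch of a charting grammar: a set of symbol kinds plus rules for
# which combinations are "legal", mirroring a programming language
# whose tokens are chart symbols. Rule set is hypothetical.

SYMBOLS = {"entity", "process", "data_store", "data_flow"}

# Legal (source kind, target kind) pairs for a data flow - e.g. an
# entity may feed a process, but may not write straight to a data store.
LEGAL_FLOWS = {
    ("entity", "process"),
    ("process", "entity"),
    ("process", "data_store"),
    ("data_store", "process"),
}

def is_legal(source_kind, target_kind):
    """Return True when the grammar permits this combination."""
    return (source_kind, target_kind) in LEGAL_FLOWS

print(is_legal("entity", "process"))     # True
print(is_legal("entity", "data_store"))  # False
```

A grammar expressed this explicitly is also what makes machine validation of a chart - and ultimately automated modelling from it - feasible.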
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" should be able to be "proven" - at least theoretically. This implies that an algebraic representation (e.g. predicate calculus) of the charted process should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in process modelling is a complex topic, and one fraught with some potentially counter-productive implied solutions.
For example, a charting system with a unique symbol for every process might achieve completeness, but it would do so at the expense of very high grammatical complexity.
The strength of the process charting approach lies specifically in its ability to categorise, simplify, and standardise our view of a social system. If one measure of language complexity lies in the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rules that will be required to define their use.
Complexity, under such a measure, is minimised when the number of unique predefined "terms" is minimised. The more restricted the symbol set, however, the more symbols must be used to represent simple, everyday, repeating processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicity of its symbol set (and therefore grammar) can, however, lead to diagrammatic complexity.
Certain objects and their processes occur with such frequency that diagrammatic complexity is reduced significantly by expanding the core set of symbols, as shown in [[Business Process Reengineering - Chart Key]].
==Charting Example - Electronic Grants Management System==
The charts on the following pages demonstrate the business process charting method as designed by this author and improved with input from clients and staff of BPC over 24 years. The example charts represent the BPC Process Reengineering Model and the BPC Stakeholder Community model in action in a real-world situation. The resulting demonstration is a fully functional government grants management process for whole-of-government administration of grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
73ac152a4b245146897670bf781740106b14b9ef
347
325
2012-08-30T13:39:34Z
Bishopj
1
wikitext
text/x-wiki
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its businesses processes including mapping of data flows, documenting process steps, designing automated and hybrid systems, defining intra and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagramatic representation of a set of processes, that models the enveloping organisations as if it were a machine with a functional domain that encompassed the diagrammed processes.
From a computational perspective, a business process chart is a diagramatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
===What are the Characterisitics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seemlessly represent both automated and non automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of well designed computer programming language - albeit a visual one. These properties include the grammatic (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, and the rules for "legal" combinations of these symbols and meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" chould be able to be "proven" - at least theoretically. This explanation implies an algebraic representation (eg predicate calculus) of the process charted should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in oricess modelling is a complex topic, and one fraught with some potentially counter productive implied solutions.
For example, a charting system with a unique symbol for every-process might achieve completeness, but it would achieve this at the expense of very high grammatic complexity.
The strength of process charting approach lies specifically in its ability to categorise, simplyify, and standardise our view of a social system. If one measure of language complexity lies in the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rulee that will be required to define their use.
Complexity, under such a measure, is minimised when the number of unique predifined "terms" is minimised. The mover restricted is symbol set, however, the more symbols must be used to represent simple everyday-repeating processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicty of its symbol set (and therefore grammar) can lead to diagramatic complexity.
Certain objects and their processes occur with such rapidity, that diagrammatic complexity is reduced significantly by expamding the core set of symbols as shown in [[Business Process Reengineering - Chart Key]].
==Charting Example - Electronic Grants Management System==
The process charting method included on the following pages demonstrates the business process charting method as designed by this author and improved with input from clients and staff of BPC over 24 years. The example charts represent the BPC Process Reengineering Modelling and the BPC Stakeholder Community model in action in a real world situation. The resulting demonstration is a fully functional government grants management process for whole-of-government administration of government grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
73ac152a4b245146897670bf781740106b14b9ef
395
347
2012-08-30T13:39:34Z
Bishopj
1
wikitext
text/x-wiki
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its businesses processes including mapping of data flows, documenting process steps, designing automated and hybrid systems, defining intra and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagramatic representation of a set of processes, that models the enveloping organisations as if it were a machine with a functional domain that encompassed the diagrammed processes.
From a computational perspective, a business process chart is a diagramatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
===What are the Characterisitics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seemlessly represent both automated and non automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of well designed computer programming language - albeit a visual one. These properties include the grammatic (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, and the rules for "legal" combinations of these symbols and meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" chould be able to be "proven" - at least theoretically. This explanation implies an algebraic representation (eg predicate calculus) of the process charted should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in oricess modelling is a complex topic, and one fraught with some potentially counter productive implied solutions.
For example, a charting system with a unique symbol for every-process might achieve completeness, but it would achieve this at the expense of very high grammatic complexity.
The strength of process charting approach lies specifically in its ability to categorise, simplyify, and standardise our view of a social system. If one measure of language complexity lies in the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rulee that will be required to define their use.
Complexity, under such a measure, is minimised when the number of unique predifined "terms" is minimised. The mover restricted is symbol set, however, the more symbols must be used to represent simple everyday-repeating processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicty of its symbol set (and therefore grammar) can lead to diagramatic complexity.
Certain objects and their processes occur with such rapidity, that diagrammatic complexity is reduced significantly by expamding the core set of symbols as shown in [[Business Process Reengineering - Chart Key]].
==Charting Example - Electronic Grants Management System==
The process charting method included on the following pages demonstrates the business process charting method as designed by this author and improved with input from clients and staff of BPC over 24 years. The example charts represent the BPC Process Reengineering Modelling and the BPC Stakeholder Community model in action in a real world situation. The resulting demonstration is a fully functional government grants management process for whole-of-government administration of government grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
73ac152a4b245146897670bf781740106b14b9ef
505
395
2012-08-30T13:39:34Z
Bishopj
1
wikitext
text/x-wiki
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its businesses processes including mapping of data flows, documenting process steps, designing automated and hybrid systems, defining intra and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagramatic representation of a set of processes, that models the enveloping organisations as if it were a machine with a functional domain that encompassed the diagrammed processes.
From a computational perspective, a business process chart is a diagramatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
===What are the Characterisitics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seemlessly represent both automated and non automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of well designed computer programming language - albeit a visual one. These properties include the grammatic (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, and the rules for "legal" combinations of these symbols and meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" chould be able to be "proven" - at least theoretically. This explanation implies an algebraic representation (eg predicate calculus) of the process charted should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in oricess modelling is a complex topic, and one fraught with some potentially counter productive implied solutions.
For example, a charting system with a unique symbol for every-process might achieve completeness, but it would achieve this at the expense of very high grammatic complexity.
The strength of process charting approach lies specifically in its ability to categorise, simplyify, and standardise our view of a social system. If one measure of language complexity lies in the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rulee that will be required to define their use.
Complexity, under such a measure, is minimised when the number of unique predifined "terms" is minimised. The mover restricted is symbol set, however, the more symbols must be used to represent simple everyday-repeating processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicity of its symbol set (and therefore grammar) can, however, lead to diagrammatic complexity.
Certain objects and their processes recur so frequently that diagrammatic complexity is reduced significantly by expanding the core set of symbols as shown in [[Business Process Reengineering - Chart Key]].
==Charting Example - Electronic Grants Management System==
The process charting method included on the following pages demonstrates the business process charting method as designed by this author and improved with input from clients and staff of BPC over 24 years. The example charts represent the BPC Process Reengineering Modelling and the BPC Stakeholder Community model in action in a real world situation. The resulting demonstration is a fully functional government grants management process for whole-of-government administration of government grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
The Stakeholder Community Network Model
2012-08-30T15:53:24Z
Bishopj
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid-to-late 1990s and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the forerunners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole-of-government reform agenda, the tail end of the TQM experiment, the shift from paper to online workflow both within and between businesses, the rise of risk management, the progressive adoption of balanced scorecards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990s), the rise of online portals, peer-managed corporate forums, application service providers, enterprise-scale ERP and CRM systems, and web-based B2B systems, and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory holds that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, some communities are effectively permanent because they have survived multiple generations or multiple business cycles. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last for little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
The more mature the community, the more clearly these roles are differentiated and actively operating. For a community to remain stable over an extended time, the duties implied in these roles must be fulfilled.
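Since the eight participant classes above are non-exclusive, a participant's roles combine freely. A minimal sketch of such a data model (the names mirror the list above; the API itself is hypothetical, not from any library):

```python
# Illustrative model of the eight non-exclusive participant classes.
from enum import Flag, auto

class Role(Flag):
    MEMBER = auto()       # every participant holds this role
    BENEFICIARY = auto()  # goods/services consumer
    SUPPLIER = auto()     # goods/services provider
    PATRON = auto()       # funding provider
    GOVERNOR = auto()     # administers, moderates, directs
    CUSTODIAN = auto()    # infrastructure and information warehouse
    PARTNER = auto()      # complementary non-competitive provider
    PUBLIC = auto()       # potential participants

def join(roles: Role) -> Role:
    """All participants are members regardless of their other roles."""
    return roles | Role.MEMBER

# A single participant can hold several role classes at once.
alice = join(Role.SUPPLIER | Role.GOVERNOR)
```

Using a bit-flag type makes the non-exclusivity explicit: membership of one class never precludes membership of another.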
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is that it will interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of an external community's participation drawn from a single community, the more likely that external community will transition to an internal, contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
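The recursive subdivision described above can be sketched as a simple nested data structure, where a community's members are either individuals or other communities (the names and API here are illustrative only):

```python
# A minimal recursive structure: members may be people or other communities,
# so communities nest to arbitrary depth and membership is non-exclusive.
from dataclasses import dataclass, field

@dataclass
class Community:
    name: str
    members: list = field(default_factory=list)  # people (str) or Communities

    def all_people(self) -> set:
        """Flatten the recursive structure into the set of individuals."""
        people = set()
        for m in self.members:
            if isinstance(m, Community):
                people |= m.all_people()
            else:
                people.add(m)
        return people

sales = Community("sales", ["ann", "bob"])
# "bob" appears in two sub-communities: membership is non-exclusive.
org = Community("org", [sales, Community("support", ["bob", "cho"])])
```

Flattening the tree recovers the individuals, while the nesting itself captures the internal community network of the parent.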
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly defined the community's identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably at around the same time and in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming with self-nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied them), while others may be agreed through shared experience. The sharing of common interests and the need to improve the predictability of participation in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows, handshake agreements may need to be formally agreed and recorded, individuals will be formally allocated roles, and leadership will be agreed. Along the way, as disagreements arise (like who should bat first), dispute resolution mechanisms will be required.
Thus a community has been formed and gradually self-organised. If the initial casual group fails to ever define roles or find equipment supplier(s), it will be most unlikely to ever get to the stage of even the first game. If it fails to agree its meeting place and times of meetings, it will probably not achieve the second game. If it fails to identify its membership and establish an identity (and therefore a brand) and all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of them will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note, when we look at the permanent communities within many organisations, that several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that, having grown much beyond the oversight of their founders, have split into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities,
# a custodian of information and provider of infrastructure for communities, and
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc. is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed back and feed forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance on the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, identifying and labelling existing implicit and explicit communities, and mapping the physical and virtual flows between them against current planning, scorecards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems, highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different and potentially divergent simultaneous strategies while defining a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities of which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but an organisation will also usually need some form of communities formed around skills and interests (like, at the very least, consuming or providing something) in order to help it achieve its purpose.
Within each community formed around shared interests or skills are a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities, they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship and, by excluding inter-member interaction, it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community, supply roles range from marketing assistance (reviews, discussions and forum participation) to support assistance in peer help spaces, and even product improvement and testing, such as in software beta programmes. On the supplier and partner side, community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community-standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation. (IE: the groups cannot operate with complete autonomy, as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise wide collection of staff injured in forklift truck accidents, or a worldwide extra net of ECL policy advisers, or suppliers and corporate buyers for raw materials,... or any one of a long list of possible organisation specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So in a sense resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
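A hedged sketch of that digraph (the community names and exchange types below are invented for illustration): each directed edge carries the functional attribute of the dependency, so any community's dependencies can be read directly off the edge set:

```python
# The network as a directed graph: nodes are communities, and each edge
# (src, dst) -> kind records that src depends on dst for that exchange.

edges = {
    ("buyers", "logistics"): "goods",
    ("logistics", "buyers"): "delivery information",
    ("patron", "buyers"): "funding",
}

def dependencies_of(community):
    """Communities that `community` depends on, with the exchange type."""
    return {dst: kind for (src, dst), kind in edges.items() if src == community}
```

Note that the edge labels are exactly the first layer of potential service level agreements mentioned above: each labelled dependency is a candidate SLA between two communities.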
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is not achievable in practice. We cannot know, and thus enumerate, every possible stakeholder community, as each resource, and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain, is potentially a community.
Another way of viewing completeness is to test that all members of the stakeholder domain are also members of one or more of the communities in the network. The network is then complete in terms of an organisation's resource coverage.
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
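The resource-coverage test described above is straightforward to mechanise: a network is complete in this sense when no member of the stakeholder domain falls outside every community. A minimal sketch (the domain and community names are, of course, hypothetical):

```python
# Resource-coverage completeness: every member of the stakeholder domain
# must belong to at least one community in the network.

def uncovered(stakeholder_domain, communities):
    """Return the stakeholders that no community in the network covers."""
    covered = set().union(*communities.values()) if communities else set()
    return set(stakeholder_domain) - covered

domain = {"ann", "bob", "cho", "dee"}
network = {"buyers": {"ann", "bob"}, "suppliers": {"cho"}}
```

An empty result means the network is complete in resource-coverage terms; a non-empty result enumerates exactly which stakeholders the network fails to reach.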
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint-ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community, it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community. This is, in effect, a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome, such as "quality improvement", or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Ennoblement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community work flows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
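Several of the tasks above - mapping work flows, and identifying duplicated communities and flows - lend themselves to simple automation over the community register. As one hedged example (the function and the data are hypothetical, not drawn from the model's specification), duplicated communities can be flagged by comparing membership sets:

```python
def duplicated_communities(communities):
    """Return pairs of communities with identical membership -
    candidates for merging or for shared servicing."""
    seen = {}    # frozenset of members -> first community with that membership
    pairs = []
    for name, members in communities.items():
        key = frozenset(members)
        if key in seen:
            pairs.append((seen[key], name))
        else:
            seen[key] = name
    return pairs

register = {
    "resident portal": {"alice", "bob"},
    "ratepayer forum": {"bob", "alice"},   # same members, different label
    "contractors":     {"carol"},
}
print(duplicated_communities(register))  # [('resident portal', 'ratepayer forum')]
```

Identical membership is only a first-pass signal; two communities with the same members may still serve distinct purposes, so flagged pairs would need review rather than automatic merging.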
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools and the specialised applications or services it needs.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top level communities there are, the better the efficiency gains in the entire model will be - unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen, with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisational structures. None of these effects is likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, and with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, the model aims to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent - which automatically provides a structure and focus for service level agreements - and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies, and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and, specifically, the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity, while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and the degree of influence the organisation (the meta-community) has in the specific community being examined. This influence will vary over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees, and then each of these in turn being comprised of even smaller communities - say divisions, units and ultimately individuals we see that the more we subdivide the group the closer we get to a community of one member - the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is him- or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfils all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action, with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors a member of one or more of the other communities. We are therefore encouraged to both define our service offering away from competition and structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lays in the public communities.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are enough specific differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets, might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things they were manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community where they are firmly focused to the customer (note - not client) desires, and materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
One outcome of the model is immediately apparent from this example: the model blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internally or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus is the client portal. In a local government these might take the form of a resident portal, where a city rate payer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, online-store, peer forums, product reviews, newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Road construction authorities are partners who provide accident minimisation services, traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community, includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, the executive team, performance review committees, inter-government reporting, risk and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder network model is recursive. It applies organisation wide and through each sub-grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation rather than external to it.
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint-ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied results in improved and more cost efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Re-engineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid to late 1990s and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the forerunners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole of government reform agenda, the tail end of the TQM experiment, the shift from paper to online work flow both intra- and inter-business, the rise of risk management, the progressive adoption of balanced score cards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990s), and the rise of online portals, peer managed corporate forums, application service providers, enterprise scale ERP and CRM systems, web based B2B systems, and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory premises that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, there are some communities that, because of their survival through multiple generations or over multiple business cycles, are effectively permanent. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last for little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions, rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
The more mature the community, the more clearly these roles are differentiated and actively operating. For a community to reach stability over an extended time, the duties implied in these roles must be fulfilled.
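The eight participant classes above are non-exclusive, so a participant can hold several of them at once while always remaining a member. A minimal illustrative sketch of that taxonomy (the class names and the `Participant` structure are hypothetical, introduced here only to make the non-exclusive overlap concrete):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ParticipantClass(Enum):
    """The eight non-exclusive participant classes of a community."""
    MEMBER = auto()
    BENEFICIARY = auto()
    SUPPLIER = auto()
    PATRON = auto()
    GOVERNOR = auto()
    CUSTODIAN = auto()
    PARTNER = auto()
    PUBLIC = auto()

@dataclass
class Participant:
    name: str
    # Every participant is at least a MEMBER; other classes are additive.
    classes: set = field(default_factory=lambda: {ParticipantClass.MEMBER})

    def add_class(self, cls: ParticipantClass) -> None:
        self.classes.add(cls)

# Example: a supplier who also funds the community (a patron).
p = Participant("Acme Pty Ltd")
p.add_class(ParticipantClass.SUPPLIER)
p.add_class(ParticipantClass.PATRON)
```

The point the sketch captures is that classes are roles layered on top of membership, not partitions of it.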
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is that it will interact as community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community will transition to an internal contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly defined the community identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably at around the same time in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming with self-nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied them), while others may be agreed through shared experience. Shared common interests and the need to improve the predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows handshake agreements may need to be formally agreed and recorded, and individuals will be formally allocated roles and leadership agreed. Along the way as disagreements arise (like who should bat first) dispute resolution mechanisms will be required.
Thus a community has been formed and gradually self-organised. If the initial casual group fails ever to define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree its meeting place and times of meetings, it will probably not achieve the second game. If it fails to identify its membership and establish an identity (and therefore a brand) and all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of these will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note that when we look at the permanent communities within many organisations, several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown much beyond the oversight of their founders, splitting into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities,
# a custodian of information and provider of infrastructure for communities, and
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc. is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed back and feed forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance on the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, the identification and labelling of existing implicit and explicit communities, and of the physical and virtual flows between them, against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different and potentially divergent simultaneous strategies while painting a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities over which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but an organisation will also usually need some form of communities formed around skills and interests (like, at the very least, consuming or providing something) in order to assist the organisation to achieve its purpose.
Within each community formed around shared interests or skills is a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities, they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship and by excluding inter-member interaction it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community supply roles span such things as marketing assistance with reviews, discussions and forum participation to support assistance in peer help spaces, and even product improvement and testing such as in software Beta programmes. On the supplier and partner side, supplier side community roles include online supply of certifications, supplier self-registration of details, self selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation. (IE: the groups can not operate with complete autonomy as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise wide collection of staff injured in forklift truck accidents, or a worldwide extra net of ECL policy advisers, or suppliers and corporate buyers for raw materials,... or any one of a long list of possible organisation specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So in a sense resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
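The digraph reading can be made concrete with a small sketch. This is purely illustrative (the `StakeholderNetwork` class and the community names are hypothetical): nodes are communities, and each directed edge carries the function one community depends on another for - the seed of a service level agreement.

```python
from collections import defaultdict

class StakeholderNetwork:
    """A stakeholder community network as a directed graph.

    An edge (consumer -> provider) labelled with a function records
    that the consumer community depends on the provider community
    for that function (information, goods or services).
    """
    def __init__(self):
        # community name -> list of (provider community, function)
        self.edges = defaultdict(list)

    def add_dependency(self, consumer: str, provider: str, function: str) -> None:
        self.edges[consumer].append((provider, function))

    def dependencies_of(self, community: str) -> list:
        return self.edges.get(community, [])

net = StakeholderNetwork()
net.add_dependency("Sales", "Logistics", "delivery scheduling")
net.add_dependency("Sales", "Legal", "contract terms")
net.add_dependency("Logistics", "Suppliers", "raw materials")
```

Enumerating `dependencies_of("Sales")` then yields the candidate service level agreements for that community.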
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve. We cannot know, and thus enumerate, every possible stakeholder community, as each resource, and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain, is potentially a community.
Another way of viewing completeness is to first test to ensure that all members of the stakeholder domain are also members of one or more of the other communities in the network. The network is then complete in terms of an organisation's resource coverage.
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
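The resource-coverage test above is mechanical: every stakeholder resource must appear in at least one community's membership. A minimal sketch, with a hypothetical `resource_complete` helper and invented community data:

```python
def resource_complete(stakeholders: set, communities: dict) -> set:
    """Return the stakeholders not covered by any community.

    The network is complete in resource-coverage terms when the
    returned set is empty, i.e. every stakeholder resource is a
    member of at least one community.
    """
    covered = set().union(*communities.values()) if communities else set()
    return set(stakeholders) - covered

communities = {
    "Buyers": {"alice", "bob"},
    "Suppliers": {"carol"},
}
uncovered = resource_complete({"alice", "bob", "carol", "dave"}, communities)
# 'dave' belongs to no community, so this network is not yet complete
```

In an analysis exercise the uncovered set directly identifies the stakeholders for whom communities still need to be identified or seeded.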
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint-ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community, it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community, but this is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome such as "quality improvement" or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Enablement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community work flows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools and any specialised applications or services needed.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top level communities there are the better the efficiency gains in the entire model will be. Unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects are likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, the model aims to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in both government agencies and commercial enterprises, and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
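By way of illustration, the eight SCNM03 top-level communities can be encoded as a simple enumeration, with multi-community membership expressed as a set of communities per participant. The participant name and memberships mapping below are hypothetical, chosen only for this sketch.

```python
from enum import Enum

# Illustrative encoding of the eight SCNM03 top-level stakeholder communities.
class Community(Enum):
    CLIENTS = "clients"
    CUSTOMERS = "customers"
    SUPPLIERS = "suppliers"
    PARTNERS = "partners"
    CUSTODIANS = "custodians"
    WORKFORCE = "workforce"
    GOVERNANCE = "governance"
    PUBLIC = "public"

# A participant may belong to several communities at once, e.g. a staff
# member who also pays for and receives the organisation's product:
memberships = {
    "jane": {Community.WORKFORCE, Community.CUSTOMERS, Community.CLIENTS},
}
```

Representing membership as a set (rather than a single community per participant) reflects the model's insistence that communities are overlapping, not an organisational chart.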
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and the degree of influence the organisation (the meta-community) has in the specific community being examined. This influence will vary over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees, and then each of these in turn being comprised of even smaller communities - say divisions, units and ultimately individuals we see that the more we subdivide the group the closer we get to a community of one member - the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is the employee himself or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfills all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors members of one or more of the other communities. We are therefore encouraged both to define our service offering away from competition and to structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lies in the public communities.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are sufficient differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things it was manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community, where they are firmly focused on the customer's (note - not client's) desires, and the materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
One outcome of the model is immediately apparent from this example: it blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internally or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus is the client portal. In a local government this might take the form of a resident portal, where a city rate payer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, online-store, peer forums, product reviews, newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Roads construction authorities are partners who provide accident minimisation services, traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, executive team, performance review committee, inter-government reporting, risk, and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder network model is recursive. It applies organisation wide and through each sub grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal, rather than external, to the organisation.
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint-ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied results in improved and more cost efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring of mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Re-engineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid to late 1990's and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the fore-runners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole of government reform agenda, the tail end of the TQM experiment, the shift from paper to online work flow both intra and inter business, the rise of risk management, the progressive adoption of balanced score cards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990's), and the rise of on-line portals, peer managed corporate forums, application service providers, enterprise scale ERP and CRM systems, web based B2B systems and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory premises that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, there are some communities that, because of their survival through multiple generations or over multiple business cycles, are effectively permanent. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last for little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
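Because the eight participant classes are explicitly non-exclusive, a participant is best represented as carrying a set of roles rather than a single role, with "member" implied for everyone. The sketch below is illustrative only; the role strings and the `normalise_roles` helper are assumptions made for this example, not part of the theory's formal vocabulary.

```python
# Illustrative sketch: the eight non-exclusive participant classes as a set
# of roles per participant. "member" is implied for every participant.
ROLES = {"member", "beneficiary", "supplier", "patron",
         "governor", "custodian", "partner", "public"}

def normalise_roles(roles):
    """Validate a participant's roles and ensure 'member' is always present."""
    roles = set(roles)
    unknown = roles - ROLES
    if unknown:
        raise ValueError(f"unknown roles: {sorted(unknown)}")
    return roles | {"member"}  # class 1: all participants are members

# e.g. a founding figure who funds, governs and supplies the community:
founder = normalise_roles({"patron", "governor", "supplier"})
```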
The more mature the community, the more clearly these roles are differentiated and actively operating. For a community to reach stability over an extended time, it is important that the duties implied in these roles be fulfilled.
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is that it will interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of community participation from a single community in an external community, the more likely that external community will transition to an internal contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
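The recursive, fluid structure described above can be sketched as a minimal nested data type in which a community's members are either individual agents or other communities. This is an illustrative sketch under assumed names (`Agent`, `Community`, the example data), not a formal definition of the model.

```python
from dataclasses import dataclass, field

# Sketch of the recursive community structure: members may be individuals
# (agents) or other communities, nesting to any depth.
@dataclass
class Agent:
    name: str

@dataclass
class Community:
    name: str
    members: list = field(default_factory=list)  # Agents and/or Communities

    def all_agents(self):
        """Flatten the recursive structure into the set of individual agents."""
        agents = set()
        for m in self.members:
            if isinstance(m, Agent):
                agents.add(m.name)
            else:
                agents |= m.all_agents()
        return agents

# A parent community containing one direct member and one sub-community:
team = Community("widget-team", [Agent("sam")])
org = Community("org", [Agent("jane"), team])
```

The same type describes the organisation, a division, or a two-person team, which is what makes the model recursive down to the business unit level.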
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the clearer the community identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably at around the same time in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming with self nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied it), while others may be agreed through shared experience. Sharing of common interests and the need to improve predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows handshake agreements may need to be formally agreed and recorded, and individuals will be formally allocated roles and leadership agreed. Along the way as disagreements arise (like who should bat first) dispute resolution mechanisms will be required.
Thus a community has been formed and gradually self-organised. If the initial casual group fails ever to define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree its meeting place and times of meetings it will probably not achieve the second game. If it fails to identify its membership and establish an identity (and therefore a brand) and all the other functions of a cricket club it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of them will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note, when we look at the permanent communities within many organisations, that several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown much beyond the oversight of their founders, splitting into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities,
# a custodian of information and provider of infrastructure for communities, and
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated, while others are unplanned but facilitated (such as professional associations, unions, and standards bodies), and others are neither planned nor facilitated, but perhaps accommodated (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc. is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only through the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not: union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process - though not into employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feedback and feed-forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance of the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self-directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion, because membership is hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information) that flows through and around the community, the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation, and particularly communication, among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, the identification and labelling of existing implicit and explicit communities, and of the physical and virtual flows between them, against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different and potentially divergent simultaneous strategies, while defining a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities over which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self-organising social units we call communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity, in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation, geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but the organisation will usually also need some form of communities formed around skills and interests (like, at the very least, consuming or providing something) in order to achieve its purpose.
Within each community formed around shared interests or skills is a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities, they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship and, by excluding inter-member interaction, it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community, supply roles span everything from marketing assistance (reviews, discussions and forum participation) to support assistance in peer help spaces, and even product improvement and testing, such as in software Beta programmes. On the supplier and partner side, supplier-side community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community-standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modelled or analysed (i.e. they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation (i.e. the groups cannot operate with complete autonomy, as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (i.e. they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, representing all customers within a geographic area, or it might be an enterprise-wide collection of staff injured in forklift truck accidents, or a worldwide extranet of ECL policy advisers, or suppliers and corporate buyers for raw materials,... or any one of a long list of possible organisation-specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So, in a sense, resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
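To make the digraph idea concrete, here is a minimal sketch of a stakeholder community network as a directed graph whose edges carry the functional dependency between communities. The community names and dependency labels are illustrative assumptions, not part of the model itself, and a real analysis might use a graph library instead of plain dictionaries.

```python
from collections import defaultdict

class CommunityNetwork:
    """A directed graph of communities. Each edge records the function
    (information, goods or services) a consumer community draws from a
    provider community - the raw material for service level agreements."""

    def __init__(self):
        # provider -> list of (consumer, function) pairs
        self.edges = defaultdict(list)

    def add_dependency(self, consumer, provider, function):
        """Record that `consumer` depends on `provider` for `function`."""
        self.edges[provider].append((consumer, function))

    def services_provided_by(self, provider):
        """List the (consumer, function) pairs served by `provider`."""
        return self.edges[provider]

# Illustrative co-dependencies between hypothetical top-level communities
net = CommunityNetwork()
net.add_dependency("clients", "workforce", "service delivery")
net.add_dependency("workforce", "custodians", "infrastructure")
net.add_dependency("governance", "custodians", "financial reporting")

print(net.services_provided_by("custodians"))
```

Because the edges are directed and labelled by function, the same pair of communities can be linked by several distinct dependencies, which is what makes the structure a digraph rather than a simple graph.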
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve in practice. We cannot know, and thus enumerate, every possible stakeholder community: each resource, and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain, is potentially a community, so the number of candidate communities grows exponentially with the number of resources.
Another way of viewing completeness is to test whether all members of the stakeholder domain are also members of one or more of the communities in the network. Such a network is then complete in terms of an organisation's resource coverage.
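The resource-coverage test above can be sketched in a few lines. This is a minimal illustration, assuming stakeholders and community memberships are simply sets of names; real analyses would draw these from the organisation's stakeholder register.

```python
def uncovered_resources(stakeholders, communities):
    """Return the stakeholders not assigned to any community.
    The network is coverage-complete when this set is empty."""
    covered = set().union(*communities.values()) if communities else set()
    return set(stakeholders) - covered

# Hypothetical stakeholder domain and partial community assignments
stakeholders = {"alice", "bob", "carol", "dave"}
communities = {
    "clients": {"alice", "bob"},
    "workforce": {"carol"},
}

missing = uncovered_resources(stakeholders, communities)
print(sorted(missing))  # 'dave' belongs to no community yet
```

A non-empty result flags stakeholders the model has not yet accounted for, which is exactly the "well-formed" check the model demands before analysis proceeds.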
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community, it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community, but this is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome, such as "quality improvement", or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Enablement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community work flows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc.
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self-organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*capturing and archiving expert knowledge from both internal (to the organisation) and external community participants over time
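The flow-mapping step in the list above lends itself to a simple mechanical check. The sketch below flags duplicated flows: the same function delivered to one consumer community by multiple providers, a common sign of redundant effort. Flow triples and community names are illustrative assumptions.

```python
from collections import defaultdict

def duplicated_flows(flows):
    """Given (provider, consumer, function) triples, return the
    (consumer, function) pairs served by more than one provider."""
    providers_for = defaultdict(set)
    for provider, consumer, function in flows:
        providers_for[(consumer, function)].add(provider)
    return {key: prov for key, prov in providers_for.items() if len(prov) > 1}

# Hypothetical inter-community flows from a mapping exercise
flows = [
    ("workforce", "clients", "support"),
    ("partners", "clients", "support"),   # same need serviced twice
    ("custodians", "workforce", "infrastructure"),
]

print(duplicated_flows(flows))
```

In practice each duplicate found this way is a candidate for consolidation into a shared service, or for an explicit service level agreement settling which community owns the function.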
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools, and any specialised applications or services needed.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top-level communities there are, the better the efficiency gains in the entire model will be - unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen, with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects is likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, and with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, we aim to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top-level community groups will tend to be highly co-dependent, which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top-level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies, and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
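For analyses that script against the model, the eight SCNM03 top-level communities can be fixed as an enumeration so coverage and flow checks refer to them consistently. The names follow the list above; treating them as a Python `Enum` is an implementation choice, not part of the model.

```python
from enum import Enum

class SCNM03Community(Enum):
    """The eight top-level stakeholder communities of SCNM03."""
    CLIENTS = "clients"
    CUSTOMERS = "customers"
    SUPPLIERS = "suppliers"
    PARTNERS = "partners"
    CUSTODIANS = "custodians"
    WORKFORCE = "workforce"
    GOVERNANCE = "governance"
    PUBLIC = "public"

print(len(SCNM03Community))
```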
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of the greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and with the degree of influence the organisation (the meta-community) has in the specific community being examined; this influence itself varies over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where the timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as comprising smaller communities - say contractors and employees - and each of these in turn comprising even smaller communities - say divisions, units and ultimately individuals - we see that the more we subdivide the group the closer we get to a community of one member: the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is him- or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfills all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors members of one or more of the other communities. We are therefore encouraged both to define our service offering away from competition and to structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lies in the public communities.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are enough specific differences in engagement, management, ancillary services, social interaction and disclosure levels to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchisor of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets, might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things they were manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community where they are firmly focused to the customer (note - not client) desires, and materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
A couple of outcomes of the model are immediately apparent from this example: in particular, the model blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service-portal-based architectures, systems consolidation and cloud structuring (whether internally or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service-focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus is the client portal. In a local government this might take the form of a resident portal, where a city rate payer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, online store, peer forums, product reviews, newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Roads constructions authorities are partners who provide accident minimisation services, and traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury and other custodians are always an internal community. Their members are charged with maintaining assets and the lowest-level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, and server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community, includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, the executive team, performance review committees, inter-government reporting, risk and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder network model is recursive. It applies organisation-wide and through each sub-grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation – rather than external to it.
The stakeholder community network has clear relationships between its elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied results in improved and more cost efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2012-08-30T15:53:24Z
Bishopj
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Re-engineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid to late 1990's and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the fore-runners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole-of-government reform agenda, the tail end of the TQM experiment, the shift from paper to online work flow both intra- and inter-business, the rise of risk management, the progressive adoption of balanced score cards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990's), and the rise of on-line portals, peer-managed corporate forums, application service providers, enterprise-scale ERP and CRM systems, web-based B2B systems and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory premises that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, some communities, because of their survival through multiple generations or over multiple business cycles, are effectively permanent. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
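Because these eight classes are non-exclusive, every participant carries a set of roles that always includes membership. A minimal, hypothetical Python sketch (the class names and example participant are ours, not part of the theory):

```python
from enum import Enum, auto

class Role(Enum):
    """The eight non-exclusive participant classes of a community."""
    MEMBER = auto()
    BENEFICIARY = auto()
    SUPPLIER = auto()
    PATRON = auto()
    GOVERNOR = auto()
    CUSTODIAN = auto()
    PARTNER = auto()
    PUBLIC = auto()

class Participant:
    """A participant always carries MEMBER plus any other roles held."""
    def __init__(self, name, *roles):
        self.name = name
        self.roles = {Role.MEMBER, *roles}

    def has_role(self, role):
        return role in self.roles
```

Representing the roles as a set rather than a single attribute reflects the text's point that all participants are members regardless of what other classes they belong to.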
The more mature the community, the more clearly these roles are differentiated and actively operating. The longer a community is to remain stable, the more important it is that the duties implied in these roles be fulfilled.
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is that it will interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community will transition to an internal contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
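The recursive, fluid structure described above - communities whose members are agents or other communities - maps naturally onto a recursive data type. An illustrative sketch (the names are assumptions made for the example):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An individual participant (a person, or potentially an AI device)."""
    name: str

@dataclass
class Community:
    """A community whose members are agents or other communities."""
    name: str
    members: list = field(default_factory=list)

    def all_agents(self):
        """Recursively collect every agent in this community network."""
        agents = []
        for m in self.members:
            if isinstance(m, Community):
                agents.extend(m.all_agents())
            else:
                agents.append(m)
        return agents
```

Because `members` may itself contain `Community` instances, the structure subdivides to any depth, mirroring the text's point that communities are recursive down to the micro-community of one person.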
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly defined the community identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably at around the same time in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming with self-nominated and perhaps suggested or allocated roles.
Eventually the group might agree a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied it); others may be agreed through shared experience. Shared interests and the need to improve the predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows handshake agreements may need to be formally agreed and recorded, and individuals will be formally allocated roles and leadership agreed. Along the way as disagreements arise (like who should bat first) dispute resolution mechanisms will be required.
Thus a community has been formed and has gradually self-organised. If the initial casual group fails ever to define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree its meeting place and times of meetings it will probably not achieve the second game. If it fails to identify its membership, establish an identity (and therefore a brand) and all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of them will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration, or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity we will need to consider how we enable the defining characteristics of the community.
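One practical use of the characteristics list is as a completeness check on a community's design. A hypothetical sketch (the characteristic labels simply restate the list above; the example club is invented):

```python
# Defining characteristics a durable community must service (from the list above).
CHARACTERISTICS = {
    "members", "shared resources", "identity/brand", "communication",
    "defined and shared purpose", "location", "roles", "rules",
    "governance structure", "barriers to entry", "patron",
}

def missing_characteristics(addressed):
    """Return the defining characteristics a community has not yet addressed."""
    return CHARACTERISTICS - set(addressed)

# An early-stage casual group: members and a field, but no rules or governance yet.
cricket_club = {"members", "location", "shared resources", "roles"}
```

Running `missing_characteristics(cricket_club)` surfaces the gaps (rules, governance structure, patron, etc.) that, per the text, will cause the community to fail over time if left unaddressed.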
It is with some surprise that we note that when we look at the permanent communities within many organisations we find that several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown much beyond the oversight of their founders, splitting into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities
# a custodian of information and provider of infrastructure for communities
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc. is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed back and feed forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance on the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than concentrating them exclusively in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, selecting from available products (by watching videos and reading information and product comparisons provided from a central location), submitting special orders online, responding to questions from other clients in hosted forums, and advertising the organisation's products and quality in organised reviewer sites or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self-directed, and inclusive. This does not mean communities are necessarily "open-access". In fact, communities with higher barriers to entry often have the highest sense of cohesion, because membership is hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information) that flows through and around the community, the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation, and particularly communication, among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, identifying and labelling existing implicit and explicit communities - and the physical and virtual flows between them - against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different, and potentially divergent, simultaneous strategies while painting a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities of which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but the organisation will also usually need some form of communities formed around skills and interests (like, at the very least, consuming or providing something) to assist it in achieving its purpose.
Within each community formed around shared interests or skills is a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship, and by excluding inter-member interaction it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community, supply roles span such things as marketing assistance (reviews, discussions and forum participation), support assistance in peer help spaces, and even product improvement and testing, such as in software Beta programmes. On the supplier and partner side, community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, suppliers providing new product information feeds matching community standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation. (IE: the groups cannot operate with complete autonomy as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise wide collection of staff injured in forklift truck accidents, or a worldwide extra net of ECL policy advisers, or suppliers and corporate buyers for raw materials,... or any one of a long list of possible organisation specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So in a sense resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
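As an illustrative sketch only (the model does not prescribe a data structure), such a digraph can be represented with communities as nodes and functional dependencies as directed edges; the `Dependency` and `StakeholderNetwork` names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Dependency:
    """A directed, functional link between two communities."""
    provider: str
    consumer: str
    function: str  # e.g. "information", "goods", "funding"

@dataclass
class StakeholderNetwork:
    communities: set = field(default_factory=set)
    dependencies: list = field(default_factory=list)

    def link(self, provider, consumer, function):
        # Adding an edge implicitly registers both community nodes.
        self.communities.update({provider, consumer})
        self.dependencies.append(Dependency(provider, consumer, function))

    def service_agreements(self):
        # Each functional dependency is a candidate service level agreement.
        return [(d.provider, d.consumer, d.function) for d in self.dependencies]

net = StakeholderNetwork()
net.link("suppliers", "customers", "goods")
net.link("customers", "suppliers", "demand information")
print(net.service_agreements())
```

Note that the two opposing edges between the same pair of communities reflect the micro-exchange principle described earlier: transactions flow in both directions.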
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve in practice. We cannot know, and thus enumerate, every possible stakeholder community, as each resource - and every possible combination of two or more resources, up to and including the entire membership of the organisation's stakeholder domain - is potentially a community.
Another way of viewing completeness is to test whether all members of the stakeholder domain are also members of one or more of the communities in the network. Such a network is then complete in terms of an organisation's resource coverage.
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
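The resource-coverage test above reduces to a simple set operation. A minimal sketch, assuming communities are stored as name-to-member-set mappings (all names here are illustrative):

```python
def uncovered_resources(stakeholder_domain, communities):
    """Return stakeholders not yet a member of any community.

    The network is resource-coverage complete when this set is empty,
    i.e. every member of the stakeholder domain belongs to at least
    one community in the network.
    """
    covered = set()
    for members in communities.values():
        covered |= members
    return stakeholder_domain - covered

# Illustrative data: the domain may include members of the public with
# no direct dealings with the organisation.
domain = {"staff member", "client A", "supplier B", "member of public"}
communities = {
    "workforce": {"staff member"},
    "clients": {"client A"},
    "suppliers": {"supplier B"},
}
print(uncovered_resources(domain, communities))  # {'member of public'}
```

In a well-formed SCNM03 network the uncovered remainder would fall into the public community rather than being left unassigned.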
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community; this is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure, you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter-productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome, such as "quality improvement", or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
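The directed-model rule - the outcome community automatically participating in every other community - and the subsequent refactoring back to an undirected model can be sketched as follows (hypothetical names, assuming communities are modelled as member sets):

```python
def apply_directed_outcome(communities, outcome_community):
    """Directed model: make the outcome community an automatic
    participant in every other community."""
    return {name: members | {outcome_community}
            for name, members in communities.items()}

def remove_directed_outcome(communities, outcome_community):
    """Refactor back to the undirected model once the directed
    redesign has been finished."""
    return {name: members - {outcome_community}
            for name, members in communities.items()}

communities = {"customers": {"sales"}, "suppliers": {"procurement"}}
directed = apply_directed_outcome(communities, "quality improvement")
undirected = remove_directed_outcome(directed, "quality improvement")
```

The round trip mirrors the recommended practice: design with the directed model, then refactor into an undirected model for operation.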
==Step 2. Identifying and Defining the Community Ennoblement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as well as possible, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community workflows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self-organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools, and specialised application or service needs.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities among its members, the fewer top level communities there are, the better the efficiency gains in the entire model will be. Unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen, with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects are likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, the model aims to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies, and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
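The eight top-level communities, and their recursive subdivision into sub-communities, can be sketched as a nested structure. The sub-communities shown are examples drawn from the notes above; real decompositions are organisation-specific:

```python
# The eight SCNM03 top-level stakeholder communities; each value holds
# sub-communities, which may themselves subdivide recursively.
SCNM03 = {
    "clients": {},
    "customers": {},
    "suppliers": {},
    "partners": {},
    "custodians": {"IT": {}, "Treasury": {}, "Asset Management": {}},
    "workforce": {"contractors": {}, "staff": {}},
    "governance": {"board": {}, "internal audit": {}, "regulators": {}},
    "public": {},
}

def count_communities(tree):
    """Count every community in the nested structure, recursively."""
    return len(tree) + sum(count_communities(sub) for sub in tree.values())

print(len(SCNM03))                # 8 top-level communities
print(count_communities(SCNM03))  # 16 communities in this example
```

The recursive count illustrates how subdivision multiplies the number of communities an organisation must facilitate, which is why subdivision stops once organising costs outweigh the benefits.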
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and the degree of influence the organisation (the meta-community) has in the specific community being examined. This influence will vary over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where the timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say, contractors and employees - and then each of these in turn being comprised of even smaller communities - say, divisions, units and ultimately individuals - we see that the more we subdivide the group, the closer we get to a community of one member: the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is himself or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfils all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action, with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors members of one or more of the other communities. We are therefore encouraged both to define our service offering away from competition and to structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lies in the public communities.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these eight communities is composed of smaller communities with more specialised shared needs. For example, the workforce comprises two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are enough differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets might traditionally see itself in terms of functions and processes concerning widgets: widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation under the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things it manufactures.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community, where they are firmly focused on customer (note - not client) desires, and the materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
A couple of outcomes of the model are immediately apparent from this example: it blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internal or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service focused organisational architectures with defined client groups and document service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (e.g. Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus is the client portal. In a local government this might take the form of a resident portal, where a city ratepayer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, online store, peer forums, product reviews, newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Road construction authorities are partners who provide accident minimisation services, traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community, includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, the executive team, performance review committees, inter-government reporting, risk and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder network model is recursive. It applies organisation-wide and through each sub-grouping down to the individual business unit level (in fact it can also work at the individual level - but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation rather than external to it.
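The recursion described above can be illustrated with a short sketch. This is purely this editor's illustration, not part of SCNM03 itself; the unit names and the `stakeholder_map` helper are invented for the example. Every unit, at every level, carries the same eight standard communities:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# The eight standard SCNM03 stakeholder communities.
STANDARD_COMMUNITIES = [
    "Clients", "Customers", "Suppliers", "Partners",
    "Workforce", "Treasury/Custodians", "Governance", "The Public",
]

@dataclass
class Unit:
    """An organisation or business unit; the model recurses into sub-units."""
    name: str
    sub_units: List["Unit"] = field(default_factory=list)

    def stakeholder_map(self, depth: int = 0) -> List[Tuple[int, str, str]]:
        """Enumerate (depth, unit, community) rows for this unit and all sub-units."""
        rows = [(depth, self.name, c) for c in STANDARD_COMMUNITIES]
        for sub in self.sub_units:
            rows.extend(sub.stakeholder_map(depth + 1))
        return rows

# A hypothetical city government with two business units:
org = Unit("City Government", [Unit("Roads Team"), Unit("Parks Team")])
print(len(org.stakeholder_map()))  # 3 units x 8 communities = 24 rows
```

The point of the sketch is only that the community breakdown repeats identically at each level; which members of each community are internal or external varies with depth.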
The stakeholder community network has clear relationships between its elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint-ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic "best practice" benchmark it is outstanding; even when only partly applied it results in improved and more cost-efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual-obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
f54d609f8d240d7ba168fd4d101ce36b7edfe76b
Is there a cost associated with telephone support (i.e.: cost per call or issue)?
0
326
467
2013-03-25T12:21:48Z
Bishopj
1
/* Answer */
wikitext
text/x-wiki
==Answer==
No - if your annual maintenance subscription is paid and active, phone support for IT technical issues and software operational problems is covered. With a new license, or while evaluating the software, the initial install help is provided free. Your maintenance subscription also covers phone support for re-installs, software usage strategy and a reasonable volume of 'how-to' questions. If a significant volume of assistance is required, or for general risk management and consulting support, there may be separate charges, so talk to us and, if necessary, we will propose a modest quote.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
48137d0fda8fb2ee3d54def68dd80520bde62af1
Main Page
0
1
1
2018-04-03T10:54:31Z
MediaWiki default
0
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
5702e4d5fd9173246331a889294caf01a3ad3706
Real Learning in Virtual Worlds - CHAPTER 2: Literature Review
0
278
303
2018-10-28T00:34:00Z
Bishopj
1
/* 2.8.4.2 ACSII Virtual Worlds */
wikitext
text/x-wiki
<div class="nonumtoc">
=CHAPTER 2: Virtual Worlds - Concepts, History, and Use in Education (Literature Review)=
==2.1 Introduction==
Gartner (2007) predicts that as many as 80% of active internet users will have a ‘Second Life’ in a virtual world by the end of 2011. Depending on your definition of ‘virtual world’ this may seem a little ambitious. Certainly, to the extent that virtual worlds are seen to include massively multi-user online environments supporting collaborative exchange of information in shared virtual space, the prediction might prove reasonably safe. To the extent that this definition is constrained to massively multi-player online games, the prediction may prove a little “braver”.
Today’s virtual worlds represent the convergence of multiple technology streams, with the latest examples of the genre representing the merger of internet, telecommunications, instant messaging, virtual reality, 2D & 3D graphics, a variety of 3D modelling technologies, spatial sound, distributed databases, spatial indexing, mapping, streaming data transmission, physics, scripting languages, object-oriented software, agent theory, artificial intelligence, networking, economic modelling, online trading systems, game theory and many, many more technologies.
While the developers of many virtual worlds are content within the game space, some virtual world developers, such as Linden Research (developers of Second Life) have ambitions to be the web platform of the future (Bulkley, 2007). To this end a number of the commercial developers of virtual worlds have joined forces with a number of major corporate consumers, systems integrators and US government bodies to explore common standards for inter-operability of virtual world platforms which is a necessary first step in moving the technologies from the isolated proprietary place they now inhabit to a world-wide shared web platform (Terdiman, 2007).
This chapter explores virtual worlds, reviews the literature considering alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education. The chapter concludes with an examination of traditional education taxonomy and relates that to the virtual world context as a basis for structuring an approach to exploring education affordances offered by two approaches to education in virtual worlds.
==2.2 Virtual Worlds==
===2.2.1 What is a Virtual World?===
====2.2.1.1 In Search of a Definition====
“Virtual worlds are places where the imaginary meets the real”. (Bartle, 2003, p. 1)
Virtual, as defined in the Oxford Dictionary (1989) with respect to the computing context is: “… not physically existing as such but made by software to appear to do so from the point of view of the program or the user….” and defined in the virtual reality context to be “… a notional image or environment generated by computer software, with which a user can interact realistically as by using a helmet containing a screen, gloves fitted with sensors, etc.” (1997).
The term world is defined in the Oxford Dictionary (1989) as “the ‘realm’ within which one moves or lives”.
In simple terms, therefore, a ‘virtual world’ can be defined as a generated computer software realm in which a user moves, exists or lives in a manner that appears to be real to the user.
A common definition for the term ‘virtual world’ is passionately debated in the literature (see Combs, 2004; Jennings, 2007; Reynolds, 2008; Wilson, 2007). It is a term that is used to describe many types of software environments, from a simple MUD (Multi User Dungeon, also referred to as Multi User Dimension or Domain) (Bartle, 2003; Keegan, 1997; Slator et al., 2007) to a sophisticated fully immersive 3D virtual reality environment used in gaming, physical training simulators or social interaction spaces (MetaMersion; Patel, Bailenson, Jung, Diankov, & Bajcsy, 2006; Van Dam, Forsberg, Laidlaw, LaViola, & Simpson, 2000). The term virtual world can be used to describe a single-user walk-through simulated environment (Dalgarno, 2004; Youngblut, 1998) or an environment such as a massive multiplayer online role playing game (MMORPG) like World of Warcraft (Bainbridge, 2007). The term virtual world is also interchanged with other terms such as virtual environment, synthetic world, mirror world, metaverse, virtual universe, artificial world, etc[2] (Grøstad, 2007).
Bartle (2003, p. 1) provides the following definition:
<blockquote>
“Virtual worlds are implemented by a computer (or network of computers) that simulate an environment. Some -but not all- of the entities in this environment act under the direct control of individual people. Because several such people can affect the same environment simultaneously, the world is said to be shared or multi-user. The environment continues to exist and develop internally (at least to some degree) even when there are no people interacting with it; this means it is persistent.”
</blockquote>
Therefore, using Bartle’s definition in conjunction with the Oxford Dictionary definition provided above a virtual world can be defined as:
<blockquote>A shared software environment (or realm) in which a person, represented as a projected entity (such as a digitally projected image, text identity or other computational representational object), moves, exists or lives in a manner that appears real to the person; who is capable of affecting, and being affected by, that environment in a manner that simultaneously affects the experiences of other entities within it; and which generally remains persistent once the user has left the world.
</blockquote>
The key components of this definition are:
#A shared environment in which a real-world participant shares a computationally generated artificial space with other real world participants and/or other computationally generated entities.
#The nature of the real-world participant’s projection into the computationally generated virtual space.
#The characteristics of the space, which establish a sense of realism to the participant.
#The manner and extent to which the real world participant is able to affect the shared space.
#The nature and form of persistence that the artificial space retains.
Throughout this section we will examine the current state of these components, and the ideas and literature contributing to the current expression of these concepts in the form of currently available virtual worlds. The realisation of virtual worlds in software has been (and continues to be) a rapidly evolving field, continually consolidating mixed influences from fiction, mechanical and electrical engineering, computer science, gaming theory, telecommunications, social science, commerce, religion and sociology. It is a field where advances are made as much in the act of amateur invention as in formal science, and a field in which the academic literature frequently lags the leading edge of the advances by a significant degree.
===2.2.2 Recognising a Virtual World by its Features===
While there is not as yet a single common set of universally accepted attributes, the literature offers a variety of feature based definitions that attempt to provide a basis for classifying whether a given application or environment is, or is not, a virtual world. Across these competing views there are some features that are most frequently repeated.
Coming from the perspective of virtual worlds as gaming platforms, Bartle (2003, pp. 3-4) proposes that a virtual world should adhere to the following conventions:
*'''Physics''': The world contains automated rules for the players that effect change in the world.
*'''Character''': The player is part of the in-world experience, represented by a character with which they strongly identify.
*'''Interactions''': All interactions with the world are channelled through the character.
*'''Real-time''': Interactions in the world take place in real time.
*'''Shared''': The world is shared by other characters in common.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Bartle tends to use the term character for what this thesis refers to as an avatar, and considers that the player (identified as ‘the intelligence’ in this thesis) must strongly identify with that character. In the context of role-playing games, where the player assumes an identity not their own, this aspect of the feature list recognises the effectiveness of the immersion and sense of presence the player experiences (concepts we will explore later); but outside of this space, where the player and the ‘character’ may be one and the same, this feature is less of a distinguishing criterion.
His use of the term Physics in the context of an application genre that may include 3D environments is perhaps a little confusing. In these spaces, Physics most commonly refers to the physics engine that manages the simulation of avatar and object dynamics in the space (such as gravity, acceleration, force, momentum, limb movement, etc). As used by Bartle, the term includes the ‘business rules’ and behaviours of the system – the rules governing all interaction, not just those simulating physical movement.
The nature of the shared space and interactive channel imply that the actions of one player affect the experience of another.
Edward Castronova (2001, pp. 5-6) proposes that a virtual world should have the following features:
*'''Interactivity''': The world exists on one computer and can be accessed via a network (or the internet) by many simultaneous users. The actions of each user influence other users in the world.
*'''Physicality''': Users access the world by a computer, which provides a first person view of the world, the world is generally ruled by natural laws much like the real world with scarcity of resources.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Castronova’s feature requirements are essentially a subset of Bartle’s, although with the possible omission of the expectation that interaction is necessarily real time.
Sun Microsystems Inc (2008, p. 3) proposed the following common features of open virtual worlds (ie multi-user virtual worlds open to public access over the internet):
*Shared space, allowing multiple users to participate simultaneously.
*Users interact with one another and the environment.
*Persistence.
*Immediacy of the interactions.
*Similarities to the real world rules.
We might, perhaps, reject Sun’s expectation of any need to assimilate ‘real world rules’, as this would exclude many fantasy role-playing games from being classed as virtual worlds; but apart from this aspect, Sun’s list is essentially consistent with the views of Bartle and Castronova.
These three sources are essentially consistent with the body of the literature. Making allowance for additional attributes and some latitude in interpretation, we can establish a minimum feature list that would be generally accepted:
*The environment is shared;
*Interactions are in real time;
*A person participates in the world through some form of representation with which they identify and are identified and that facilitates interaction and recognition (such as a character or avatar);
*Interactivity in the world is channelled through the avatar;
*Changes induced by a participant influence the experience of the space for other participants;
*The rules governing the world and its interactions are shared and commonly applied; and
*The world is persistent.
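As a rough illustration only (the field names and helper function here are this editor's invention, not drawn from Bartle, Castronova or Sun), the minimum feature list above can be expressed as a simple checklist against which a candidate environment is tested:

```python
from dataclasses import dataclass, fields

@dataclass
class EnvironmentFeatures:
    """Minimum feature checklist distilled from the list above."""
    shared: bool             # the environment is shared by multiple participants
    real_time: bool          # interactions take place in real time
    avatar: bool             # participants project an identifiable representation
    avatar_channelled: bool  # interactivity is channelled through that avatar
    mutual_influence: bool   # one participant's changes affect others' experience
    common_rules: bool       # shared, commonly applied rules govern the world
    persistent: bool         # the world persists once a participant leaves

def is_virtual_world(env: EnvironmentFeatures) -> bool:
    """An environment qualifies only if every minimum feature is present."""
    return all(getattr(env, f.name) for f in fields(env))

# A single-user walk-through simulation fails the shared/influence/persistence tests:
walkthrough = EnvironmentFeatures(False, True, True, True, False, True, False)
print(is_virtual_world(walkthrough))  # False
```

Under this checklist a single-user walk-through environment (Dalgarno, 2004) is excluded, while a MUD or MMORPG, for which every flag holds, qualifies.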
==2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World==
While Bartle (2003) refers to a participant’s projection into a virtual world as a “Character”, the more widely accepted name today for a real world participant’s projection into a virtual world is an Avatar. This is the term this thesis will be adopting in this research.
The word avatar derives from avatara, a Sanskrit word meaning “descent of a deity” or incarnation, utilised by the Vaishnavism religious tradition of Hinduism. The Hindu concept of an avatar is thought to originate as early as the second century B.C.E (Sheth 2002). One of the most recognised Hindu deities is Vishnu (Figure 1). In Hinduism, Vishnu is said to have a standard list of ten avataras (collectively known as Dasavatara), with one of them said to be Buddha (Siddhārtha Gautama), the founder of Buddhism (Sheth 2002).
[[image:Vishnu_Hindu_Avatar_001.jpg]]
Figure 1. Hindu Avatara
Left: Visnu (or Vishnu) Hindu deity the protector and preserver of the universe
Right: Ten avatars of Visnu (Dasavatara)
(Vivekananda Centre, 2008)
In computing terms, little has changed from the original Hindu meaning of avatar. As with the Hindu avatara, the virtual world participant can be thought of as “descending” or being “projected” from reality to become a computational representation in a virtual world. In virtual worlds, an avatar is generally (although not exclusively) a graphical representation of the user’s persona (Deuchar & Nodder, 2003), although it can also be a representation of a system or a function in some applications (Sheth, 2003), or a simple name in the form of a text string (in some text-based MUDs), and is evolving to include virtualisations of other senses (such as aural and tactile) (S.-Y. Lee, Kim, Ahn, Lim, & Kim, 2005). The graphical representation of an avatar is thought to have originated in a networked multi-user virtual world game called Habitat in 1984 (Bye, 2008; Morningstar & Farmer, 1990). Early research seems to suggest that the use of digital avatars in virtual worlds provides the user with reduced inhibitions and dissolves social status, or reconstructs social status among users (Dede, 1995; Dickey, 2003; Rheingold, 1993).
The projected form is not necessarily a recognisable representation of the real world human form. In its projected form, for example, the avatar might be represented as an image of a human, an animal, an animated mechanical object, a simple name, or any form appropriate to the virtual world and within the technical capabilities of that world’s object management systems. For example, in Eve (a space-based virtual world) all avatars are space ships, whereas in Second Life (a social virtual world) an avatar can take any form (Figure 2), but regardless of appearance the avatar’s name remains the same.
[[image:SecondLife_Digital_Avatars_002.jpg]]
Figure 2. Digital Avatars of Second Life (Levine, 2007)
In terms of today’s virtual worlds, and for the purposes of this research, an avatar should be thought of as a combination of a representation, an agent and an intelligence:
#The ''representation'' may be visual, aural, tactile or any other sense conveying the presence of the avatar to other avatars or agents in a virtual world.
#The ''agent'' is the library of capabilities of the avatar in a virtual world.
#The ''intelligence'' (or actor) provides the tactical and strategic control of the avatar, which could be artificial or natural (eg human).
In a virtual world the decisions of the intelligence are communicated to, and realised by, the agent. The consequence of the agent realising (enacting/implementing) the intelligence’s commands may be a change in the state of both the agent and the representation; e.g. in a 3D graphical virtual world, a command to walk issued by the intelligence might result in the agent changing position, entering a movement or walking state, and triggering the representation to display a walking animation (enter a walking animation state).
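The walk example above can be sketched as three cooperating objects. This is a minimal illustration of the representation/agent/intelligence triad as defined in this section; the class and method names are invented for the sketch and do not come from any particular virtual world platform:

```python
class Representation:
    """Conveys the avatar's presence to others (here, just an animation state)."""
    def __init__(self):
        self.animation = "idle"

class Agent:
    """The library of the avatar's in-world capabilities."""
    def __init__(self, representation: Representation):
        self.representation = representation
        self.position = 0
        self.state = "standing"

    def walk(self, distance: int) -> None:
        # Realising the command changes the agent's own state and
        # triggers the corresponding representation (animation) state.
        self.state = "walking"
        self.position += distance
        self.representation.animation = "walking"

class Intelligence:
    """The actor (human or artificial) issuing tactical commands to the agent."""
    def __init__(self, agent: Agent):
        self.agent = agent

    def command_walk(self, distance: int) -> None:
        # Decisions are communicated to, and realised by, the agent.
        self.agent.walk(distance)

# The intelligence's walk command propagates through the agent to the representation:
rep = Representation()
avatar = Intelligence(Agent(rep))
avatar.command_walk(5)
print(avatar.agent.state, rep.animation)  # walking walking
```

The design point is the one made in the text: the intelligence never touches the representation directly; all effects flow through the agent.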
==2.4 A Taxonomy of Virtual Worlds==
===2.4.1 Introduction===
As might be expected, the literature contains extensive discussion of the appropriate taxa to be applied in classifying virtual worlds, and an equal measure of disagreement among authors as to the appropriate criteria to apply. In spite of the range of discussions, most attempts are incomplete and therefore capable of classifying in a usable form only a portion of the genre. To be fair, this space is evolving rapidly: possibly as fast as it is classified, a new entrant appears that changes the paradigm, and old entrants are updated to include new capabilities.
===2.4.2 A Taxon for Virtual Worlds===
Outside of the education and virtual reality streams, possibly the largest single family of virtual worlds is that developed for games. While not actually claiming to propose a taxon, Bartle (2003, pp. 38-61), whose pedigree is essentially from the gaming stream, proposes a set of attributes that can be used to classify virtual (game) worlds. Not surprisingly, the attributes are most relevant to multi-user, game-focussed virtual worlds, but they provide a workable superset of the current thought on the matter and with some adjustment can be extended to more general examples of virtual worlds. He suggests that a virtual world can be categorised according to the following taxa:
#'''Appearance''': To a ‘newbie’ (Bartle’s term for a new user of a virtual world application) the distinction is whether the virtual world is a text-based MUD, ASCII, graphical 2D or graphical 3D, etc. To an ‘oldbie’ (as described by Bartle) this is only an interface issue and therefore not as important as the other listed categories.
#'''Genre''': Is the world fantasy, cyberpunk, horror, social, etc.? This taxon describes the plot or setting of the virtual world and is most helpful with purpose-focussed virtual worlds. In the non-gaming or semi-gaming space occupied by some generalised social worlds, the virtual world is as much a platform on which other ‘sub-worlds’ can be based, and thus the genre of the virtual world can be all other genres. Examples of this might include PLATO and Second Life.
#'''Codebase''': Although not as important for the user, as it is hidden from them, this is an important aspect for the designer of a virtual world. The codebase defines the technical makeup of the world - reusable content and controls, scripting language, database structure, etc. This researcher suggests that the codebase is not a single taxon but should perhaps be separated into multiple taxa; in its place one might propose the content management, asset management, game engine, environment application programming interface, AI, and scripting function library within the system as more relevant technical matters.
#'''Age''': How long the virtual world lasts is an important measure of its success. Generally, the longer you can keep a player (or user) interested, the longer the virtual world survives, which in turn attracts new users and adds to the player base of the virtual world.
#'''Player base''': How large is the player (or user) base of the virtual world? This measure varies depending upon what you are counting: for example, the number of registered users, the number of avatars (a user can have more than one character in a virtual world, though in general not for simultaneous use), simultaneous users logged in, hours played per user, access over a period of time, number of active subscriptions, etc. In some worlds the meaningful measure of the player base is in fact the number of owner-occupied ‘acres’ of virtual land (as opposed to general users of the virtual world). The player base measures the current success of the virtual world - its popularity, so to speak - which in turn lengthens the age of the virtual world. Given the number of ways a player base can be structured and measured, a single measure is open to both misinterpretation and reporting manipulation, and for some measures (like subscribed users, where some subscriptions are paid and others free) may be completely erroneous when comparing one virtual world to the next.
#'''Degree to which they can be changed''': Virtual worlds vary in the degree to which a user can change or add to the content of the virtual world. Virtual worlds such as World of Warcraft (and most game based virtual environments) allow no change by the player, with all content created by the developers of the virtual world. Other virtual worlds such as Second Life, Active Worlds, TruePlay and PLATO rely on content created by the community. In the case of Second Life (for example) the entire virtual world is made from user created content, the world providing users with building tools, import and export capabilities, out-of-world interfaces and communications capabilities, an extensive library of API functions and a scripting language. The degree to which a virtual world’s content can be changed by the user adds to the technical codebase complexity and to the user’s (and, in multi-user virtual worlds, other users’) experience of and within the virtual world.
#'''Degree of persistence''': Bartle defines persistence to be the degree to which a world’s state remains intact if you shut down and restart the virtual world. He classifies persistence into ‘discrete’ or ‘continuous’ groups. At the extreme, a discrete virtual world would regenerate - described as a ‘Ground Hog’ world (named after the movie). Here all content and the location of the player would be reset to the start of play. In a continuous virtual world the content and locations are retained through a restart.<BR />Persistence also relates to what happens to the world when a user logs off: does the virtual world continue to evolve without the individual player – and if so, can the player’s state be affected while offline? A virtual world generally displays some level of persistence, and persistence is generally the term used to distinguish whether a ‘virtual world’ is really a ‘world’ or in fact just a simple ‘Ground Hog’ environment (see Gehorsam, 2003). The ultimate level of persistence is that akin to the real world, which is constantly evolving and changing regardless of our existence within it.
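The taxa above lend themselves to a simple record representation. The following Python fragment is a minimal illustrative sketch only – the field names and placeholder values are assumptions of this sketch, not Bartle's notation or reported data:

```python
from dataclasses import dataclass

# A minimal sketch of the taxa as a record type. The field names and
# the example values are this sketch's own choices, not Bartle's.
@dataclass
class VirtualWorldProfile:
    name: str
    genre: str                # e.g. "fantasy", "cyberpunk", "social"
    codebase: str             # engine / scripting platform (arguably several taxa)
    age_years: float          # how long the world has survived
    player_base: int          # a count under ONE stated measure
    player_base_measure: str  # guards against comparing unlike measures
    user_changeable: bool     # can users add to or change the content?
    persistent: bool          # does world state survive a restart?

# Illustrative only: the figures below are placeholders, not reported data.
second_life = VirtualWorldProfile(
    name="Second Life",
    genre="social (a platform for all other genres)",
    codebase="client/server with in-world scripting",
    age_years=5.0,
    player_base=0,
    player_base_measure="owner-occupied parcels of virtual land",
    user_changeable=True,
    persistent=True,
)
```

Carrying the measure alongside the count (`player_base_measure`) reflects the point made above: a bare player-base number is meaningless for comparison unless the measure behind it is stated.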
With some modification and generalisation most of the taxa can be applied in the general case of gaming and non-gaming virtual worlds. To be applied outside of the narrow RPG (Role Playing Game) grouping, the classification system would benefit from some subdivision of elements.
We have already noted codebase as one such category. Codebase is such a wide group that it could be applied to every functional capability of the virtual world not covered by another taxon, and thus is of limited help in establishing a consistent framework for classification. For example, Castronova’s (2001) taxonomy recognises a grouping under marketplaces (implying commercial functionality), while both Kish (2007) and Cavazza (2007) recognise groupings covering Paraverses (although they use different terms). In Bartle’s taxa these might both be covered as distinguishing characteristics under codebase, yet one relates to the ability to conduct real-world commercial transactions in the space, while the other addresses the merging of real-world content with virtual world content.
Persistence as framed by Bartle mixes up multiple discrete concepts – host state persistence, user state persistence, environmental evolution, and scenario persistence. This last item is typical of games (such as quest driven environments where, on restarting a ‘quest’, the user can rely on the sequence of events being a repetition of the sequence that occurred previously – effectively a ground-hog space within a larger persistent environment), and absolutely essential for simulators and learning systems, where a user taking a course should be able to rely on the lesson replaying in a consistent and predictable way each time (unless variation is an intended part of the training, as in a military battlefield virtual world). In order to classify virtual worlds, recognising these attributes independently of each other would be more helpful than identifying the world as simply persistent or not persistent; nor are the sub-features linearly related – i.e. one form of persistence does not imply the inclusion of another form of persistence (Purbrick & Greenhalgh, 2002).
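The decomposition of persistence into independent attributes can be sketched as four separate flags. The attribute names below are this sketch's own labels for the four concepts, and the example profiles are illustrative assumptions, not classifications drawn from the literature:

```python
from dataclasses import dataclass

# Persistence decomposed into four independent attributes; no one flag
# implies another. The names are this sketch's own labels.
@dataclass
class PersistenceProfile:
    host_state: bool           # world state survives a server restart
    user_state: bool           # a player's state survives logging off
    evolves_offline: bool      # world keeps changing while the user is away
    scenario_repeatable: bool  # quests/lessons replay identically on restart

# A quest-driven game: persistent, evolving world, yet quests replay as before.
quest_game = PersistenceProfile(True, True, True, True)
# A social world: fully persistent, but with no scripted scenarios to repeat.
social_world = PersistenceProfile(True, True, True, False)
# A stand-alone training simulator: lessons must replay predictably,
# and the world need not evolve between sessions.
simulator = PersistenceProfile(True, True, False, True)
```

The three example profiles differ in different flags, illustrating why a single persistent/not-persistent label cannot capture the distinctions drawn above.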
===2.4.3 Applied Taxonomies===
While Bartle proposes a reasonably extensive set of attributes (taxa) for classification, some authors have proposed simpler classification regimes, although all seem as yet to avoid claiming an actual taxonomy.
Kish (2007) recognised that with the appearance of the weakly defined ‘Web 2’ technologies, virtual worlds could be seen to encompass a wider range of social networking and world-imagining spaces. Kish’s classification groups virtual environments into five broad categories (Figure 3):
#'''MMORPGs''': Massively Multiplayer Online Role Playing Games. A category which includes text and graphical gaming environments with the common theme of role playing and containing internally a hierarchical, level based player grading system to determine expertise and implied seniority, and generally plot or quest driven and goal oriented as their linking characteristic. Typical examples might include World of Warcraft, Entropia Universe, Everquest, MUDs, etc.
#'''Metaverses''': Imagined public fantasy spaces, emphasising social interaction and creativity, and lacking a single plot or purpose for participation. Generally exhibiting a devolved structure without a single levelling system or clear environment imposed hierarchic seniority system[3]. Typical examples might include Habitat, Second Life, Active Worlds, Furcadia, etc.
#'''Paraverses''': Spaces that intersect with the real world, incorporating content from the real world and thus could be described as virtual extensions of the real world. This group potentially includes many of the Web 2 spaces that contain sufficient functionality to create in the minds of their users a ‘real’ virtual community as strongly present to the participant as their real world existence.
#'''Intraverses''': Spaces that are otherwise Metaverses or MMOLEs but private or closed to the broader public. Virtual reality environments could generally be seen to fall into this category, as well as private/corporate implementations of public virtual world spaces. Typical examples might include Qwaq, Sun System’s Wonderland, IBM’s Metaverse, etc.
#'''MMOLEs''': Massively Multi-user Online Learning Environments. Possibly the oldest class of virtual worlds, as it includes systems such as PLATO, and typified by educational environments supporting user social interaction. Primarily purpose (though not necessarily goal) driven – the purpose being learning, training, idea exchange, simulation, etc. This space includes the dedicated training / teaching environments of PLATO and the planning / simulation management systems of SIMNET, Blackboard, Boston College’s Media Grid, etc.
[[image:Kish_Virtual_Geography_003.jpg]]
Figure 3. Virtual Geography (Kish, 2007)
Cavazza (2007) proposes that a virtual world should be open (public) and contain taxa supporting strong and generalised capabilities in each of the dimensions (Figure 4):
#Social networking
#Gaming
#Entertainment
#Business
[[image:Cavazza_Virtual_Universes_Landscape_004.jpg]]
Figure 4. Virtual Universes Landscape (Cavazza, 2007)
Consequently most of the virtual worlds identified by other authors are excluded from Cavazza’s definition of virtual worlds, but included under the broad category of ‘Virtual Universe’. To illustrate this idea, Cavazza has classified a huge range of existing virtual environments:
#Social
#*2.5 & 3D Chats
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Virtual Worlds
#Game
#*MOG
#*Sports
#*MMORPG
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Adult Games
#*Virtual Worlds
#Entertainment
#*Virtual Sex
#*Virtual City Guides
#*2.5 & 3D Chats
#*Avatar Centric
#*Branded Universe
#*Virtual World Generators
#*Virtual Worlds
#Business
#*Serious Games
#*Virtual Marketplaces
#*Adult Games
#*Virtual World Generators
#*Virtual Worlds
Cavazza’s definition and classification system is extensive, and possibly the most comprehensive to date. While Kish’s classification tends to focus on functionality, Cavazza’s emphasises purpose. Nevertheless, there is significant crossover in their ideas. For example, both recognise the difference between games and social networking, and both accommodate the paraverses in a special category (Cavazza includes them in ‘Virtual City Guides’ among other groups). Cavazza’s analysis, however, lacks the accommodation of the education, training and simulation virtual spaces present in Kish’s categorisation, although it might be argued that these are covered in multiple categories including ‘Virtual World Generators’ (eg PLATO, VastPark) and Serious Games (training simulators).
==2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality==
Virtual Reality environments are generally a combination of user interface hardware (such as headsets and data gloves) and software. The availability of the (often costly or purpose built) user interface hardware has meant that the majority of these environments are either single user or very small scale multi-user environments (Jones & Hicks, 2004; Miller & Thorpe, 1995). A direct consequence of this is that Virtual Reality environments have tended to ignore the dimensions of user interaction, game play and collaboration in favour of the technology of immersion. This fact, possibly more than any other, has predisposed some authors to exclude virtual reality spaces from the domain of virtual worlds (Bartle, 2003; Yee, 2006).
While Bartle’s virtual world definition contributes part of the definition we have adopted for virtual worlds in this research, the researcher departs from the entirety of Bartle’s embodiment of virtual worlds as expanded in that work. Bartle believes that a virtual world has a meaning divergent from that of virtual reality, believing that “Virtual reality is primarily concerned with the mechanism by which human beings interact with computer simulations… [rather than] the nature of the simulations themselves” (2003, p. 3). To this extent Bartle’s definition specifically excludes Virtual Reality spaces from the definition of virtual worlds.
This researcher adopts a view consistent with some other writers in the field: excluding the body of work in virtual reality from the concept of a virtual world, by writing virtual reality spaces out of the definition, places the emphasis narrowly on the social and gaming dimensions of these worlds and away from the immersive experience. It thus excludes the vast body of research that predates or has been done in parallel to the development of gaming virtual worlds (Cosby, 1999; Heilig, 1955; Pimentel & Teixeira, 1994; Rheingold, 1992; Schroeder, 1997; Steuer, 1992; Sutherland, 1965; Walker, 1990; Woolley, 1994) and constrains the consideration of these environments in the education context to their collaborative and scripting capabilities.
Other authors have adopted definitions of the virtual world concept wider than that posited by Bartle, although in most cases still excluding some portion of the body of work that has contributed to the space. Dickey (2005, p. 439) implies an exclusion of 2D and non-visual environments in providing: “Three-dimensional virtual worlds are a networked desktop virtual reality in which users move and interact in simulated 3D spaces.” Similarly, McLellan (2004) presents 10 classifications of virtual reality, a single-user virtual world being classified as ‘through the window’ whereas a multi-user virtual world would be classified as ‘cyberspace’. Mazuryk and Gervautz (1996) make no distinction in the number of users in the virtual world but define a virtual world to be a ‘desktop VR (virtual reality)’ or a ‘Window on World (WoW)’ system. Biocca and Delaney (1995) define a virtual world to be a ‘window system’: a computer generated three-dimensional virtual world viewed either on a computer screen or with the assistance of a head mounted display.
This researcher’s view is that all of these definitions are correct, but incomplete and that a definition that allows the participation of all of these examples is the most useful and appropriate in the education context. To appreciate the reasoning behind this argument we must look at some of the history of the development of the technologies and concepts that have contributed to the current family of virtual worlds and the problems and purposes these stepping-stones intended to resolve or achieve.
Authors adopting Bartle’s view have generally also adopted the view that virtual reality is essentially a hardware interfacing technology and hence the environments managed in this space are of no consequence. The misconception that virtual reality is a collection of hardware (data gloves, head mounted displays etc) neglects the very meaning of virtual reality, which seeks to evoke a feeling of immersion and presence within the virtual space. In the virtual reality research stream, using external hardware devices to enter a virtual world is only one method by which immersion and presence are achieved (Briggs, 1996; Steuer, 1992). No external device will ensure a user’s experience of immersion if the world they enter is an unconvincing generator of an alternative reality for the participant. Furthermore, if virtual reality is to be excluded from the scope of the definition of virtual worlds, then the existence of VR plug-and-play devices – such as stereoscopic headsets, data gloves or haptic controls – that are readily available for use with many mass market virtual worlds that would otherwise fall within Bartle’s definition (for example, the Vuzix iWear headset, the Evolution Motion Glove for the PS1, the Wii Remote for the Nintendo Wii, the MS Force Feedback controller for Flight Simulator, etc.) would seem to contradict the proposed disconnect between the study of virtual worlds and virtual reality. Lastly, the exclusion of virtual reality environments from the definition of virtual worlds ignores the fact that in the 3D virtual world space many of the technologies and concepts utilised were contributed by the virtual reality research stream (as will become clear from the history presented in the following sections).
In the education context, virtual reality technologies (as expressed, for example, in simulators) are a critical and essential contribution to the pantheon of virtual (training) worlds (Bailenson et al., 2007; Dede, 2004). In this researcher’s view, virtual reality environments are a subset of virtual worlds, and the two are increasingly converging – if the space has not already converged in current virtual world examples such as America’s Army and Second Life, and massive multiplayer training environments like SIMNET (Lang, Maclntyre, & Zugaza, 2008; Lenoir, 2003; Zyda, 2005).
==2.6 Dimensioning Virtual Worlds==
===2.6.1 The Degree of Virtuality===
The degree to which a world is ‘virtual’ can be looked at as a sliding scale between physical and virtual. Milgram and Kishino (1994) present a taxonomy for mixed reality visual displays called a ‘reality-virtuality continuum’ (Figure 5). On the left hand side of the scale is the ‘real environment’, which is equivalent to the real or tangible world, while on the extreme right is the ‘virtual environment’, which is equivalent to an artificially generated world. The region between these two extremes is classified as ‘mixed reality’ (MR), made up of a combination of both real and virtual matter.[4]
[[image:Reality_Virtuality_Continuum_005.jpg]]
Figure 5. Reality-Virtuality Continuum: Representation Scale for Visual Display
(Milgram & Kishino, 1994)
Figure 6 illustrates an example of the use of the reality-virtuality continuum taken from the MagicBook Project (Billinghurst, Kato, & Poupyrev, 2001). On the left of the figure is a book that is real (ie. the real world environment); in the middle is the same book viewed through an Augmented Reality (AR) Display, where figures appear like pop-up characters on top of the book (ie. mixed reality or augmented reality); while on the right is the same book viewed within a virtual environment where the “reader” becomes the characters within the book.
[[image:The_Magic_Project_006.jpg]]
Figure 6. The MagicBook Project: An Example Of The Full Reality-Virtuality Continuum
While the MagicBook project was conceived around the integration of physical (tangible) real world objects with digital virtual world generated objects, when the real world objects are themselves digital or intangible – such as course materials of photographic images, text, or other digital content – the merging of the ‘Real World’ and the ‘Virtual World’ becomes less obvious. For example, real world authors Pamela Woodard and Wilbur Witt have published their works in the Second Life virtual world first or simultaneously with publication in the real world (Bell, 2006). The Second Life virtual world can integrate conventional HTML web page content directly into the virtual environment (Release Candidate, 2008). Content developers, and particularly trainers and presenters in Second Life, routinely import textures and slides and stream sound and video from outside of the virtual world into the virtual space.
In the context of Milgram and Kishino’s reality-virtuality continuum, this research focuses on the right hand end of the scale i.e. using a desktop display of a virtual world in which all content is delivered virtually. In contrast to the MagicBook project this research considers (in the education context) the affordances from two virtualisation strategies – a direct reproduction of the real world delivery into the virtual (in part, by importing the non virtual world generated materials into the virtual world), and a transformation of the real world material into virtual material (in part, by recasting the non virtual world materials into virtually generated form).
===2.6.2 The Degree of Immersion and Presence===
====2.6.2.1 Introduction====
Virtual reality literature often separates a user’s experience of a virtual environment into physical and psychological components (Benford, Greenhalgh, Reynard, Brown, & Koleva, 1998; Biocca & Delaney, 1995; Sheridan, 1992; Mel Slater, 1999; Mel Slater & Wilbur, 1997; Steuer, 1992). The psychological components include interaction (or connectedness) and belief – the contribution of the participant, or their willingness to believe in a reality they would otherwise know to be unreal – while the physical components are aided by the external mechanical and functional capabilities of the system.
In exploring the factors determining the effectiveness of Virtual Reality environments, Burdea and Coiffet (2003) determined that the aim of virtual reality is to achieve a trio of ‘Immersion, Interaction and Imagination’ (Figure 7), each of which holds equal significance in the user’s experience of virtual reality systems. A virtual reality system seeks to engage the user fully in the virtual space. They proposed that excluding any one of these features reduced the user to passive participation, and ultimately detracted from the perceived ‘reality’ of the experience.
[[image:Immersion_Interaction_Imagination_007.jpg]]
Figure 7. The Three I's of Virtual Reality
Steuer (1992) defined user involvement as a combination of the human experience, which in turn is dependent on the technology (Figure 8). Telepresence (or presence) is the human sensation of ‘being there’ in a virtual environment[5], and is seen as influenced in part by the technology, in terms of the vividness (richness, realism) and interactivity (response) of the environment.
[[image:Steuer_Variables_Influencing_Telepresence_008.jpg]]
Figure 8. Technological Variables Influencing Telepresence (Steuer, 1992)
Slater and Wilbur (1997; 1999) revisited these concepts in later work, defining a user’s experience in terms of immersion and presence. Immersion is seen as an objective measure of ‘systems immersion’ technology, such as field of view, quality of display, etc., while presence is seen as a subjective measure, a psychological sensation of ‘being there’. From here on we will use the terms immersion and presence as defined by Slater and Wilbur.
====2.6.2.2 Immersion====
Benford et al. (1998) propose classifications of artificiality and transportation for collaborative environments (Figure 9) that extend Milgram and Kishino’s reality-virtuality continuum. Artificiality (physical-synthetic) is equivalent to the reality-virtuality continuum. Transportation (local-remote) is the degree to which a participant becomes removed from their local space to operate in a remote space, which they define to be similar to the concept of immersion. For example, CVEs (Collaborative Virtual Environments[6]) are placed on a scale from partial to remote transportation. A fully immersive CVE would represent the ultimate level of transportation: a virtual reality system using devices such as an HMD, data gloves, and tactile and aural equipment that allowed for no outside distraction, in which the participant would be operating completely within the virtual environment and be fully remote from their local environment[7]. A desktop CVE, by contrast, is only partially immersive, as one’s local surroundings form a part of the virtual environment – e.g. a field of view that allows for head turning away from the virtual space, etc. (Sheridan, 1992). In the context of Benford et al.’s transportation scale, this research is conducted using desktop CVEs and is therefore only partially immersive.
[[image:Artificiality_Transportation_as_SS_Metrics_009.jpg]]
Figure 9. Shared Space Technology According to Artificiality and Transportation
====2.6.2.3 Presence====
Research in online gaming virtual worlds has tended to focus on the human experience (presence) of virtual worlds rather than the ‘systems immersion’ aspects, while studies of virtual reality environments have tended to consider both. This is possibly a function of the common standard interface for massively multiplayer game environments, which has traditionally been the desktop computer equipped with a mouse and keyboard. Although more advanced input devices (head mounted displays, 3D mice, etc) have been available to the mass market for many years, they are not yet widely utilised.
The degree of presence is often linked to the effectiveness of a virtual environment (Witmer & Singer, 1998), and due to its subjective nature it is possibly the most difficult aspect to comprehend, and therefore to measure (Slater & Usoh, 1993). Hence, this area has been widely researched, with various explanations as to what constitutes presence in a virtual environment (Schuemie, Straaten, Krijn, & Mast, 2001). The sense of ‘being there’ in the environment is subjective: Slater and Usoh (1993; 1994) describe presence as similar to a person’s ‘willingness to suspend disbelief’, a concept derived from the British poet and literary critic Samuel Coleridge (1772-1834), who in his autobiography (1817) describes the phenomenon whereby a person becomes so engaged in a narrative that they are willing to believe an event is true, even if only for a brief moment. Although suspension of disbelief is today often linked with mediums such as film and literature, virtual worlds (especially Role Playing Game (RPG) worlds) provide many of the same traits, in which the user can be thought of as an actor within the virtual world who forms a part of the storyline.
A number of presence classification strategies have been proposed by various authors. We will consider:
#Schroeder - focussing on the importance of social interaction
#Bartle – focussing on the degree of commitment in the environment
Schroeder (2006) presents presence in a continuum of shared virtual environments (SVE) within a three-dimensional model (Figure 10). Presence (x), copresence (y) and connected presence (z) can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. Connected presence can be thought of as the extent to which a relationship is mediated when presence and copresence exist. Mapping is done by comparison with a physical face-to-face relationship (0,0,0) and an entirely immersive environment such as a networked Cave (1,1,1). For example, face-to-face sits at (0,0,0): there is no presence (and thus no copresence), as no meeting is taking place in a virtual environment; whereas in the case of a networked Cave (1,1,1) the entire relationship (and environment) is virtual, with affordances for high connected presence.
[[image:Presence_Copresence_Connected-Presence_010.jpg]]
Figure 10. Presence, Copresence, and Connected Presence
In different media for being there together
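Schroeder's model can be sketched as points in a unit cube. Only the two anchor placements given in the text (face-to-face and the networked Cave) are encoded below; the coordinate-validating helper and the distance measure are this sketch's own illustrative additions, not part of Schroeder's model:

```python
import math

# Each medium is a point (presence, copresence, connected presence),
# with every coordinate constrained to the unit interval.
def shared_space_point(presence, copresence, connected):
    for v in (presence, copresence, connected):
        if not 0.0 <= v <= 1.0:
            raise ValueError("each coordinate must lie in [0, 1]")
    return (presence, copresence, connected)

face_to_face = shared_space_point(0.0, 0.0, 0.0)    # no virtual mediation
networked_cave = shared_space_point(1.0, 1.0, 1.0)  # fully mediated and shared

# Euclidean distance offers one crude way to compare media placements.
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(face_to_face, networked_cave))  # sqrt(3), about 1.73
```

Placements for intermediate media such as desktop SVEs would be judgement calls, which is why only the model's two stated anchors are given coordinates here.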
Of interest in Schroeder’s model is the comparison of desktop SVEs and online computer games. The example given in the model for a desktop SVE is Active Worlds, a massively multiplayer online (MMO) social virtual world, and the example provided in his paper for an online game is Quake, which at the time provided for up to 16 players sharing a common space. Both are virtual worlds, use text chat and sound, and use avatars to project the participant into the virtual world (although Quake takes a first person view exclusively). For the purpose of the analysis the main differences were perceived to be the number of simultaneous players sharing the common virtual space and the imposition of clear game driven objectives in Quake, and the absence of those same game driven objectives in Active Worlds. Yet Active Worlds was seen as providing a higher level of connected presence. Why? The distinction was seen to lie in the concept of the ‘game’ rather than the number of players, when compared to the other SVEs presented in the model above. Active Worlds is a social world in which no plot is provided to measure the success or failure of an individual, unlike Quake, where the measure of success is clear and the entire activity and function of the environment is the relentless pursuit of that individual success. It was therefore deduced that a social world provides for more connected presence than an individually focussed, plot driven gaming virtual world (at least as analysed by Schroeder).
Schroeder’s observation of higher connected presence in social virtual worlds seems to fit with Heeter’s (1992; 2003) definition of social presence, where she defines presence in terms of individual presence, social presence and environmental presence. The presence of an individual is increased when social relationships are formed, based upon the social component of perceptual stimuli. When an environment or situation is focused on the relationship (rather than, say, killing a monster as in RPGs) a higher social presence will be achieved.[8]
Bartle (2003, p. 42) identifies a system of levels of immersion (which in this paper we have defined as presence[9]) based upon a linear scale of: Player (the real person), Avatar (the digital puppet), Character (representation in the world, e.g. character name, role, etc) and Persona (your identity in the virtual world, where the player is the Character and is in the virtual world). Persona is similar to the concept of presence: if your character is killed, ‘you feel like you have died’ – there is no distinction between the character and the player; they are one, the Persona. Bartle believes that the avatar and character are just steps along the way to persona. Persona is when a person ‘stops playing the world and starts living in the virtual world’.
==2.7 Influences on Virtual Worlds from Art and Literature==
===2.7.1 Introduction===
The concept of a virtual world is by no means unique to computing. The thought of exploring an imaginary realm has captivated people’s imagination throughout time.
“If we define that a virtual world is a place described by words and/or projected through pictures, which creates a space in the imagination real enough that you can feel you are inside of it, then the painted caves of our ancestors, shadow puppetry, the 17th-century Lanterna Magica, a good book, play or movie are all gateways to virtual worlds. Humanity’s most powerful new tool, the digital computer, was also destined to become a purveyor of virtual worlds, but with a new twist: The computer enables the virtual world to be both inhabited and co-created by people participating from different physical locations.”(Damer, 2007, p. 2)
At least with respect to the massively multiplayer online virtual worlds/role playing games (MMOVW, or MMORPG), all of today’s exhibits can trace their paradigms to literature. Some, such as Eve, Entropia Universe and World of Warcraft, are amalgams of a body of works and ideas, while others, such as MUD1 (Sword of the Phoenix (Howard, 1932)) and Second Life (Snow Crash (Stephenson, 1992)), were directly inspired by specific literary works.
Consequently, to properly understand the ‘state of the art’ represented by today’s connected, multi-user virtual worlds and the gaming, social and business rules they have adopted to govern them, it is essential to consider the context from which they have been derived, and the art that has influenced their creators. While some operational paradigms in virtual worlds are technology constraints, functional capability constraints can be as much a condition of the imagined world being implemented as a real constraint of the technology of the day. To appreciate this fact one need only compare the camera controls of Project Entropia with those of Second Life – two environments of comparable age – or the commercial capabilities of these two environments with those of World of Warcraft. In each case the differences and apparent restrictions are a game design decision rather than a technology constraint.
===2.7.2 Virtual Worlds of the Arts===
James Pearson (2002) believes that from as early as 30,000 years ago, in the Chauvet Cave in France, shamans used cave art as a means to document their experiences of travel to the dream world. Packer and Jordan (2002) also draw this similarity in their book on virtual reality, describing how the Cro-Magnon in 15,000 BC in the Lascaux caves of south-western France used cave art (Figure 11), with candles and the acid aroma of animal fat, to create a magical theatre of the senses.
[[image:Cave_Art_BC_011.jpg]]
Figure 11. The caves of Lascaux: Cave Art 15,000 BC
The German composer Richard Wagner’s (1813-1883) (Figure 12) concept of Gesamtkunstwerk (total artwork) has also been cited as an early pioneering treatment of immersion and presence in virtual worlds (Grau, 1999; Klich, 2007; Packer & Jordan, 2002). Wagner believed that “Artistic Man can only fully content himself by uniting every branch of Art into the common Artwork”, a synergy that includes not only the performance but all that surrounds it, so that mankind “...forgets the confines of the auditorium, and lives and breathes now only in the artwork which seems to it as Life itself, and on the stage which seems the wide expanse of the whole World” (Wagner, 1849, p. 184 & 186).
[[image:Wagner_Gesamtkunstwerk_012.jpg]]
Figure 12. Richard Wagner's Gesamtkunstwerk (Total Artwork)
===2.7.3 Virtual Worlds of Fiction and Fantasy===
There are numerous examples of virtual worlds that have been explored through fiction and fantasy. Each has contributed to the vision of virtual worlds becoming a reality (Bartle, 2003; Chesher, 1994).
In Lewis Carroll’s novel Alice's Adventures in Wonderland (1865), Alice falls down a rabbit hole to explore a fantasy world inhabited by peculiar, anthropomorphic creatures. Similarly, in Carroll’s follow-on novel, Through the Looking Glass (1871), Alice explores a world behind a mirror. Hattori (1991) saw Lewis Carroll’s novels as a paradigm for modern virtual reality systems (Figure 13), blending the physical space with fantasy in a rapidly changing environment. To this extent, Carroll’s works provide a perfect analogy for the design and development of virtual worlds (Rosenblum, 1995; West Virginia University, 2008). An explorative virtual world was realised in the children’s computer game The Manhole (1988-2007), which was based upon Carroll’s Alice’s Adventures in Wonderland (Wikipedia, 2008a).
[[image:Alice_via_Caroll_and_Hattori_013.jpg]]
Figure 13. 'Through the Looking Glass' Carroll (1871) & 'The World of Virtual Reality' Hattori (1991)
Within the fantasy literary genre, a key influence has been the works of J. R. R. Tolkien, starting with The Hobbit (1937) and its sequel The Lord of the Rings (1954, 1955) (Figure 14), an adventure fantasy that takes place in an imaginary world called Middle-Earth, containing races such as Hobbits, Wizards, Elves, Orcs, Dwarves and Trolls. Tolkien’s literary style was so popular that the Oxford dictionary recorded the term tolkienesque for his approach[10].
[[image:JRR_Tolkein_Book_Covers_014.jpg]]
Figure 14. The Hobbit & The Lord of the Rings by J. R. R. Tolkien (1937, 1954, 1955)
With respect to today’s virtual worlds, Tolkien’s contribution lies not merely in the construction of a raft of characters, racial groups, social concepts and interaction rules for role playing game inhabitants, but most importantly in his deep backgrounding of the imagined worlds. He did not merely describe his characters within the context and flow of the story line; he extended beyond what was needed to tell a story into what was needed to make us believe in the real existence of his virtual worlds, providing the reader with immaculate detail and description to immerse them in the world of Middle-Earth. Both books contained land maps (Figure 14), and the final volume of The Lord of the Rings (released in three parts) contained appendices describing chronologies, histories, family trees, languages and translations, and a calendar and dating system. Being a professor at Leeds and Oxford Universities, he approached his work more like an academic anthropological study of an imagined world than a novelist (Macmillan, 2008).
In so doing, Tolkien demonstrated a fundamental understanding of a core strategy in establishing convincing presence – the necessity for a consistent, credible back story underpinning the virtual world. It is an early example of the depth of design that many later virtual worlds would exhibit in order to create a convincing sense of presence for the participant (Bartle, 2003; Schmidt, Kinzer, & Greenbaum, 2007).
Two virtual worlds that have been translated from Tolkien’s literature are the online virtual world ‘Lord of the Rings Online’ (2007) and PLATO’s MUD virtual world ‘Mines of Moria’ (1974).
More recently, literature has turned to imagining realities in which computational virtual worlds are a fundamental component of the plot. It is from this group that many of the terms now used to describe aspects and elements of virtual worlds are derived or were popularised, such as ‘avatar’, ‘metaverse’, ‘cyberspace’, etc. Some examples of novels in which a computational virtual world is central to the plot are True Names (Vinge, 1981), Neuromancer (Gibson, 1984) and Snow Crash (Stephenson, 1992) (Figure 15).
[[image:Recent_VR_Literature_Covers_015.jpg]]
Figure 15. Recent Literature: True Names (Vinge, 1981), Neuromancer (Gibson, 1984), Snow Crash (Stephenson, 1992)
'''Vernor Vinge’s True Names''' is not as well known as other novels in this genre, but it was the first to present the concept of a person entering a computational virtual world and meeting other people in ‘the other plane’ (Kelly, 1995). It was also unique in bringing the concept of anonymity to the digital world, with one’s digital persona (handle) being different from one’s real self and with a necessity to hide one’s real identity – one’s true name (hence the title). It was translated to the computational virtual world in the form of ‘Habitat’ – the first graphical social networking virtual world (Farmer, 1992).
'''William Gibson’s Neuromancer''', a true cyberpunk[11] novel, is possibly the most widely quoted in the virtual environment space (Chesher, 1994). In this novel Gibson coined the term cyberspace, with the concept of a viable parallel online world capable of critically impacting events and commerce in the real world.
'''Neal Stephenson's Snow Crash''' is where the term Metaverse was coined. The Metaverse is a planet-sized city with one continuous street 65,536 kilometres (2<sup>16</sup> km) in length, along which millions of people (known as avatars) travel daily in search of entertainment, trade or social interaction. Although similar, in one sense, to Neuromancer, it came from a different perspective in that people actually lived in the Metaverse – not as cyberpunks getting up to mischief, but as everyday people living a mainstream real life in the virtual world. In this world real commerce was conducted, and virtual artefacts were bought and sold with real world consequences – a vision since realised in the development of the virtual world Second Life.
Hollywood also contributed to the fantasy of virtual worlds becoming reality. Films such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and The Matrix (Wachowski & Wachowski, 1999) (Figure 16), to name just a few, gave us the visuals of virtual worlds that the books could only describe, and in some cases explored the haptic interfaces now being realised (Chesher, 1994).
[[image:VW_Films_Tron_LawnmowerMan_Matrix_016.jpg]]
Figure 16. Hollywood Films
Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992), The Matrix (Wachowski & Wachowski, 1999)
At the time of their release, the novels and movies discussed above may have seemed futuristic and their concepts unobtainable, but with advances in networking, computational processing power and our understanding of the sociology of virtual environments, we are today much closer to them (if not already past them). A ‘jack-in’ device that stimulates our nervous system to travel into cyberspace (Neuromancer, Gibson, 1984) may still be a little way off (and may be too intrusive for some), and smelling odours or feeling textures within a virtual world may never be quite the same as the real life experience, but much that once seemed unimaginable in these works has become reality. With technological advances and the rapid adoption of internet enabled online virtual worlds, many of these concepts are less science fiction and more science fact than they once were.
==2.8 The History of Computational Virtual Worlds==
===2.8.1 Introduction===
In a lecture delivered in 1965, Ivan Sutherland took the first steps towards combining the computer-based design, construction, navigation and habitation of software generated virtual worlds (Packer & Jordan, 2002). Here Sutherland laid down a vision for the development of virtual worlds, as paraphrased by Brooks (1999, p. 16):
<blockquote>
“Don’t think of that thing as a screen, think of it as a window, a window through which one looks into a virtual world. The challenge to computer graphics is to make that virtual world look real, sound real, move and respond to interaction in real-time and even feel real.”
</blockquote>
The new-born medium of the graphical, digital virtual world experienced a “Cambrian Explosion” of diversity in the 1980s and ‘90s, with offspring species of many genres: first-person shooters, fantasy role-playing games, simulators, shared board and game tables, and social virtual worlds. (Damer, 2007)
The massively multiplayer online virtual worlds of today, with their world-wide user bases, are essentially a consequence of the mass adoption of the internet which commenced in the early 1990s. Since the internet first achieved general acceptance, virtual worlds have advanced substantially in technical capabilities, graphics and number of subscribers (Figure 17) (Woodcock, 2008). See Appendix B: MMOG Analysis, for a break-down of the MMOGs contained in this graph.
[[image:MMOVW_Growth_Rate_017.jpg]]
Figure 17. Massive Multiplayer Online Virtual World Growth Chart 98-2008
The virtual worlds of today (such as World of Warcraft, Entropia Universe, America’s Army and Second Life) represent a convergence of several disparate computational, technical and social origins and drivers. Current virtual worlds combine 3D visualisation, game theory, text messaging, animations, context and text sensitive gesturing, natural language processing, spatial voice & audio, artificial intelligence, agency theory, physics, connectedness, persistence, business strategy, sensory hardware and haptic interfaces, telecommunications, 2D image processing, video chroma-keying, social networking and many other influences to achieve their sense of immersion and presence. In this section we explore some of the milestones along these convergent paths.
As many of the influences that have contributed to our latest virtual worlds are derived from research streams that were pursued concurrently over more than 50 years, we shall look at the history of virtual worlds in six streams:
#Hardware based user interfaces and virtual reality environments
#Early graphical computer games
#Text and Text+ based Virtual Worlds
#2.5 and 3D graphical multi-player virtual worlds, broken down into:
#: a. MMORPGs
#: b. Social Virtual Worlds
#Simulation and Training Worlds
It should be noted that, while we will be considering the history in these streams, some virtual worlds necessarily exist in more than one stream. The grouping is that of the researcher, based on an extensive assessment of the literature, rather than the view of any one author.
===2.8.2 Hardware Based User Interfaces and Virtual Reality Systems===
====2.8.2.1 Introduction====
These two areas are grouped together not because Virtual Reality (VR) systems are a hardware solution, but because work on virtual reality worlds has generally aimed for extremely high levels of both immersion and presence, and has therefore generally (although not always) been coupled with hardware in the form of purpose built user interfaces designed to assist the sense of immersion, such as headsets, data gloves, etc.
The importance of the progress in VR systems to virtual worlds is that it has contributed to or assisted much of the fundamental graphical rendering technology, 3D animation studies and spatial awareness research, and conceptualised the immersive aspects of virtual worlds.
====2.8.2.2 Sensorama====
One of the earliest inventions in the genre of virtual world simulators was developed by the cinematographer Morton Heilig. Inspired by Fred Waller’s work with Cinerama[12], Heilig presented a paper in 1955, ‘The Cinema of the Future’ (reprinted in Packer & Jordan, 2002). In an extension of Wagner’s (1849) Gesamtkunstwerk (total artwork) concept (Holmberg, 2003), Heilig believed that the logical extension of cinema was to provide the audience with a first person experience of film using all their senses – “Open your eyes, listen, smell, and feel—sense the world in all its magnificent colors, depth, sounds, odors, and textures—this is the cinema of the future!” (Packer & Jordan, 2002, p. 246)
[[image:Morton_Heilig_Sensorama_Simulator_018.jpg]]
Figure 18. Morton Heilig, Sensorama Simulator, U.S. Patent #3050870, 1962
Heilig developed and patented the Sensorama Simulator (Figure 18) in 1962. The Sensorama was a single person simulator that offered the viewer a multi-sensory, fully immersive theatre. The viewer sat to watch a short three-dimensional stereoscopic movie that included stereo sound, an odour generator, force feedback handle bars, chair motion and wind on the viewer's face (Rheingold, 1992). Heilig believed that the Sensorama Simulator could be the next generation of theatre, placed in hotels, lobbies or any small space that could fit his miniature theatre (Heilig, 1955, p. 345).
Heilig also recognised that the Sensorama Simulator offered training and learning potential for educational and industrial institutions (Rheingold, 1992, p. 58), but unfortunately the Sensorama Simulator never took off; it arrived at “a time when the business community couldn’t figure out what to do with it” (Laurel, 1991, p. 52). This may have been different a decade later, when Pong kicked off the arcade game industry and education, industry and government saw great potential in investing in virtual world technology, as they did with the Head Mounted Display (HMD).
====2.8.2.3 Head-Mounted Display====
In 1968 Ivan Sutherland presented the first computerised graphical HMD (Figure 19) (Sutherland, 1968)[13]. The HMD had a cathode ray tube (CRT) for each eye, presenting a simple three-dimensional wire-frame view of a room with motion tracking as the viewer moved their head. It became known as ‘The Sword of Damocles’ after the Greek legend of a man placed in a precarious position of luxury with a sword suspended above his head (Oxford Dictionary, 1989); similarly, the HMD had a computer suspended above the user's head, attached by a mechanical arm (Figure 19, right) (Carlson, 2003).
[[image:HUD_The_Sword_of_Damocles_019.jpg]]
Figure 19. Head Mounted Display first called The Sword of Damocles (Sutherland,1968)
The HMD was a significant milestone in the development of virtual reality technology, and has since been used in a variety of virtual world applications. It holds advantages over a traditional computer monitor, such as allowing full head and body movement, uninterrupted viewing in fully immersive HMDs, and simultaneous viewing of real world and virtual world artefacts in ‘see-through’ HMDs, sometimes called Augmented Reality Displays (Rolland & Hua, 2005).
Today’s HMDs are more compact than Sutherland’s 1960s prototype (Figure 20). The figure shows, on the left, an HMD used for mixed reality environments similar to that designed by Sutherland and, on the right, an immersive HMD compatible with several online and gaming virtual worlds.
[[image:HUD_See_Through_and_Immersive_020.jpg]]
Figure 20. Today's Head Mounted Displays - Left: See-Through HMD - Right: Immersive HMD
===2.8.3 Early Graphical Computer Games===
Computer games have had a large influence on the evolution of virtual worlds, both in the development and in the use of the technology. The contribution of games includes computational game theory, 2D and 3D graphics, social modelling, simulation, strategies for achieving presence, artificial intelligence, computational game physics and, possibly most significantly, the delivery of a massive consumer market to fund and drive the investment needed for innovation and technology improvement. By far the majority of today’s online virtual worlds were conceived and/or delivered as games; many have subsequently evolved into general business or training platforms, sometimes referred to as Serious Games (Annetta, Murray, Laird, Bohr, & Park, 2006).
The early computer games can be traced to a few innovative applications (Figure 21):
*'''Tennis for Two''': In 1958 William Higinbotham developed the first electronic game simulator, using an oscilloscope display that showed a two-dimensional side view of a tennis court. It was a two player game in which each player could control the direction of the bouncing ball by turning a knob on a hand held device. Originally developed by Higinbotham to occupy visitors to Brookhaven National Laboratory during open days, the game had queues of people waiting to play (Brookhaven National Laboratory, n.d.). Tennis for Two introduced the concepts of a shared multi-player electronic game experience, a rule based environment managed by a machine, and an electronic space where the actions of one player in the shared space affected the experience of another. The attention the game attracted demonstrated the willingness of participants to accept the visual and sensory limitations of a machine managed game environment and immerse themselves in the experience.
*'''Spacewar!''': The idea originated in 1961 with Steve Russell at the Massachusetts Institute of Technology (MIT); by 1962 the game had been released with assistance from his colleagues. Spacewar! was the first official release of a two-dimensional computer game.[14] It was a two player game, each player controlling a spaceship that would fire bullets at the other before being pulled into the middle by the sun. Developed originally to demonstrate the power of the new PDP-1 computer, the game was a good demonstration of both the graphics capabilities and the processing power of the machine (Computer History Museum, n.d.; Markowitz, 2000). Later, in 1969, Rick Blomme modified the game to run on PLATO, which made it the first game to be networked (Koster, 2002; Mulligan, 2002). While Tennis for Two was the first multiplayer electronic game, Spacewar! was the first computer based multiplayer game. It thus contributed the same key concepts and ideas as Tennis for Two, only for the first time in a computer managed environment.
*'''Maze War''': In 1973-1974 Steve Colley developed the first three-dimensional ‘first person shooter’ (FPS) game, Maze War, at NASA Ames Research Center. A player would navigate around a maze searching for other players to shoot. As seen below (top right), the player had a first person view (the eyeball seen in this picture is the other player). Placing the player ‘in-world’ as a part of the game is a significant concept of virtual world games. Maze War also provided other innovations now common to virtual worlds, such as instant messaging, levelling and non player robot characters (Damer, 2007). The game, which started as a two player game, was eventually connected to ARPANET (the forerunner of our current internet network technology), allowing several users in remote locations to play and interact (Colley, n.d.; Damer, 2004). Maze War can therefore lay claim to being a progenitor of virtual worlds, but not an actual virtual world, because of its lack of persistence.
[[image:Early_Computer_Games_1958_To_1974_021.jpg]]
Figure 21. Early Computer Games 1958 - 1974
*'''DOOM (1993) (II, 1994)''', a 3D FPS game, was influential on both a conceptual and a technical level (Friedl, 2002; Mulligan, 2000). In DOOM the concept of Maze War was re-implemented in a much more graphically rich 3D environment. Although only a single player game, its key innovation of relevance was the method used to manage the rendering of the 3D space so that multiple non-player characters could participate in the 3D environment with the player. The strategy adopted was essentially to divide the world into many small rooms surrounded on all sides by walls (essentially a cave system); by rendering only a single room at a time, the entire resources of the computer could be devoted to a known, confined rendering space, achieving the illusion of a highly detailed rendering with the limited computational resources available on the PCs of the day. Although higher quality 3D rendered games had been available some seven years earlier on Amiga computers from 1986 (including some utilising real-time ray tracing technology), these relied on dedicated, proprietary, games-oriented graphics cards and did not provide a 3D space management paradigm that could be easily translated to the future demands of online 3D games. The DOOM model could, precisely because it was architected for the graphically and processor challenged generalised home PCs of the day rather than for proprietary games machines such as the Amiga. The DOOM games engine was utilised in many subsequent games and later formed the basis of the model adopted for the online game Quake (Petrich, n.d.; Wikipedia Doom, 2008).
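The room-at-a-time strategy described above can be sketched in a few lines. The following is a minimal illustration only, not DOOM's actual engine code, and all names (Room, render_frame, the example contents) are hypothetical; it assumes each room knows only its own contents and its exits to neighbouring rooms:

```python
# Minimal sketch (hypothetical names) of the room-at-a-time strategy:
# the world is a graph of small enclosed rooms, and only the room the
# player currently occupies is handed to the renderer each frame.

class Room:
    def __init__(self, name, contents):
        self.name = name
        self.contents = contents  # walls, monsters, items in this room only
        self.exits = {}           # direction -> neighbouring Room

def connect(a, b, direction, back):
    """Link two rooms in both directions."""
    a.exits[direction] = b
    b.exits[back] = a

def render_frame(player_room):
    # All rendering effort is spent on one confined space, so even a
    # modest CPU can draw that space in rich detail.
    return [f"draw {obj}" for obj in player_room.contents]

hall = Room("hall", ["wall", "imp"])
cave = Room("cave", ["wall", "medkit"])
connect(hall, cave, "north", "south")

print(render_frame(hall))  # only the hall's contents are drawn
```

Moving the player to an adjacent room simply changes which single room is passed to the renderer; the rest of the world costs nothing per frame.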
Around the time of DOOM the games industry realised the importance of connecting people together for online gaming. Seeing the opportunity, developers started adding modem and LAN play, and later TCP/IP functionality, to their games, allowing both single and multiplayer connectivity. Early games allowed up to 4 players, but today’s games can have up to 64 players in a single game session (Quake Wars[15]). Some of the better known brand names included:
*'''Quake''' (1996, a multiplayer extension of DOOM) saw over 80,000 people connected to 10,000+ simultaneous game sessions (Mulligan, 2000).
*'''Warcraft''' (1994) (II, 1995), which would eventually become the basis of the largest MMORPG today, World of Warcraft (2004), which now has over 11 million subscribed users (Blizzard Entertainment Inc, 2008).
===2.8.4 Text Based Virtual Worlds===
====2.8.4.1 Text Virtual Worlds: MUDs====
In 1978 the first MUD (Multi User Dungeon) outside of the PLATO system (discussed under Training and Simulators) was created by a Computer Science undergraduate, Roy Trubshaw (shortly afterwards joined by Richard Bartle), at Essex University in England. A text based virtual world, coined a MUD by Bartle, it was based upon Robert E Howard’s (1932) fictional tale ‘The Phoenix on the Sword’. MUD1[16] was an adventure role playing game, with game levelling and chat rooms, which allowed up to 32 players to connect simultaneously over a remote connection (Figure 22) (Bartle, 2003).
[[image:Bartle_The_First_MUD_022.jpg]]
Figure 22. The First MUD: Roy Trubshaw and Richard Bartle (1978)
Early in the game’s history, Essex University, on whose computers the game was hosted, became a part of ARPANET (the forerunner of the internet), and soon after MUD was distributed through that network and being played at universities throughout the world. Some of these institutions were also open for public access. Although copyrighted, many variations of MUD1 were made and distributed freely, born of what Bartle (2003) describes as either player inspiration or pure frustration with the 32 player limitation, which made it impossible to play when dial-in lines were fully allocated.
Keegan (1997) identifies two main classifications of MUDs developed during this time (Figure 23) - the Essex MUDs (Trubshaw and Bartle’s) and Scepter of Goth (1978). Unfortunately Scepter died an early death: the game was sold and soon afterwards passed to the creditors when the purchasing company ran out of money (Bartle, 2003). Most MUDs were therefore based upon the ideas and technical structure of Trubshaw and Bartle’s MUD (Bartle, 2003; Keegan, 1997).
[[image:Basic_MUD_Tree_Structure_023.jpg]]
Figure 23. Basic Tree Structure for MUD classification
MUD1 introduced a number of concepts retained by most of today’s virtual worlds. Among which are:
*The role and effectiveness of text based narrative and text communication that contributed to, rather than detracted from, the sense of presence.
*Persistence in game play.
*Shared game space and cooperative (team based) activity.
*Non-player artificial intelligences, called AIs (or non player characters), as part of the experience.
*Region based environment management.
*Role-playing as a central game theme.
*Characters and avatars (albeit text based in the early MUDs).
*Game defined goals but player implemented plots.
Region based environment management is a computational aid that warrants particular attention. It was also used by the DOOM 3D graphics engine to manage multi-user environments, allowing the computer to render the shared space one discrete region at a time. In DOOM this was a room; in MUD1 it was a cave; in more recent virtual worlds it may be as much as a 65,536 sqm region (Second Life). This strategy provides a method of scaling virtual worlds to many regions by distributing the region management across many discrete servers, but imposes practical limits on the number of players that can be present in any given region at an instant in time (Hu & Liao, 2004).
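The region-management strategy just described can be sketched as follows. This is a minimal illustration under stated assumptions: the 256 m region side corresponds to a 65,536 sqm Second Life-style region, while the player cap and all names (RegionServer, region_for, admit) are hypothetical:

```python
# Minimal sketch (hypothetical names) of region-based environment
# management: the world is split into fixed-size regions, each owned by
# a discrete server, with a practical cap on players per region.

REGION_SIZE = 256            # metres; 256 x 256 = 65,536 sq m per region
MAX_PLAYERS_PER_REGION = 100 # illustrative cap, not a real product limit

class RegionServer:
    def __init__(self, region_coord):
        self.region_coord = region_coord
        self.players = set()

    def admit(self, player):
        # The scaling trade-off: a full region refuses entry.
        if len(self.players) >= MAX_PLAYERS_PER_REGION:
            return False
        self.players.add(player)
        return True

servers = {}  # region coordinate -> owning server

def region_for(x, y):
    """Map a world position to the server owning that region."""
    coord = (int(x // REGION_SIZE), int(y // REGION_SIZE))
    if coord not in servers:
        servers[coord] = RegionServer(coord)
    return servers[coord]

# Two players standing 300 m apart land on different region servers,
# so each server only ever simulates its own 65,536 sq m patch.
region_for(10, 10).admit("alice")
region_for(300, 10).admit("bob")
```

The world scales by adding servers as players spread across regions, at the cost of a hard ceiling on how many players can congregate in one region at once.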
MUD1 had a significant impact on virtual world design and development, dominating the online game space until the mid 1990s; MUD1 is therefore often marked as the beginning of the first generation of online virtual worlds (Bartle, 2003). MUD1 can still be played online today at british-legends.com (CompuServe, 2007).
====2.8.4.2 ASCII Virtual Worlds====
In the early 1980s pseudo graphical interfaces were added to some MUDs in the form of ASCII virtual worlds. ASCII (American Standard Code for Information Interchange) is the most widely adopted character encoding on western computer systems. ASCII virtual worlds provided a pseudo-graphical display, making use of shape symbols and character positioning escape sequences to create crude planar maps of the terrain (dungeon) environment. These maps enhanced the description of the room provided by the text.
ASCII pseudo graphical virtual worlds provided the player with a view of the world improved over the simple text prompt and description of MUDs. An example of an ASCII game can be seen below (Figure 24): Islands of Kesmai (IOK). Developed in 1982 and released in 1984, the game provided the player with a third person, overhead view of the world. Walls were denoted by [], fire by ** and the players by letters (Bartle, 1990). IOK was CompuServe’s (a US ISP) best selling game, with players paying up to $12.50 per hour to play (based upon connection time, not game played), and usually had between 10-60 players online simultaneously (Bartle, 1990). Other ASCII games around this time were MegaWars I & MegaWars III (1983), NetHack (1987 (O'Donnell, 2003)), Sniper! and The Spy (Bartle, 1990).
[[image:RPG_Islands_Of_Kesmai_024.jpg]]
Figure 24. Islands of Kesmai ASCII Text Role Playing Game (1982-84)
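A display of this kind is simple to reproduce. The sketch below is a minimal illustration in the spirit of IOK's notation (walls as [], fire as **, players as letters); the map layout and all names are invented for illustration, not a reconstruction of the actual game:

```python
# Minimal sketch of an ASCII pseudo-graphical display in the style of
# Islands of Kesmai. Each map cell is two characters wide; a player's
# letter overrides the tile beneath them.

TILES = {"wall": "[]", "fire": "**", "floor": ". "}

def render(grid, players):
    """grid: 2D list of tile names; players: {(row, col): letter}."""
    lines = []
    for r, row in enumerate(grid):
        cells = []
        for c, tile in enumerate(row):
            if (r, c) in players:
                cells.append(players[(r, c)] + " ")  # player letter
            else:
                cells.append(TILES[tile])
        lines.append("".join(cells))
    return "\n".join(lines)

grid = [
    ["wall", "wall", "wall"],
    ["wall", "floor", "wall"],
    ["wall", "fire", "wall"],
]
print(render(grid, {(1, 1): "A"}))
```

Redrawing this map after every command gives the player a crude but effective spatial view of the dungeon using nothing but character output.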
By the mid to late 1980s, home computing and online networking service providers opened the gates to huge expansion for online virtual worlds. People paid for networking services by the hour, which gave these providers a huge incentive to get their subscribers hooked on virtual worlds. There was big money to be made, with 70% of one provider's (GEnie's) revenue in the early 1990s coming from games. By 1993 a study showed that 10% of the network traffic on the NSFNET backbone (a precursor to the internet, consisting mainly of government and university sites) belonged to MUDs (Bartle, 2003).
===2.8.5 Graphical Virtual Worlds===
The text based MUDs evolved into two different streams: the 3D First Person Shooters such as DOOM and Quake which adopted the room at a time view of the world for 3D rendering, and the 2D graphical online virtual worlds that appeared in the early 1990s. Early examples include NeverWinter Nights (1991-1997), Shadow of Yserbius (1992-1996) and Kingdom of Drakkar (1992-Current) (Figure 25).
[[image:Graphical_2D_Virtual_Worlds_025.jpg]]
Figure 25. Graphical 2D Virtual Worlds
Unlike Habitat and WorldsAway (discussed under Social Networking Virtual Worlds), which predated these games, appearing in the mid-1980s, the graphically enhanced text based games were fantasy role playing games - basically MUDs with graphics. Although 2D, some of these games were displayed isometrically, at an angle, which gave the player an illusion of a three-dimensional view; for this reason these games are sometimes referred to as 2½D worlds (Bartle, 2003). These games used more sophisticated graphics (than the pseudo graphical solutions) to improve the sense of presence experienced by the players, while retaining the text based narrative.
By the mid 1990s, with nearly 10 million internet hosts (Figure 26) (Slater III, 2002; Zakon, 2006) and price wars between providers, the internet opened its doors to millions, bringing hordes of inexpert computer users wanting to play games (Bartle, 2003). Game design had improved along with the graphical elements of virtual worlds; graphics rendering capabilities on standard PCs and the emergence of common graphics file standards made the development of virtual worlds possible, practical and more economical.
[[image:InternetParticipatingHosts_Count_1990_to_1998_026.jpg]]
Figure 26. The Internet No. of Participating Hosts Oct. ‘90 - Apr. ‘98
====2.8.5.1 MMORPGs====
By the mid 1990s we saw the first 3D virtual world online, Meridian 59 (1996-2000 & 2002-Current), although technically it used a pseudo-3D graphics engine (Axon, 2008; Bartle, 2003), providing a first person view in which the player could view all angles of the environment (Figure 27). It marked the beginning of a new era of virtual worlds, with a massive 25,000 people signing up for the beta release (Axon, 2008). Unfortunately it met with limited commercial success (Bartle, 2003; Friedl, 2002) and was shut down in 2000, but it was resurrected in 2002 and the updated version is online today at meridian59.neardeathstudios.com.
[[image:Meridian_59_First_3D_Online_Virtual_World_027.jpg ]]
Figure 27. Meridian 59 First 3D Online Virtual World (1996)
The turning point for online virtual worlds was Ultima Online (1997-Current). Ultima had already met with success with the Ultima computer games series. With its online launch it had 50,000 subscribers within 3 months, and it was the first online virtual world to crack the 100,000 subscriber threshold within 12 months of release (which it did in under 6 months) (Bartle, 2003; Woodcock, 2008). This added a new dimension to the term multiplayer, in what has now come to be known as a Massively Multiplayer Online Role Playing Game, or MMORPG. Subscriptions peaked at 250,000 in 2003, with 75,000 being reported in December 2007 (Woodcock, 2008).
Ultima Online, a 2½D graphical virtual world, has remained visually much the same (Figure 28), although the client that runs the game (the same concept as a web browser) had a makeover in 2007 with Kingdom Reborn (right). The game has received regular expansions to the world, which provide new challenges and adventures for its players. Back in 2001 the client was upgraded to 3D (Wikipedia Ultima, 2008), but Electronic Arts recently announced they will be de-supporting their 3D client, continuing to support only the 2D client going forward (Electronic Arts, 2007).
[[image:Ultima_Online_028.jpg]]
Figure 28. Ultima Online (1997-Current)
Other MMORPGs that started around the mid to late 1990s, and which can still be played online today, are Furcadia (1996, the longest running), The Realm (1996, the second longest, launched 15 days after Furcadia), Lineage (1998), EverQuest (1999) and Asheron's Call (1999).
In the more recent MMORPGs of today, not much has changed in game design from the original RPGs, but technically they have improved and provide much better graphics for the player (Figure 29). They have also increased substantially in popularity, with the largest subscription based MMORPG, World of Warcraft, recently climbing to over 11 million players (Blizzard Entertainment Inc, 2008). These players do not all play in one virtual world; they are separated into different realms - the same game, but with different people. This contrasts with social virtual worlds like Second Life, where all the users share one virtual world. In the next section we discuss social online virtual worlds, which, although an MMORPG can exist within such a world (as mentioned earlier), follow a model of a virtual world very different from that of the dedicated MMORPGs.
[[image:MMOZRG_Eve_and_WOW_029.jpg]]
Figure 29. MMORPG's Eve & World of Warcraft
====2.8.5.2 Social Virtual Worlds====
The first attempt at a commercial large scale multi-user game was made by George Lucas’s Lucasfilm Games. Habitat, developed by Chip Morningstar and Randall Farmer, started development in 1985 (McLellan, 2004; Ray, 2008; Slator et al., 2007). Habitat was built to support thousands of simultaneous users, ran on the Commodore 64 home computer and was distributed via the network service provider Quantum Link (later known as AOL). Inspired by the science fiction novel ‘True Names’ (Vinge, 1981), the world contained a fully-fledged economy where citizens could own a virtual business, build a house, fall in love, get married and even establish their own self governing laws (Morningstar & Farmer, 1990). Habitat, a 2D graphical world, looked similar to a cartoon (Figure 30, left), with the avatar (digital self) taking a third person view of the world. The storyline was based upon life, rather than the fictional storylines of the MUDs, which placed greater emphasis on the social aspect of the world. Lucasfilm's Habitat was first released as a pilot in 1986, then later in 1988 as Club Caribe in North America, which reportedly sustained a population of 15,000 participants by 1990 (Morningstar & Farmer, 1990). In 1990 it was released in Japan as Fujitsu Habitat and, after extensive modifications, was realised again in 1995 as WorldsAway (Figure 30, right) (Damer, 2007) and again as Dreamscape in 2008.
[[image:VW_Habitat_and_Worldsaway_030.jpg]]
Figure 30. Habitat (86) First Graphical Virtual World Precursor to Worldsaway (95)
Habitat introduced some key concepts in virtual worlds:
*The term ‘avatar’ to the general virtual world community;
*The idea of focussing on social networking as a key form of game play;
*An economy where people could trade both in-world currency and artefacts; and
*Most importantly, the concept of living in a virtual world and leading an alternate life that was not dictated by the rules of a game (as in the dedicated MMORPG environments).
More recent social networking virtual worlds include Active Worlds (1995, 1997-current)[17], Second Life (2003-current) and There (2003-current) (Figure 31) – all of which have attracted a significant volume of educational interest as platforms for the delivery of learning. The generalised nature of the social networking worlds means that they tend to be more diverse in the range of facilities provided and the purposes to which they can be applied than the role-playing game systems. They have generally provided participants with some form of content creation tools, including the importing and/or exporting of non-virtual-world artefacts. In the next section we discuss further the aspect of education in virtual worlds.
[[image:VW_SecondLife_and_There_031.jpg]]
Figure 31. Social Virtual Worlds: Second Life & There
===2.8.6 Simulation and Learning Systems===
====2.8.6.1 PLATO====
PLATO (Programmed Logic for Automated Teaching Operations) was a system designed for computer-based education at the University of Illinois that started in the early 1960s. Originally developed as a classroom course system (Figure 32), improvements in mainframe technology had by 1972 allowed up to a thousand simultaneous online users, making it the first public online community featuring electronic course delivery, online chat, bulletin boards, 512 x 512 resolution monitors and 1200 baud connection speeds (Unger, 1979; Woolley, 1994). With over 15,000 hours of instructional development, PLATO was possibly the largest ever investment in educational technology (Garson, 2000).
[[image:PLATO_Lab_Image032.jpg]]
Figure 32. University of Illinois PLATO Lab & Terminal (1961-2006)
By the mid 1970s games had made their way onto the university mainframes with great success. Between 1978 and May 1985 about 20% of time spent on PLATO was game usage (Woolley, 1994). Games appeared such as Spacewar! (1969, discussed earlier), Empire (1973, multi-user space shooter based upon Star Trek), DND (1974, MUD[18] based upon the game Dungeons and Dragons), Mines of Moria (1974, MUD, 248 mazes based upon Tolkien’s Lord of the Rings), SPASIM (1974, 32-player multi-user FPS space ship game)[19], Airfight (1974-75, a 3D flight simulator precursor to Microsoft’s Flight Simulator), Oubliette (1977, first-person 3D MUD) and Avatar (1977-79, first-person 3D MUD) (Bartle, 2003; Lowood, 2008; Pellett; Wikipedia, 2008b; Woolley, 1994). See below (Figure 33) for some examples of MUDs held on PLATO. Many of the games on PLATO were recreated for commercial use as arcade or personal computer games (Goldberg, 2002; Mulligan, 2002; Woolley, 1994).
[[image:PLATO_Popular_MUD_Games_Developed_For_PLATO_033.jpg]]
Figure 33. PLATO: Some Popular MUD Games Developed for use on PLATO (1974-1979)
By 1985, after going commercial, PLATO had established a system of over 100 campuses worldwide (Garson, 2000). Known as the ‘ultimate electronic information and communication utility’, offering over 200,000 hours of courseware (Figure 34) with local dial-up at 300 or 1200 baud, access to both social and educational contacts was among the many advances of PLATO that made it an attractive system for the academic community at large (Small & Small, 1984). Over time, with improvements in technology and the cost of maintaining old technology, the final PLATO system was turned off in 2006 (Wikipedia, 2008b).
[[image:PLATO_Online_Course_Count_1984_034.jpg]]
Figure 34. PLATO Over 200,000 online courses by 1984
A web site has been established for the preservation of PLATO at cyber1.org (VCampus Corporation, 2008), which holds many of PLATO’s games and courseware for public download.
====2.8.6.2 SIMNET====
Military virtual world simulators started with a project called SIMNET (SIMulator NETworking). SIMNET was a DARPA project that enabled the first large scale real-time networked battlefield simulator. Development and implementation occurred on several levels between 1983 and 1990 (Cosby, 1999; Miller & Thorpe, 1995).
Prior to SIMNET, military simulators consisted of immersive virtual reality training devices such as cockpit simulators. Cockpit simulators offered a replicated environment of the ‘real thing’: for example, an aeroplane cabin would be built in its entirety with motion and sensory feedback, using pre-programmed software to produce repetitive simulations that provided an individual with mastery skills such as low-to-ground dog-fighting or missile avoidance training (Miller & Thorpe, 1995). SIMNET provided a cheaper alternative for certain types of training than the cockpit simulators and further offered training in ‘collective skills’, which Miller and Thorpe (1995) define as cohesive team operations skills, distinguished from the individual mastery skills taught in cockpit simulators.
SIMNET, a multiuser virtual world (Figure 35), consisted of real battle grounds with manned vehicles (tanks and helicopters), command posts, semi-automated forces where a single operator could control many vehicles in the simulation, and the ability to record simulations from any viewpoint (known as the flying carpet) so that they could be replayed, statistically analysed and reported upon. At the conclusion of the program there were 250 simulators operating in nine locations (4 of which were in Europe), providing real-time battle engagements that were directly under the control of the participants (Lenoir, 2003; Miller & Thorpe, 1995).
[[image:SIMNET_Battlefield_Simulator_035.jpg]]
Figure 35. SIMNET: Battlefield Simulator at Fort Knox USA (1983-1990)
SIMNET had a substantial impact on military training after being recognised as the key success factor in winning the 3-day ‘Battle of 73 Easting’ in the Gulf War (1991), which led to several projects based upon the SIMNET technology (Figure 36) (Foley & Gifford, 2002), with the US government commissioning $2,549 million in 1997 for modelling and simulation projects (Lenoir, 2003).
[[image:US_Military_Networked_Simlator_Projects_1938_To_2001_036.jpg]]
Figure 36. Timeline of US Military Network Modelling and Simulator Projects (1983-2001)
In 1997 a project named Synthetic Theater of War (SToW) commenced, a program to construct an environment combining various simulators into one large-scale distributed battle simulator capable of involving thousands of participants (Budge, Strini, Dehncke, & Hunt, 1998; Tiernan, 1996). This project has since become Joint Semi-Automated Forces (JSAF) (Hardy et al., 2001), which now enables more than 100,000 simultaneous simulations at a time (US Joint Forces Command, 2008). The Australian military has also adopted the JSAF platform to build its own Course Of Action Simulation (COA-Sim) for joint military operations training, exercises and planning (Carless, 2006; Gabrisch & Burgess, 2005).
====2.8.6.3 Military Use of Commercial Games Engines & The America’s Army====
In 1996, General Krulak of the US Marines tasked the Marine Combat Development Command to explore and approve the use of commercial games engines for military training purposes. One outcome of this effort was the collaboratively developed Marine Doom, based on id Software's shareware Doom engine and Doom level editor. The simulation could be configured for special missions (such as hostage rescue) immediately prior to engagement and used to rehearse the planned mission (Lenoir, 2003).
In July of 2002 the US Military released a milestone in multi-user training game simulators in the form of America’s Army: Operations (Lenoir, 2003; Zyda, 2005). Based on Epic Games ‘Unreal’ games engine, the game created a virtual world that reproduced aspects of a career in the US Army, including ‘boot-camp’ commencement and weapons and tactical training through to various operations scenarios. Although originally developed and released as a recruitment tool, the game was also claimed to be utilised to improve training outcomes by army instructors at Fort Benning (Zyda, 2005).
Now, with 26 subsequent releases (as of 2008) and available for the PC, cell phone and Xbox, the game has more than 9 million registered users exploring entry-level to advanced training and operations in small units (Figure 37). Beyond a focus on realism that extends to accurate tree placement in training courses at the simulated training camps, the game adds a further dimension of presence for participants through the active involvement of current and former real-world soldiers as players in the game (designated with a star icon in player profiles), interacting with non-military participants (Department of the Army, 2008).
[[image:Americas_Army_037.jpg]]
Figure 37 America's Army (2002)
From a training perspective, anecdotal evidence from army trainers is that sessions in training scenarios such as the firing range or obstacle courses improve subsequent results in the real-life versions of these activities (Zyda, 2005). The US Army, possibly one of the largest investors in virtual world game technology, recently announced plans to spend $50 million USD over the next 5 years to create 70 gaming systems in 53 locations around the world for combat training (Robson, 2008).
==2.9 Virtual Worlds for Education==
===2.9.1 Architecture Considerations===
====2.9.1.1 Introduction====
To properly appreciate the discussion of the literature examining educational directions in virtual worlds, the researcher provides a brief overview of the key architectural differences to assist the reader. This material is based on the researcher’s examination of a variety of game environments and virtual worlds, and discussions with experienced and knowledgeable users of these environments, rather than being sourced from the work of other authors. As such the discussion is interpretive rather than authoritative.
Some of these environments have existed for only a few years, and have not yet enjoyed a comparative analysis undertaken by the academic community. As such, this discussion might not normally reside in the literature review, but it is felt that the placement of this discussion in this sub-section will assist the reader in better appreciating the issues explored in the literature discussion throughout the remainder of the section.
====2.9.1.2 Considerations of Operational Design====
While all of today’s major virtual worlds include capabilities for user interaction, sharing of the environment, persistence, avatars, business rules, and streamed audio and text, there are substantial differences in the technologies used to deliver the virtual experience. While some of these differences may create only marginal differences in the world experience of the casual user, from the perspective of the educator and content creator the differences are substantial.
The major offerings can be viewed under the following groups (note: in each category the researcher has selected only a few example worlds, in most cases other options also exist):
#Proprietary closed engine (e.g. World of Warcraft, Everquest)
#Client resident closed content and world model with open engine (e.g. shareware Doom)
#Streamed (or semi streamed) closed content and world model with closed engine (e.g. Entropia Universe)
#Open client resident content and world model with closed engine (Flight Simulator X, America’s Army, Unreal games, Quake, Doom)
#Open streamed content and world model (HiPiHi, TruePlay, Active Worlds)
#Open streamed content and world model with out-of-world interfaces (Second Life V1, VastPark)
#Open streamed content and world model with out-of-world interfaces and open client (Second Life V1.2)
#Open streamed content and world model with out-of-world interfaces, open client and open server (DeepSim)
'''Architectural Components and Implications in Education'''
Below are some of the architectural components and their implications for the structure of a virtual education environment.
{| border="1"
|'''Architectural Components'''
|'''Implications in Education'''
|-
|Closed Proprietary System
|A closed proprietary system cannot generally be altered. These systems are generally not appropriate for education purposes unless the existing virtual world itself is built for the purpose of the training (such as a purpose-built simulator). Closed systems can still be used in education for group interaction and discussions, if not for lectures or anything requiring more than text or audio (assuming the system supports group audio communications).
|-
|Closed or Open Environment
|Whether content and world model is closed or open determines whether the textures, objects and artefacts of the world can be modified or created by users. This ability is essential if the world is to be utilised in education as anything more than a 3D discussion forum.
|-
|World Content
|Whether the content and world model is client resident or streamed goes to the complexity of distributing course content, and the dynamics available in delivery. If the content is streamed, it can be changed in real time, but will usually require a high-speed internet connection. Systems supporting streamed content generally also include the tools for developing some, if not all, of the streamable content. If the content is client resident, slower client connection speeds can generally be tolerated, but the content must be centrally published, distributed to client systems and installed locally prior to use. It cannot be changed in real time, and content production will not generally be supported directly in the virtual world tool set, often requiring advanced 3D modelling skills in dedicated 3D modelling environments.
|-
|World Interfaces
|The existence of out-of-world interfaces goes to whether content from other sources, such as internet web pages, audio or video, can be streamed into the world and integrated with the world content and model. Systems capable of providing this capability with streamable open content offer the greatest potential for inexpensive production of course material and distribution of that material to students.
|-
|Client / Server Engine
|Whether the client or server engine is open or closed goes to whether the hosting software itself can be modified. Generally this should not be necessary for education if the capabilities of the engines driving the world are otherwise sufficient. Where the content / world are otherwise closed, but the engines are open, the existing content and world could be replaced by interfacing the games engine to a new world with new content.
|}
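The distinction drawn in the table between client resident and streamed world content can be illustrated with a small sketch. This is a minimal, hypothetical model (the class and asset names are invented for illustration and do not correspond to any real platform's API): a streamed world fetches content live from the server and so sees educator edits immediately, while a client resident world works from a snapshot that must be republished and reinstalled before changes appear.

```python
# Illustrative sketch (hypothetical names): client-resident vs streamed
# content delivery, as discussed in the table above.

class WorldServer:
    """Hypothetical central store of world content."""
    def __init__(self):
        self.assets = {"lecture_slide_1": "v1"}

    def publish_snapshot(self):
        # Client-resident model: content is copied out and distributed
        # to clients as a separately published install.
        return dict(self.assets)

class ClientResidentWorld:
    def __init__(self, snapshot):
        self.local_assets = snapshot  # installed locally prior to use

    def get(self, name):
        return self.local_assets[name]

class StreamedWorld:
    def __init__(self, server):
        self.server = server  # requires a live (high-speed) connection

    def get(self, name):
        return self.server.assets[name]  # fetched on demand

server = WorldServer()
resident = ClientResidentWorld(server.publish_snapshot())
streamed = StreamedWorld(server)

server.assets["lecture_slide_1"] = "v2"  # educator edits content in real time

print(resident.get("lecture_slide_1"))  # still "v1": needs republish + reinstall
print(streamed.get("lecture_slide_1"))  # "v2": change visible immediately
```

The sketch makes concrete why streamed architectures suit dynamic course delivery: the cost of the live connection buys real-time modifiability.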
====2.9.1.3 Options for Content Modification====
The ability to modify the content of a virtual world is essential if the educator is to deliver course content in-world beyond an interactive discussion or monologue.
There are essentially three ways content can be modified by end-users (as opposed to systems providers or publishers) in current virtual world environments, depending on the operational design of the environment:
#'''Level Editor''' (e.g. Doom, Half-Life, America’s Army, Flight Simulator). Applicable to client resident worlds (i.e. systems where the world is stored on each client computer and distributed as a separately published download). A level editor is a content editing tool that allows an entire simulation to be created, including the world model, textures, characters, behaviours, etc. They usually support importation of textures, animations, etc. into the ‘level’ and then distribution of the entire level to a central server for redistribution to clients.
#'''Client Content Editing Tool''' with import/export (e.g. Second Life, VastPark, etc.). For environments where building and content creation is part of the ‘game play’, the client will have a content editor provided. These environments provide a simplified model for constructing shapes and objects (e.g. Second Life’s prims) and some means for importing complex objects such as organic shapes, textures, animations, sound, etc.
#'''Out-of-world interface''' (e.g. Second Life, Active Worlds). Potentially available in both client resident and server resident (streamed) worlds. An out-of-world interface allows some aspect of the user experience while in world to be drawn directly and live from an off-world location such as a web page, an internet-resident database or a streaming SoundCast server, etc.
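The out-of-world interface pattern can be sketched as follows. This is a hypothetical illustration, not any platform's actual API (the URL, the `fetch_external` stand-in and the `DisplayBoard` class are all invented): an in-world display object stores only a reference to an off-world location, so every render pulls whatever version of the course material currently lives there.

```python
# Sketch of an out-of-world interface (all names hypothetical): in-world
# content resolved live from an off-world source, as described above.

# Stand-in for the external web; a real client would issue a live HTTP request.
_FAKE_WEB = {"http://example.edu/slide.png": b"PNG..."}

def fetch_external(url):
    """Fetch content from an off-world location (stubbed for illustration)."""
    return _FAKE_WEB[url]

class DisplayBoard:
    """In-world object whose displayed texture is drawn live from a URL."""
    def __init__(self, source_url):
        self.source_url = source_url  # only the reference lives in-world

    def render(self):
        # The content is not stored in the world model at all: each render
        # pulls the current version from the external location, so updating
        # the web server updates the in-world display.
        return fetch_external(self.source_url)

board = DisplayBoard("http://example.edu/slide.png")
print(board.render())
```

The design choice this illustrates is that course material can be authored and maintained with ordinary web tooling, with the virtual world acting only as a live viewport onto it.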
====2.9.1.4 Implications of differential content capabilities====
Virtual worlds are comprised of components (objects) and functions that are managed by the virtual world (or game) engine and together comprise the capabilities of the world. Not all worlds have the same object management capabilities built into their engines. For the purposes of this discussion, the range of capabilities will be considered to be:
#'''Terrain''' – the land form or map of the virtual space. Essentially all virtual worlds offer some form of terrain map (although the terrain map may not be ground, but rather simply a 3D space).
#'''Avatars''' – Discussed extensively already, the avatar is the user’s projection into the virtual world and may or may not be customisable.
#'''Structural objects''' – Including buildings, furniture, ornaments, statues, models, etc. These are the virtual world equivalent of objects in the real world. They may or may not be animatable and scriptable. If they are scriptable they may be able to become autonomous agents, depending on the capabilities of the scripting engine.
#'''Textures''' – The visual covering of any object, terrain, or even avatars. The ability to display and upload/import textures is (generally) essential to the ability to display lecture materials like slides, etc. (but note the existence of streams as a potential alternative).
#'''Animations''' – An avatar or non-player character appears to walk, sit, stand, change facial expressions, etc. because of the animation it is playing at the time. Without animations an object might move from one point to another, but it will not change its apparent state. The ability to modify animations is advantageous for creating a sense of realism, but not generally essential for delivering a lecture or every type of simulation. All virtual worlds examined offered some range of built-in animations. Some allow the animations to be imported or modified, or strung together to create more complex animations.
#'''Scripts''' – Scripting is the capability to programme the objects and behaviours in the world. In worlds modified by level editors, a programming language is generally provided as part of the level editing environment and ‘compiled into’ the level before it is published and distributed. In user-modifiable worlds where scripting is supported (like Second Life), the scripting editor and compiler are provided as part of the client application and scripts are dynamically modifiable. In some architectures the scripts are stored in the objects and distributed with them (and therefore if an object is moved between worlds/simulators the script and behaviours move with it), whereas in others the scripts are centrally stored and controlled for the world/level and not available outside that world, level or simulator (as appropriate). Scripts govern the behaviour (movement, animations, actions, sounds, appearance, world responses, inter-object communication, etc.) of objects. The capability and simplicity of the scripting engine’s language design is critical to the options available to educators in building a simulation.
#'''Streams''' – Streams include any media that is streamable, such as audio, video, web-page content, etc. The availability of streams is an extension of (or possibly an alternative to) the ability to import textures. From an educational standpoint it represents the ability to deliver video or sound presentations, or draw lecture materials directly from the internet. Depending on the world engine, stream content may be dynamically published (drawn down to the client as required, as in Second Life) or packaged into the client resident world (as in America’s Army).
#'''Non-player Characters''' (also called bots, AIs or MOBs – mobile objects) – These are essentially characters that look like avatars but are completely controlled and managed by the engine. They interact with players/avatars in a semi-intelligent manner. Their availability and capability vary significantly across worlds. In Half-Life and America’s Army, the AI capability is available within the engine and has considerable ‘intelligence’, and in some cases the ability to learn and modify behaviour. In other worlds (such as Second Life) they are not directly supported by the virtual world engine at all. The existence of non-player characters can directly impact the type of learning simulation that an educator can build, as they can provide user feedback and a feeling of presence within the environment (if implemented to provide a realistic experience for the user).
#'''Text Communication''' - Text chat (including instant messages, group communication chat, etc) is the standard communication strategy in all worlds. It is always instant and dynamic (in that it does not have to be pre-packaged into the world). It is a functional capability rather than an object, and may or may not be logged or copied depending on the client capabilities.
#'''Multi-way Voice Communication''' – Most virtual worlds do not support voice directly, although this has been an increasingly offered function over the last twelve months. Multi-way voice communication enables a group of players to converse as if they were in a conference call, without the necessity of typing all communication as text. It is different from streams, in that every client can be a sound source to every other client, whereas streams are a one-way communication from a point source to many destination receivers. Clearly the availability of voice communication impacts both the type of student and the form of discussion that can be undertaken in a learning situation.
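The script-driven behaviour described in the capability list above, where scripts attached to objects govern movement, animations, sounds and world responses, can be sketched as a small event-to-state mapping. This is a hypothetical illustration, not any real scripting engine's API (the `ScriptedDoor` class and its event names are invented): a script attached to a door object reacts to world events by changing state and emitting responses.

```python
# Sketch of script-driven object behaviour (hypothetical, not a real
# scripting engine): a script maps world events to state changes and
# world responses, the pattern described for scriptable objects above.

class ScriptedDoor:
    def __init__(self):
        self.state = "closed"
        self.responses = []  # world responses emitted (animations, chat)

    def on_event(self, event):
        # The "script": event handlers governing the object's behaviour.
        if event == "touch" and self.state == "closed":
            self.state = "open"
            self.responses.append("play_animation:swing_open")
        elif event == "touch" and self.state == "open":
            self.state = "closed"
            self.responses.append("play_animation:swing_shut")
        elif event == "avatar_nearby":
            # Inter-object / avatar communication via chat.
            self.responses.append("say:Welcome to the lecture hall")

door = ScriptedDoor()
door.on_event("avatar_nearby")
door.on_event("touch")
print(door.state)  # "open"
```

In architectures where the script travels with the object, this whole unit of behaviour moves between simulators with the door; in centrally controlled architectures the equivalent logic is locked to one world or level.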
In selecting the platform for delivering an educational experience, the extent to which the educator requires any or all of these capabilities within a virtual world will probably influence the decision. Some of these capabilities have only recently become generally available, and others are still in only rudimentary forms. In the literature review that follows, the approaches and content adopted, and the outcomes achieved have necessarily been constrained by capabilities of the technology options available at the time and the architectural constraints of the virtual world used.
===2.9.2 Education Applications in Virtual Worlds===
====2.9.2.1 Introduction====
During the 1970s, 1980s and early 1990s, perhaps the most significant multi-user online environment for education was the PLATO system. From the mid 1990s onwards, the influence of this system waned as it was progressively superseded in user interface capabilities by the emerging 3D online games, social networking systems and custom-built virtual worlds for specific subject matter applications.
Today the use of public online virtual worlds is gaining popularity with educators, with a recent special-purpose committee of educators (The New Media Consortium & EDUCAUSE, 2007) identifying that virtual worlds will have a significant impact on the future of teaching, learning and creative expression within higher education. In the next section we discuss some of the research findings on virtual worlds being used for educational purposes.
====2.9.2.2 Education Uses in Virtual Worlds====
Early work in education using text-based MUDs showed that they supported constructive knowledge-building communities and offered affordances of coordinated presence, with evidence for interactive learning and collaboration across time and space (Dickey, 2003).
The period from the late 1990s until today has been typified by educators experimenting with the potential for mass-market games engines (and more recently virtual worlds) to be re-tasked as education environments (Annetta et al., 2006; Beedle & Wright, 2007; Gikas & Van Eck, 2004). In some cases, such as America’s Army, the ‘game’ environment was built with the specific goal of recruitment and training in mind (Zyda, 2005), or, as with Microsoft’s Flight Simulator, a game evolved over time with the assistance of subject matter experts into an accurate simulation tool for the game’s audience (Lenoir, 2003). In other cases a games engine (the operating system of a game) has been adapted to create a purpose-built learning tool: for example, educators and students at MIT utilised the Neverwinter Nights tools to create a historical game based on a battle in the Revolutionary War, and MIT's Games-to-Teach Project produced playable prototypes of four games, including Biohazard, developed jointly by MIT and the Entertainment Technology Center at Carnegie Mellon University, which trained emergency workers to deal with a cataclysmic attack (King, 2003).
The early 3D virtual worlds, with their simplistic graphics bearing little resemblance to the real world, provided students with advantages over traditional learning methods whilst fostering collaboration in multiuser virtual worlds. An extensive study of virtual reality technology in education was performed by Youngblut (1998), who looked at 35 different research studies in education varying in technology use, subject discipline and age group from 1993-1998. Below is an example of VARI House and Virtual Physics, both of which were custom built (Figure 38); VARI House was a single-user virtual world and Virtual Physics a multiuser virtual world. Although the studies were mainly research based (as opposed to application in course work), the research showed for both single- and multi-user environments that virtual world technology in many cases surpassed traditional learning methods in areas such as subject matter understanding, memory retention, student collaboration and constructive learning methods. Some obvious disadvantages were technology constraints, cost, development effort and usability (Youngblut, 1998), which for the most part could be attributed to the infancy of the technology, the formative years of computer-based learning and the lack of general use of computers by students, which had yet to permeate society as a whole.
[[image:Education_In_Virtual_Worlds_in_1950_to_60_038.jpg]]
Figure 38. Education in Virtual Worlds, Mid 1990s
====2.9.2.3 Online Education Uses in Virtual Worlds====
As identified in the architecture considerations section, virtual worlds that are to be used in educational settings must enable content modification if learning is to consist of anything more advanced than an interactive conversation. For the purposes of this research, the researcher chooses to focus on virtual worlds that support the dynamic delivery or streaming of content (where the building tools are provided as part of the environment), rather than those worlds where a separate level editor is required and a client resident world model must be installed on the client computer prior to use. The literature surveyed in this sub-section will therefore focus on the work done in two such environments – Active Worlds and Second Life.
=====2.9.2.3.1 Active Worlds=====
Online virtual worlds gave educators access to environments without the cost and complexity of developing their own custom software. One of the first online virtual worlds that made research and development in education feasible (given its architectural qualities) was Active Worlds (1995, 1997). Officially known as Active Worlds Universe because it consists of many worlds, Active Worlds provided educators with the opportunity to rent or buy their own world, allowing restricted access to invited guests, building tools and content management capabilities. Below is a screenshot of Active Worlds (Figure 39). As can be seen, the current client consists of four sections: left – communication and navigation options; right – integrated web browser; bottom – chat window; and middle – 3D environment. This type of client is generally called a “browser” by the environment developers.
[[image:Active_Worlds_Universe_039.jpg]]
Figure 39. Early Online Social Virtual World: Active Worlds Universe
'''Active Worlds Research'''
During the late 1990s to the early 2000s several educational institutions set up a presence in Active Worlds for various projects, from research to actively using Active Worlds as an online learning environment (see Smith, 1999 for a list of virtual learning projects, most of which were in Active Worlds). The early research into online virtual world based education using Active Worlds showed promise.
Dickey (1999, 2003, 2005) undertook research into the viability of Active Worlds for geographically distant learners in both formal (a university business computing skills course) and informal courses (an Active Worlds building course). These research studies showed that the 3D virtual world offered advantages in fostering constructive learning, student and teacher collaboration, visual representation of course context and course content, and student engagement and participation. Some of the disadvantages identified were essentially environment specific and included a lack of support for collaborative activities like a whiteboard or collaborative interactive writing spaces, the chat tool’s single-posting word limit, a single shared chat channel providing no separation of teacher/student discussion and no ability for turn taking, and kinetic (animation) constraints such as no hand raising for alerting the attention of the instructor.[20]
Dickey also identified a number of opportunities specifically enabled by a 3D environment. While some of the previously identified advantages (such as collaboration and student management and participation) might be duplicated in other forms of online education tools, the 3D modelling of the course itself (the visual representation of course context and course content) was an advantage specific to the 3D environment.
Course context modelling as provided in Dickey’s research (1999) was a 3D representation that illustrated the structure of the course by the use of individual buildings and plazas (Figure 40). Each building was a topic in the subject, which provided resources to aid learning and a meeting place where students could collaborate for group projects around this topic.
[[image:Visual_Course_Structure_in_Virtual_Buildings_040.jpg]]
Figure 40. Visual Representation of Course Structure by the use of Individual Buildings
Course content modelling as provided in Dickey’s research (1999) was a 3D representation that the student had to build in order to understand the concept of the subject material (Figure 41).
[[image:Visual_Represnetation_of_Course_Content_041.jpg]]
Figure 41. Visual Representation of Course Content
These alternative methods provide a good example of the power and adaptability of a 3D modelling environment applied to education. The course context gave the student a method by which they could visualise the learning objectives and progression of the course. The student had to visit each building within a specific time frame and complete the contained content. The 3D modelling of course content gave the learner multiple viewpoints of the actual subject material, providing interactive learning that was believed to enhance the student’s understanding of the subject topic.
Clark & Maher (2006) examined the role of place and identity in a 3D virtual learning environment using Active Worlds, analysing chat logs and the physical locality of avatars within group discussions. They found that a sense of place can be achieved in a 3D virtual learning environment, where identity and presence play a role in establishing the context of the learning place. The students formed a strong bond with their avatars and indicated, as measured by a series of subjective scales, that they felt a sense of presence within the virtual learning environment. Similarly, Dickey (2003) found that the 3D virtual desktop world provided qualities of presence similar to those of an immersive virtual reality world.
=====2.9.2.3.2 Second Life=====
Second Life (launched 2003) consists of two worlds: the Second Life Teen Grid and the Second Life Adult Grid. The Teen Grid provides access to 13-17 year olds and educational instructors. Its functionality is the same as the Adult Grid's, with the exception that all content has a PG rating. The Adult Grid is where the universities and colleges for students over 17 years of age are found. Other educational content in Second Life includes an extensive range of museums, galleries, simulations, business product development, role-playing spaces, and employee and public business training courses. As in Active Worlds, educators are able to rent or purchase land, allow open or closed access to the public, and build and develop on their land.
One major difference between Second Life and Active Worlds is that the former has an in-world economy with built-in functional support for trading virtual products and services using 'Linden dollars', backed by content copyright and duplication controls and augmented by a provider-managed exchange where real dollars can be exchanged for Linden dollars (and vice versa). This fundamental difference provides an incentive for content developers and service providers to actively support and expand the world with content, and therefore gives access to a large body of pre-constructed content and to a world-wide industry of content developers at extremely reasonable rates (compared with real-world 3D developers providing similar content outside of Second Life) (Joseph, 2007). The building and scripting tools are easier to master than traditional 3D rendering tools, are delivered free as part of every user's world browser, and are sufficiently powerful that just about anything imaginable can be constructed (Schmidt et al., 2007).
Second Life's standard interface, as seen below (Figure 42), offers extensive functionality beyond that of Active Worlds. Some of the more common features visible in the figure are built-in world, content and people search facilities (left), a mini map (top right), an inventory library (bottom right), a local chat channel (with standard ranges of 15, 30 or 60 metres from the text source), group chat channels (world-wide range, for up to 25 groups per avatar), customisable streaming media players (for sound, video and web page content), an in-world or external HTML web browser (linking to both in-world and outside-world content), and private or public multi-player voice facilities.
[[image:Second_Life_042.jpg]]
Figure 42. Online Virtual Social World Second Life (Circa 2008)
Another difference from Active Worlds is avatar control: Second Life avatars can use a roaming camera, whereas Active Worlds only provides first- and third-person views. The roaming camera enables users to control their view of the world with the mouse, without needing to move their avatar. Once mastered, this functionality offers a powerful, easy and fast way to inspect objects (the camera can even pass through objects such as walls).
Due to these and other technological advances over Active Worlds, Second Life has developed a large education community over the last couple of years. For instance, SIMTeach (June, 2008), the Second Life Education Wiki, identifies over 200 educational institutions in Second Life, of which 138 listed are universities, colleges and schools. The Second Life Education (SLED) list server has over 5,000 world-wide members. The New Media Consortium (NMC, a group that hosts education islands) has over 100 universities on its land, and the Second Life Teen Grid has over 90 educational projects (Linden & Linden, 2008). Figure 44 (p. 88) provides some examples of the training and learning activities in Second Life, representing a mixture of educational institutions, corporations and government agencies.
The content of Second Life is entirely user created. The availability of content developers and of potential students already experienced in using the environment depends on the take-up and expected future growth of the environment. Figure 43 shows the user base and economic statistics for the first quarter of 2008 as provided by Second Life's proprietor, Linden Lab (2008a). As of November 2008 Second Life had 16,318,063 registered users (1,344,215 logons in the preceding 60 days). A break-down of Second Life's demographics as at November 2008 can be seen in Appendix I: Second Life Demographics.
[[image:Second_Life_User_and_Econ_Stats_Q12008_043.jpg]]
Figure 43. Second Life User & Economic Statistics for Q1 2008
[[image:Second_Life_Training_and_Learning_044.jpg]]
Figure 44. Second Life Training and Learning
'''Second Life Research'''
Educators are using Second Life for both formal and informal purposes. Some educational institutions have set up entire virtual campuses modelling their real-world campus, while others are building purpose-built virtual education structures. The relative youth of Second Life means that there is considerable variation in the maturity of educational efforts across the virtual world, and few peer-reviewed studies have yet been published. Many educators are still experimenting, while others, with the active support of their institutions, are using the environment for partial or entire subject delivery. Here we will look at some of the research undertaken in Second Life that was current at the time of writing, most of it published since 2006; given the technological advances that have occurred in Second Life from 2007 onwards, we will concentrate on the more recent research.
Martinez, Martinez, & Warkentin (2007) researched the delivery of a lecture to geographically distributed third-year university students in Second Life. The lecture was delivered in a conventional lecture room setting using traditional chalk-and-talk delivery, with lecture slides and the chat channel for instruction; no voice was used.[21] In the lecturer's experience, using text-only delivery doubled the time needed to deliver the content compared with a face-to-face lecture, and this was confirmed by the students in their survey. In the survey some students admitted they felt distracted by the novelty of the environment and were overly concerned with ancillary aspects such as their avatar's appearance. Others admitted to being distracted by concurrent activities external to the environment, such as multi-tasking with other programs (e.g. MSN messaging) on their PCs during the lecture. Others experienced technical difficulties and could not get back into the lecture after being accidentally logged out. In spite of these shortcomings, when asked to rate the lecture experience on a scale of 1-10 the average student response was 8.5. The study noted that some of the distractions and difficulties could be put down to first-time user experience. The lecturer also felt that the lecture could easily have been pre-recorded and delivered online, and that active learning techniques could have improved its delivery in Second Life (Arreguin, 2007).
Joseph (2007) notes that a consequence of using Second Life (or virtual worlds in general) for teaching is that sessions generally take longer than traditional methods, but believes this is not an issue per se, as time to complete the task should come second to the effectiveness of the experience. Joseph also believes (from experience) that the avatar projected on the screen, and the sense of presence experienced by the participants, is more effective for learning than a live video feed.
Kofi, Svihla, Gawel, and Bransford (2007) researched the potential of virtual worlds to provide efficiency and innovation for adaptive learning. In their study, students were presented with a maze to navigate that simulated the problem-solving skills required in a real-life learning scenario. Kofi et al. found that Second Life provided enough functionality and support for the learner to apply new concepts to the presented problems, as long as learners were given key indicators of possible outcomes. They also found that 3D learning environments required the same amount of instruction as equivalent real-world learning, and that simply building a model did not, of itself, provide sufficient information for the learner to learn in this instance; learners also needed to be continuously prompted and guided in order to reach the end learning objective.
In another example, Second Life was used to support the learning objectives of 13 third-year college students, aged between 19 and 26, on a course in Digital Entertainment and Society, where the students were geographically distributed around the world (Gonzalez, 2007). Both lectures and assignment work were conducted within Second Life. The lectures consisted of a video presentation and an in-world field excursion. Assignment work required some in-world building and an exercise using Linden dollars, with a student presentation on completion. No students had used the environment before, but an acclimation exercise was sufficient to provide them with the skills required to undertake course work in Second Life. At the end of the course students were given a survey, with the results presented below (Table 1).
{|
|Elements that Second Life Added:
|-
|
|Agree
|Disagree
|-
|Enjoyment
|100%
|0%
|-
|Technical difficulties
|100%
|0%
|-
|Interaction with tutor
|62%
|38%
|-
|Interaction with classmates
|62%
|38%
|}
Table 1. Survey Results for Digital Entertainment and Society Second Life Subject
The technical difficulties result was explained largely by the network latency experienced by the students. Each student used their own computer, with an average connection speed of 512 Kbps, which is not especially fast, nor ideal for use with the Second Life environment. No mention was made in the study as to whether the student computers met Linden Lab's system requirements (2008c). As Second Life is a streaming virtual world, where content is downloaded on demand from Linden Lab's servers in the USA to the local computer, connection speed can be an important factor in technical performance. Other major technical factors include the computer's graphics card and the amount of onboard RAM. The Second Life browser offers many settings for optimising performance on low-end machines, but if the minimum system requirements are not met the user's experience of the virtual world will be significantly degraded by dropouts, lag and poor graphics.
==2.10 Learning & Instructional Design Theory==
===2.10.1 Introduction===
Learning in any world (real or virtual) requires well thought out instructional design. Learning is a process of the mind regardless of whether your body is present in the virtual world or real world. Instructional components for learning regardless of medium include (DONCIO et al., 2008):
*Clear, concise, and appropriately structured content
*Activities that draw relationships between concepts, challenge learners' thinking and understanding, and reinforce information
*Evaluative measures that determine if knowledge assimilation and retention have occurred
In this research the focus was on the use of new technology in education as opposed to education applied to new technology; therefore this section only provides an overview of applicable theory required to assist in the instructional design, delivery and assessment of the subject material presented to the research participants in this study. Gagne’s Nine Events of Instruction and Bloom’s Taxonomy of the Cognitive Domain were selected to assist in this task.
===2.10.2 Behaviourism and Cognitivism===
There are two main traditional schools of thought in learning theory. These are Behaviourism and Cognitivism (DONCIO et al., 2008; Lewis, 2001).
*Behaviourists (Objectivists) view the mind as a 'black box': no consideration is given to personal or past experience. The mind starts with a clean slate, and a stimulus produces a response. Only when a change in behaviour is observed is learning deemed to have occurred. Learning is discrete, measurable and quantifiable.
*Cognitivists (Constructivists) view the mind as a continuously evolving organism. Knowledge is constructed from past material and personal experience. Learning is unique to the individual, who relates new information to previously learnt knowledge.
The University of Washington, Seattle (2008) compares the two approaches and provides a discussion of each in terms of philosophy (Table 2, p. 93), learning outcomes, instructor role, student role, activities and assessment. The philosophies of these approaches are opposing and therefore produce different methods of instruction (Lewis, 2001; Nash, 2007).
Behaviourism was the first to be defined in learning theory while cognitivism developed later as a response to perceived limitations of behaviourism in understanding and adapting to new learning concepts (Lewis, 2001; Mergel, 1998).
While some constructivists argue the merits of constructivism as a distinct theory, viewing knowledge as something constructed by a learner through the process of learning, other writers view constructivist ideas as an evolution of the fundamental cognitivist school. This position is illustrated in Table 2, where the behaviourist and constructivist-enhanced-cognitivist philosophies are compared using a consistent comparative organisation of views (see Dabbagh, 2006; Mergel, 1998).
Constructivists draw a distinction between cognitive constructivism and social constructivism, in which the former emphasises exploration and discovery on the part of each learner, while the latter emphasises the collaborative efforts of groups of learners as sources of learning; for our purposes, however, it is sufficient to distinguish the behaviourist and cognitive approaches. Over the years many practical teaching methods have evolved with concepts that encompass both approaches.
[[image:TABLE_Instructional_Design_Behaviorism_Cognitivism_045.jpg]]
Table 2. Instructional Design: Comparative Summary Behaviorism and Cognitivism
(University of Washington, 2008)
===2.10.3 Gagne’s Nine Events of Instruction===
Gagne's theory of instruction can be divided into three areas (Corry, 1996): taxonomy of learning outcomes, conditions of learning, and levels of instruction. There are considerable similarities between Gagne's 'taxonomy of learning outcomes' and Bloom's 'taxonomy of the cognitive domain', so a discussion of these is provided in the next section of this thesis.
Gagne divides the 'conditions of learning' into internal and external learning conditions. Internal conditions concern the previously learned capabilities of the learner; external conditions are the instruction or stimuli presented to the learner. While Gagne's theory takes an essentially cognitivist approach, it recognises both behaviourist and cognitivist influences on instructional learning. For our purposes, it is the 'levels of instruction' outlined by Gagne that are of particular interest, and these are explored in this section.
Gagne (1985) presents a systematic approach to instructional design termed the 'nine events of instruction', as presented below in Figure 45 (Clarke, 2000)[22]. These nine events were specifically designed for the teaching of intellectual skills.
[[image:GAGNE_Nine_Steps_To_Instruction_046.gif]]
Figure 45. Robert Gagne's Nine Steps of Instruction (Clarke, 2000)
The nine instructional events with their corresponding cognitive processes can be described as follows (Clarke, 2000; Kearsley, 2008):
#'''Gaining Attention (Reception)''': Grab the attention of the participant by presenting a teaser in order to get the participant interested and motivate them to learn more about the topic that will be presented. This could be done using methods such as a movie, phrase, storytelling or a demonstration.
#'''Informing Learners of the Objective (Expectancy)''': Provide the participant with the objectives in order to assist them in organising their thoughts ready to receive the new information that will be presented.
#'''Stimulating Recall of Prior Learning (Retrieval)''': Provide the participant with any background that may assist them in building upon the new knowledge they are about to receive. This helps to establish a framework in their mind based upon previous knowledge.
#'''Presenting the Stimulus (Selective Perception)''': This is where the new learning begins. Information should be chunked and organised meaningfully in order to avoid memory overload and to assist the learning of new knowledge: chunk the information into a sequence of learning events and break it down into constituent parts, with a structure and purpose that spans different areas of comprehension. The revised Bloom's taxonomy (discussed in the next section) can be used to assist in structuring the presented information.
#'''Providing Learning Guidance (Semantic Encoding)''': Assist the participant to obtain a deeper level of understanding of the new knowledge so that the information can be encoded into their long-term memory. During instruction, provide examples, non-examples, analogies, graphical representations, etc. to assist the semantic encoding process.
#'''Eliciting Performance (Responding)''': Let the learner do something with the new knowledge, or test it, to confirm they have a correct understanding of the information.
#'''Providing Feedback (Reinforcement)''': Analyse the learner's understanding of the subject matter presented and provide immediate feedback to correct any misunderstood knowledge and reinforce the new knowledge (e.g. questions and answers).
#'''Assessing Performance (Retrieval)''': Test that the new knowledge is understood and the learning objectives have been met. This could be in the form of a test or a demonstration by the learner to assess if they have mastered the information.
#'''Enhancing Retention and Transfer (Generalisation)''': Generalise the information so that the knowledge transfer can occur, inform them of similar problems or a similar situation so that the acquired knowledge can be put into a new context.
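The nine events above can be sketched as a simple checklist data structure for auditing a lesson plan. This is a hypothetical illustration only: the event names and bracketed cognitive processes come from the list above, but the audit function and the lesson-plan representation are assumptions made for the example, not part of Gagne's work.

```python
# Hypothetical sketch: Gagne's nine events as an ordered checklist used to
# audit a lesson plan for missing instructional events.

GAGNE_EVENTS = [
    ("Gaining attention", "Reception"),
    ("Informing learners of the objective", "Expectancy"),
    ("Stimulating recall of prior learning", "Retrieval"),
    ("Presenting the stimulus", "Selective perception"),
    ("Providing learning guidance", "Semantic encoding"),
    ("Eliciting performance", "Responding"),
    ("Providing feedback", "Reinforcement"),
    ("Assessing performance", "Retrieval"),
    ("Enhancing retention and transfer", "Generalisation"),
]

def audit_lesson(planned_events):
    """Return the instructional events a lesson plan has not addressed, in order."""
    planned = {e.lower() for e in planned_events}
    return [event for event, _ in GAGNE_EVENTS if event.lower() not in planned]

# A lesson that only grabs attention and presents content leaves seven events unaddressed.
missing = audit_lesson(["Gaining attention", "Presenting the stimulus"])
```

Keeping the events in an ordered list preserves Gagne's intended sequence, so a report of missing events also reads in instructional order.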
===2.10.4 Bloom’s Taxonomy===
The Taxonomy of Educational Objectives, also known as Bloom's Taxonomy, is widely used[23] to assist in the preparation of learning objectives and the assessment of learning outcomes. The learning outcomes of a student are the results of their learning experience of a course, and should be a direct consequence of the course objectives (Monash University, 2008). Hence the application of Bloom's taxonomy of educational objectives in forming course objectives provides a measure by which to assess students' learning outcomes.
The original work of Bloom’s Taxonomy was developed by an American committee of educational psychologists chaired by Benjamin Bloom that presented over a period of time three domains: cognitive (knowledge) (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956), affective (attitudes) (Krathwohl, Bloom, & Masia, 1964), and psychomotor (motor skills) (Dave, 1967, 1970; Harrow, 1972; Simpson, 1972). In forming educational course objectives Bloom’s cognitive domain is applied to assess the knowledge and intellectual component of a curriculum.
Nearly half a century after the original publication, Bloom's cognitive domain was revised (Anderson et al., 2001; D. R. Krathwohl, 2002) by a committee of eight, two of whom had worked on the original published work (Krathwohl, committee member, and Anderson, editor). The revision resulted from many years of application and research and has since been accepted by many educators as a replacement for Bloom's original work. The changes made were as follows (Figure 46) (Anderson Research Group, n.d.; D. R. Krathwohl, 2002):
*The names of six major categories were changed from noun to verb forms.
*Comprehension and synthesis were retitled to understand and create respectively, in order to better reflect the nature of the thinking defined in each category.
*Create was moved to the highest, that is, most complex, category.
*The revised Taxonomy is not a cumulative hierarchy.
*A taxon of remember was devised to replace that of Knowledge, and
*A two dimensional Cognitive Taxonomy Table was formed by sub dividing the original Knowledge taxon.
[[image:BLOOM_Changes_in_Cognitive_Domain_047.jpg]]
Figure 46. Changes in Bloom’s Cognitive Domain
====2.10.4.1 Revised Bloom’s Taxonomy of the Cognitive Domain====
A substantive difference is in the handling of "Knowledge". The revised Bloom's cognitive domain, as shown in Table 3, was extended to include the dimension of knowledge, so that it now consists of a two-dimensional table with a Knowledge Dimension and a Cognitive Process Dimension. This table provides the instructor with a tool for classifying learning objectives, where learning objectives are specific statements of the discrete learning outcomes or intended results to be achieved by the end of instruction. The instructor defines the learning objectives and classifies each into the appropriate cell of the 2D matrix of cognitive and knowledge dimensions. This assists in instructional design and assessment, and provides a tool for balancing the learning objectives across methods of instructional design.
[[image:BLOOM_TABLE_Revised_Taxonomy_048.jpg]]
Table 3. Revised Bloom’s Taxonomy Table
(Anderson et al., 2001, p. 28)
'''The Cognitive Process Dimension'''
The Cognitive Process Dimension provides the column values for Table 3 above. This dimension describes the level of learning and comprehension required to complete a task, with categories differing in complexity on a scale from 1 to 6. The cognitive categories are defined as 1. Remembering, 2. Understanding, 3. Applying, 4. Analysing, 5. Evaluating and 6. Creating, each of which contains further sub-processes, with 19 specific cognitive processes in total. Table 4 provides an overview of each cognitive process with its defining verbs. Verbs are used to classify an objective. For example, an objective 'to recall the six states of Australia' would be classified under Remembering: recall is the verb that places the learning objective in level '1. Remember' of the cognitive dimension.
[[image:Cognitive_Process_Dimension_Processes_049.jpg]]
Table 4. The Six Categories of The Cognitive Process Dimension And Related Cognitive Processes (Anderson et al., 2001, p. 31)
Bloom's original cognitive taxonomy was based solely on the values contained in the cognitive dimension (with the exception of the differences previously discussed). Bloom believed that the cognitive process was a cumulative learning process: under the old taxonomy of the cognitive domain, for example, in order to 'analyse' subject matter the student would first need to have mastered knowledge/remember, comprehension/understand and application/apply. The revised taxonomy of the cognitive domain does not assume this cumulative hierarchy. The early cognitive domain took a behaviourist approach to instruction, whereas the revised cognitive domain assumes that learning can take place at any level without mastering previous levels. This is a fundamental shift in the philosophical grounding of Bloom's taxonomy of the cognitive domain, moving it away from the behaviourist approach to learning.
'''The Knowledge Dimension'''
The Knowledge Dimension is an additional dimension added to the taxonomy by the subdivision (and modification) of Bloom's original Knowledge category, and can be seen as the row values in Table 3 above. The knowledge dimension defines how knowledge is constructed, which can be Factual, Conceptual, Procedural or Metacognitive. Table 5 provides an overview of the knowledge dimension types and their meanings.
The knowledge dimension separates the noun (or subject matter) from the stated learning objective. For example, continuing with the objective discussed above, 'to recall the six states of Australia', the states of Australia make up the noun construct and represent factual knowledge. This noun is factual because the learner either knows the states or they do not; knowing them is the basic element required to solve the problem.
[[image:Major_Types_and_Subtypes_Knowledge_Dimension_050.jpg]]
Table 5. The Major Types And Subtypes Of Knowledge Dimension (Anderson et al., 2001, p. 31)
The knowledge dimension was added because it provides further insight into the type of knowledge a student is required to master. In the original work knowledge was simply the first level of a cumulative hierarchy, but the revised knowledge dimension gives the instructor a greater understanding by defining knowledge as a separate dimension. For the objective 'to recall the six states of Australia', for example, the student needs to Remember Factual Knowledge.
The knowledge dimension, like the cognitive dimension, is not a cumulative hierarchy: learning can start anywhere within it.
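The two-dimensional classification described above can be sketched as a small verb-and-noun lookup. This is an illustrative assumption, not Anderson et al.'s method: the verb lists are abbreviated stand-ins (a complete classifier would cover all 19 cognitive processes), and the `classify` function and its format are invented for the example.

```python
# Hypothetical sketch: mapping a learning objective's verb (cognitive process)
# and the type of knowledge its noun represents to a cell of the revised
# Bloom's two-dimensional taxonomy table. Verb lists are abbreviated examples.

COGNITIVE_LEVELS = {
    "remember":   {"recall", "recognise", "list"},
    "understand": {"explain", "summarise", "classify"},
    "apply":      {"execute", "implement", "use"},
    "analyse":    {"differentiate", "organise", "attribute"},
    "evaluate":   {"check", "critique", "judge"},
    "create":     {"generate", "plan", "produce"},
}

KNOWLEDGE_TYPES = ("factual", "conceptual", "procedural", "metacognitive")

def classify(verb, knowledge_type):
    """Return the (cognitive level, knowledge type) cell for an objective."""
    if knowledge_type not in KNOWLEDGE_TYPES:
        raise ValueError(f"unknown knowledge type: {knowledge_type}")
    for level, verbs in COGNITIVE_LEVELS.items():
        if verb.lower() in verbs:
            return (level, knowledge_type)
    raise ValueError(f"unclassified verb: {verb}")

# 'To recall the states of Australia' classifies as Remember / Factual knowledge.
cell = classify("recall", "factual")
```

Separating the verb lookup (columns) from the knowledge type (rows) mirrors how the taxonomy table lets an instructor check that objectives are spread across both dimensions rather than clustered in one cell.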
'''Using the Revised Bloom’s Cognitive Domain to Assist in Instructional Design'''
To assist in formulating instructional design, Anderson et al. (2001) provide in their book, for the cognitive dimension, sample objectives, corresponding assessments and assessment formats (chapter 5) and, for the knowledge dimension, specific details, elements, generalisations, structures and models (chapter 4). This assists in the formulation of specific tasks and in defining the level of knowledge required of the student. It also helps ensure that the objectives, and the testing of those objectives, span the required range of cognitive and/or knowledge categories, and that the student is fairly assessed in areas directly related to the objectives.
====2.10.4.2 Bloom’s Taxonomy of the Cognitive Domain Applied to a Digital Environment====
'''Bloom’s Digital Taxonomy of the Cognitive Domain'''
Churches (2008) has extended the revised Bloom's cognitive domain for digital learning by taking the cognitive process dimension and adding verbs for emerging technologies. As can be seen below (Figure 47), the words highlighted in blue are the emerging digital technology verbs, categorised using the revised Bloom's cognitive levels as the basis for interpreting their complexity. For example, bookmarking (a remembering process) is simpler than programming (a creating process).
[[image:BLOOM_Revised_As_Digital_Taxonomy_051.jpg]]
Figure 47. Bloom's Digital Taxonomy
Churches further added to his classification system a rubric (scoring criteria) for these technologies, similar to the sub-classification system used in Bloom's cognitive domain. For example, Table 6 displays the rubric for bookmarking, broken down from simplest to most complex.
[[image:BLOOM_Bookmarking_Rubric_For_Digital_Taxonomy_052.jpg]]
Table 6. Bookmarking Rubric for Bloom’s Digital Taxonomy
'''Bloom’s Taxonomy of the Cognitive Domain applied to Games'''
Wang & Tzeng (2007) proposed using the revised Bloom's taxonomy of the cognitive domain as a method for understanding the application of knowledge in digital games. They believed that players learn in various ways within computer games, and recognised how little work (if any) had been done in analysing such e-learning platforms in a structured taxonomic manner or in structuring the implementation and understanding of the cognitive processes involved. They proposed Bloom's taxonomy of the cognitive domain as a method for assessing cognitive processes in a computer game.
[[image:BLOOM_Taxonomy_For_Games_053.jpg]]
Figure 48. Bloom’s Taxonomy for Games
The research involved a game called Food Force, a problem-solving and mission-oriented game. Figure 48 summarises the conclusions of their research. As can be seen in Figure 48, players exhibited both personal and social feedback across Bloom's cognitive levels. They found that players experienced cognitive processes individually across all categories of Bloom's cognitive model, and displayed social interaction in the higher-level Bloom's categories of Analyse, Evaluate and Create.
==2.11 Summary==
The acceptance of the latest crop of virtual worlds, such as World of Warcraft, Second Life, Entropia Universe, There, Eve, America's Army and others, by the internet-using public as an integral part of their lifestyle is possibly the most significant paradigm shift to occur in the last 10 years. Statistics on user volumes and retention rates show consumption in the tens of millions of users, spread evenly across ages from youth to middle age, with an approximately even gender balance (at least in the social worlds) (KZERO Research, 2007; Woodcock, 2008; Yee, 2006). The growth rates of these worlds collectively have been rising dramatically, and are projected by industry analysts to continue to do so for the foreseeable future.
With the current convergence of disparate technologies represented by these systems, the general public now have affordable single platform multi-media collaborative environments with sufficient realism to create virtual immersive spaces where presence is achieved at a level sufficient for them to lead virtual existences and establish social networks that rival their real world existence.
The linking of these spaces with the affordable (often free) tools that enable the public to create new 3D spaces and content has, over the last eight years, resulted in a world-wide content developer base with substantial skills and a highly competitive market for purchasers of those skills, often at very low rates.
With the combined market pressures of minimising education delivery costs, improving education outcomes, and reaching as wide a market as possible it is understandable that educators have shown an extended interest over many years in the possibilities of virtual environments for education delivery. So with the advent of the latest generation of creativity focused social worlds like Second Life over the last few years, it is not surprising that the uptake by universities and educators (numbering in the hundreds of institutions) has been as substantial as it is.
A brief retrospective of the work in simulators, virtual reality and 3D games shows that the potential of these environments extends beyond virtual 'chalk-and-talk' to enabling education delivery strategies, even for campus-based students, that cannot economically be delivered by reality-bound means.
With traditional real world learning environments there is an extensive body of tested knowledge that can provide clear guidance as to workable frameworks for the design of course work. The extent to which and how these methods can or should be applied to the virtual world learning space remains an open question.
</div>
[[Category:Featured Article]]
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 2: Virtual Worlds - Concepts, History, and Use in Education (Literature Review)=
==2.1 Introduction==
Gartner (2007) predicts that as many as 80% of active internet users will have a ‘Second Life’ in a virtual world by the end of 2011. Depending on your definition of ‘virtual world’ this may seem a little ambitious. Certainly, to the extent that virtual worlds are taken to include massively multi-user online environments supporting collaborative exchange of information in shared virtual space, the prediction might prove reasonably safe. To the extent that the definition is constrained to massively multi-player online games, the prediction may prove a little “braver”.
Today’s virtual worlds represent the convergence of multiple technology streams, with the latest examples of the genre representing the merger of internet, telecommunications, instant messaging, virtual reality, 2D & 3D graphics, a variety of 3D modelling technologies, spatial sound, distributed databases, spatial indexing, mapping, streaming data transmission, physics, scripting languages, object-oriented software, agent theory, artificial intelligence, networking, economic modelling, online trading systems, game theory and many, many more technologies.
While the developers of many virtual worlds are content within the game space, some virtual world developers, such as Linden Research (developers of Second Life), have ambitions to be the web platform of the future (Bulkley, 2007). To this end a number of the commercial developers of virtual worlds have joined forces with major corporate consumers, systems integrators and US government bodies to explore common standards for inter-operability of virtual world platforms, a necessary first step in moving the technologies from the isolated proprietary place they now inhabit to a world-wide shared web platform (Terdiman, 2007).
This chapter explores virtual worlds, reviewing the literature on alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education. The chapter concludes with an examination of traditional education taxonomy and relates it to the virtual world context, as a basis for structuring an approach to exploring the education affordances offered by two approaches to education in virtual worlds.
==2.2 Virtual Worlds==
===2.2.1 What is a Virtual World?===
====2.2.1.1 In Search of a Definition====
“Virtual worlds are places where the imaginary meets the real”. (Bartle, 2003, p. 1)
Virtual, as defined in the Oxford Dictionary (1989) with respect to the computing context is: “… not physically existing as such but made by software to appear to do so from the point of view of the program or the user….” and defined in the virtual reality context to be “… a notional image or environment generated by computer software, with which a user can interact realistically as by using a helmet containing a screen, gloves fitted with sensors, etc.” (1997).
The term world is defined in the Oxford Dictionary (1989) as “the ‘realm’ within which one moves or lives”.
In simple terms, therefore, a ‘virtual world’ can be defined as a software-generated realm in which a user moves, exists or lives in a manner that appears real to the user.
A common definition for the term ‘virtual world’ is passionately debated in the literature (see Combs, 2004; Jennings, 2007; Reynolds, 2008; Wilson, 2007). It is a term used to describe many types of software environments, from a simple MUD (Multi User Dungeon, also referred to as Multi User Dimension or Domain) (Bartle, 2003; Keegan, 1997; Slator et al., 2007) to a sophisticated fully immersive 3D virtual reality environment used in gaming, physical training simulators or social interaction spaces (MetaMersion; Patel, Bailenson, Jung, Diankov, & Bajcsy, 2006; Van Dam, Forsberg, Laidlaw, LaViola, & Simpson, 2000). The term virtual world can be used to describe a single-user walk-through simulated environment (Dalgarno, 2004; Youngblut, 1998) or an environment such as a massively multiplayer online role playing game (MMORPG) like World of Warcraft (Bainbridge, 2007). The term virtual world is also interchanged with other terms such as virtual environment, synthetic world, mirror world, metaverse, virtual universe, artificial world etc[2] (Grøstad, 2007).
Bartle (2003, p. 1) provides the following definition:
<blockquote>
“Virtual worlds are implemented by a computer (or network of computers) that simulate an environment. Some -but not all- of the entities in this environment act under the direct control of individual people. Because several such people can affect the same environment simultaneously, the world is said to be shared or multi-user. The environment continues to exist and develop internally (at least to some degree) even when there are no people interacting with it; this means it is persistent.”
</blockquote>
Therefore, using Bartle’s definition in conjunction with the Oxford Dictionary definition provided above a virtual world can be defined as:
<blockquote>A shared software environment (or realm) in which a person represented as a projected entity (such as an digitally projected image, text identity or other computationally representational object) moves, exists or lives in a manner that appears to be real to the person and capable of affecting that environment and, being affected by, in a manner that simultaneously effects the experiences of other entities within the environment and which generally remains persistent once the user has left the world.
</blockquote>
The key components of this definition are:
#A shared environment in which a real-world participant shares a computationally generated artificial space with other real world participants and/or other computationally generated entities.
#The nature of the real-world participant’s projection into the computationally generated virtual space.
#The characteristics of the space, which establish a sense of realism to the participant.
#The manner and extent to which the real world participant is able to affect the shared space.
#The nature and form of persistence that the artificial space retains.
Throughout this section we will examine the current state of these components, and the ideas and literature contributing to the current expression of these concepts in the form of currently available virtual worlds. The realisation of virtual worlds in software has been (and continues to be) a rapidly evolving field, continually consolidating mixed influences from fiction, mechanical and electrical engineering, computer science, gaming theory, telecommunications, social science, commerce, religion and sociology. It is a field where advances are made as much in the act of amateur invention as in formal science, and a field in which the academic literature frequently lags the leading edge of the advances by a significant degree.
===2.2.2 Recognising a Virtual World by its Features===
While there is not as yet a single common set of universally accepted attributes, the literature offers a variety of feature based definitions that attempt to provide a basis for classifying whether a given application or environment is, or is not, a virtual world. Across these competing views there are some features that are most frequently repeated.
Coming from the perspective of virtual worlds as gaming platforms, Bartle (2003, pp. 3-4) proposes that a virtual world should adhere to the following conventions:
*'''Physics''': The world contains automated rules that enable players to effect change in the world.
*'''Character''': The player is part of the in-world experience, represented by a character with which they strongly identify.
*'''Interactions''': All interactions with the world are channelled through the character.
*'''Real-time''': Interactions in the world take place in real-time.
*'''Shared''': The world is shared with other characters in common.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Bartle tends to use the term character for what this thesis refers to as an avatar, and considers that the player (identified as ‘the intelligence’ in this thesis) must strongly identify with that character. In the context of role playing games, where the player assumes an identity not their own, this aspect of the feature list recognises the effectiveness of the immersion and sense of presence the player experiences (concepts we will be exploring later); but outside of this space, where the player and the ‘character’ may be one and the same, this feature is less of a distinguishing criterion.
His use of the term Physics in the context of an application genre that may include 3D environments is perhaps a little confusing. In these spaces Physics most commonly refers to the physics engine that manages the simulation of avatar and object dynamics in the space (such as gravity, acceleration, force, momentum and limb movement, etc). As used by Bartle, the term includes the ‘business rules’ and behaviours of the system – the rules governing all interaction, not just those simulating physical movement.
The nature of the shared space and interactive channel imply that the actions of one player affect the experience of another.
Edward Castronova (2001, pp. 5-6) proposes that a virtual world should have the following features:
*'''Interactivity''': The world exists on one computer and can be accessed via a network (or the internet) by many simultaneous users. The actions of each user influence other users in the world.
*'''Physicality''': Users access the world by computer, which provides a first-person view of the world; the world is generally ruled by natural laws much like the real world, with scarcity of resources.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Castronova’s feature requirements are essentially a subset of Bartle’s, although with the possible omission of the expectation that interaction is necessarily real time.
Sun Microsystems Inc (2008, p. 3) proposed the following common features of open virtual worlds (ie multi-user virtual worlds open to public access over the internet):
*Shared space, allowing multiple users to participate simultaneously.
*Users interact with one another and the environment.
*Persistence.
*Immediacy of the interactions.
*Similarities to the real world rules.
We might, perhaps, reject Sun’s expectation of any need to assimilate ‘real world rules’, as this would exclude many fantasy role playing games from being classed as virtual worlds, but aside from this aspect Sun’s list is essentially consistent with the views of Bartle and Castronova.
These three sources are essentially consistent with the body of the literature; making allowance for additional attributes and some latitude in interpretation, we can establish a minimum feature list that would be generally accepted:
*The environment is shared;
*Interactions are in real-time;
*A person participates in the world through some form of representation with which they identify and are identified and that facilitates interaction and recognition (such as a character or avatar);
*Interactivity in the world is channelled through the avatar;
*Changes induced by a participant influence the experience of the space for other participants;
*Rules governing the world and interactions are shared and commonly applied; and
*The world is persistent.
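The minimum feature list above can be read as a conjunctive checklist: an environment qualifies as a virtual world only if every feature is present. A minimal sketch of that test follows (all class, field and example names are illustrative, not drawn from any of the cited authors):

```python
from dataclasses import dataclass, fields

@dataclass
class VirtualWorldFeatures:
    """One boolean per feature in the minimum list above."""
    shared_environment: bool            # the environment is shared
    real_time_interaction: bool         # interactions are in real-time
    avatar_representation: bool         # participation via an identifiable avatar
    avatar_mediated_interaction: bool   # interactivity channelled through the avatar
    mutual_influence: bool              # one participant's changes affect others
    common_rules: bool                  # rules are shared and commonly applied
    persistence: bool                   # the world is persistent

def is_virtual_world(candidate: VirtualWorldFeatures) -> bool:
    """Conjunctive test: every minimum feature must be present."""
    return all(getattr(candidate, f.name) for f in fields(candidate))

# An MMORPG such as World of Warcraft ticks every box...
mmorpg = VirtualWorldFeatures(True, True, True, True, True, True, True)
# ...while a single-user walk-through lacks sharing, mutual influence
# and persistence.
walkthrough = VirtualWorldFeatures(False, True, True, True, False, True, False)

print(is_virtual_world(mmorpg))       # True
print(is_virtual_world(walkthrough))  # False
```

The conjunctive form makes explicit why single-user simulated environments fall outside the minimum definition despite satisfying several individual features.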
==2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World==
While Bartle (2003) refers to a participant’s projection into a virtual world as a “Character”, the more widely accepted name today for a real world participant’s projection into a virtual world is an Avatar, and that is the term adopted in this research.
The word avatar derives from avatara, a Sanskrit word meaning “descent of a deity” or incarnation, utilised by the Vaishnavism religious tradition of Hinduism. The Hindu concept of an avatar is thought to originate as early as the second century B.C.E. (Sheth, 2002). One of the most recognised Hindu deities is Vishnu (Figure 1). In Hinduism, Vishnu is said to have a standard list of ten avataras (collectively known as the Dasavatara), one of whom is said to be Buddha (Siddhārtha Gautama), the founder of Buddhism (Sheth, 2002).
[[image:Vishnu_Hindu_Avatar_001.jpg]]
Figure 1. Hindu Avatara
Left: Visnu (or Vishnu) Hindu deity the protector and preserver of the universe
Right: Ten avatars of Visnu (Dasavatara)
(Vivekananda Centre, 2008)
In computing terms, little has changed from the original meaning of avatara. As with the Hindu avatara, the virtual world participant can be thought of as “descending”, or being “projected”, from reality to become a computational representation in a virtual world. In virtual worlds, an avatar is generally (although not exclusively) a graphical representation of the user’s persona (Deuchar & Nodder, 2003), although it can also be a representation of a system or a function in some applications (Sheth, 2003), or a simple name in the form of a text string (in some text based MUDs), and is evolving to include virtualisations of other senses (such as aural and tactile) (S.-Y. Lee, Kim, Ahn, Lim, & Kim, 2005). The graphical representation of an avatar is thought to have originated with a networked multi-user virtual world game called Habitat in 1984 (Bye, 2008; Morningstar & Farmer, 1990). Early research suggests that the use of digital avatars in virtual worlds reduces user inhibitions and dissolves, or reconstructs, social status among users (Dede, 1995; Dickey, 2003; Rheingold, 1993).
The projected form is not necessarily a recognisable representation of the real world human form. In its projected form, for example, the avatar might be represented as an image of a human, an animal, an animated mechanical object, a simple name, or any form appropriate to the virtual world and within the technical capabilities of that world’s object management systems. For example, in Eve (a space based virtual world) all avatars are space ships, whereas in Second Life (a social based virtual world) an avatar can take any form (Figure 2), although regardless of appearance the avatar’s name remains the same.
[[image:SecondLife_Digital_Avatars_002.jpg]]
Figure 2. Digital Avatars of Second Life (Levine, 2007)
In terms of today’s virtual worlds, and for the purposes of this research, an avatar should be thought of as a combination of a representation, an agent and an intelligence:
#The ''representation'' may be visual, aural, tactile or any other sense conveying the presence of the avatar to other avatars or agents in a virtual world.
#The ''agent'' is the library of capabilities of the avatar in a virtual world.
#The ''intelligence'' (or actor) provides the tactical and strategic control of the avatar, which could be artificial or natural (eg human).
In a virtual world the decisions of the intelligence are communicated to, and realised by, the agent. The consequence of the agent realising (enacting/implementing) the intelligence’s commands may result in a change in the state of both the agent and the representation, eg, in a 3D Graphical virtual world, a command to walk issued by the intelligence might result in the agent changing position and entering a movement or walking state and triggering the representation to display a walking animation (enter a walking animation state).
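The intelligence–agent–representation chain described above can be sketched in code. The following is a minimal illustration of the walk example, not any world's actual architecture; every class, method and state name is hypothetical:

```python
class Representation:
    """Conveys the avatar's presence to other avatars (here, an animation state)."""
    def __init__(self):
        self.animation = "idle"

    def play(self, animation: str):
        self.animation = animation


class Agent:
    """The library of capabilities of the avatar in the virtual world."""
    def __init__(self, representation: Representation):
        self.representation = representation
        self.position = (0.0, 0.0)
        self.state = "standing"

    def walk(self, dx: float, dy: float):
        # Realising the intelligence's command changes the agent's state...
        x, y = self.position
        self.position = (x + dx, y + dy)
        self.state = "walking"
        # ...and triggers the representation to enter a walking animation state.
        self.representation.play("walk_cycle")


class Intelligence:
    """The actor (natural or artificial) providing tactical control."""
    def __init__(self, agent: Agent):
        self.agent = agent

    def decide_to_walk(self):
        # Decisions are communicated to, and realised by, the agent.
        self.agent.walk(1.0, 0.0)


# The avatar is the combination of all three components:
rep = Representation()
agent = Agent(rep)
actor = Intelligence(agent)
actor.decide_to_walk()
print(agent.position, agent.state, rep.animation)  # (1.0, 0.0) walking walk_cycle
```

Separating the three components mirrors the definition: the same agent and representation could be driven by a human player or swapped under an artificial intelligence without changing the in-world machinery.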
==2.4 A Taxonomy of Virtual Worlds==
===2.4.1 Introduction===
As might be expected, the literature contains extensive discussion of the appropriate taxa to be applied in classifying virtual worlds, and an equal measure of disagreement among authors as to the appropriate criteria to be applied. In spite of the range of discussions, most attempts are incomplete and therefore capable of classifying in a useable form only a portion of the genre. To be fair, this space is rapidly evolving, and possibly as fast as it is classified a new entrant appears that changes the paradigm, while old entrants are updated to include new capabilities.
===2.4.2 A Taxon for Virtual Worlds===
Outside of the education and virtual reality streams, possibly the largest single family of virtual worlds are those developed for games. While not actually claiming to propose a taxon, Bartle (2003, pp. 38-61), whose pedigree is essentially from the gaming stream, proposes a set of attributes that can be used to classify Virtual (game) Worlds. Not surprisingly, the attributes are most relevant to multi-user game focussed virtual worlds, but provide a workable superset of the current thought on the matter and with some adjustment can be extended to the more general examples of virtual worlds. He suggests that a virtual world can be categorised according to the following taxa:
#'''Appearance''': To a ‘newbie’ (Bartle’s term for a new user of a virtual world application) the distinction is whether the virtual world is a ‘text based’ MUD, ASCII, graphical 2D or graphical 3D etc. To an ‘oldbie’ (as described by Bartle) this is only an interface issue and therefore not as important as the other listed categories.
#'''Genre''': Is the world fantasy, cyberpunk, horror, social etc. The plot or the settings of the virtual world. This taxon is most helpful with purpose focussed virtual worlds. In the non-gaming or semi-gaming space occupied by some generalised social worlds, the virtual world is as much a platform on which other ‘sub-worlds’ can be based, and thus the genre of the virtual world can be all other genres. Examples of this might include PLATO and Second Life.
#'''Codebase''': Although hidden from the user, and therefore of less direct importance to them, the codebase is an important aspect for the designer of a virtual world. The codebase defines the technical makeup of the world - reusable content and controls, scripting language, database structure etc. This researcher suggests that the codebase is not a single taxon, but perhaps should be separated into multiple taxa. In its place one might propose the content management, asset management, game engine, environment application programming interface, AI, and scripting function library within the system as more relevant technical matters.
#'''Age''': How long the virtual world lasts is an important measure of its success. Generally, the longer players (or users) remain interested, the longer the virtual world survives, which in turn attracts new users and adds to its player base.
#'''Player base''': How large is the player (or user) base of the virtual world. This measure varies depending upon what is counted: for example, the number of registered users, the number of avatars (a user can have more than one character in a virtual world, but in general not for simultaneous use), simultaneous users logged in, hours played per user, access over a period of time, number of active subscriptions, etc. In some worlds the meaningful measure of player base is in fact the number of owner occupied ‘acres’ of virtual land (as opposed to general users of the virtual world). The player base measures the current success of the virtual world, its popularity so to speak, which in turn lengthens the age of the virtual world. Given the number of ways a player base can be structured and measured, a single measure is open to both misinterpretation and reporting manipulation, and some measures (like subscribed users – where some subscriptions are costed and others free) may be completely erroneous when comparing one virtual world to the next.
#'''Degree to which they can be changed''': Virtual worlds vary in the degree to which a user can change or add to the content of the virtual world. Virtual worlds such as World of Warcraft (and most game based virtual environments) allow no change by the player, with all content created by the developers of the virtual world. Other virtual worlds such as Second Life, Active Worlds, TruePlay and PLATO rely on content created by the community. In the case of Second Life (for example) the entire virtual world is made from user created content, with users provided with building tools, import and export capabilities, out-of-world interfaces and communications capabilities, an extensive library of API functions and a scripting language. The degree to which a virtual world’s content can be changed by the user adds to the technical codebase complexity and to the user’s (and, for multi-user virtual worlds, other users’) experience of and within the virtual world.
#'''Degree of persistence''': Bartle defines persistence to be the degree to which a world’s state remains intact if the virtual world is shut down and restarted. He classifies persistence into ‘discrete’ or ‘continuous’ groups. At the extreme, a discrete virtual world would regenerate - described as a ‘Ground Hog’ world (named after the movie). Here all content and the location of the player would be reset to the start of play. In a continuous virtual world the content and locations are retained through a restart.<BR />Persistence also relates to what happens to the world when a user logs off: does the virtual world continue to evolve without the individual player – and if so, can the player’s state be affected while off line? A virtual world generally displays some level of persistence, and the term is generally used to distinguish whether a ‘virtual world’ is really a ‘world’ or in fact just a simple ‘Ground Hog’ environment (see Gehorsam, 2003). The ultimate level of persistence is that akin to the real world, which is constantly evolving and changing regardless of our existence within it.
With some modification and generalisation most of the taxa can be applied in the general case of gaming and non-gaming virtual worlds. To be applied outside of the narrow RPG (Role Playing Game) grouping, the classification system would benefit from some subdivision of elements.
We have already noted codebase as one such category. Codebase is such a wide group that it could be applied to every functional capability of the virtual world not covered by another taxon, and thus is of limited help in establishing a consistent framework for classification. For example, Castronova’s (2001) taxonomy recognises a grouping under marketplaces (implying commercial functionality), while both Kish (2007) and Cavazza (2007) recognise groupings covering Paraverses (although they use different terms). In Bartle’s taxa these might both be covered as distinguishing characteristics under codebase, yet one relates to the ability to conduct real-world commercial transactions in the space, while the other addresses the merging of real-world content with virtual world content.
Persistence as framed by Bartle mixes up multiple discrete concepts – host state persistence, user state persistence, environmental evolution, and scenario persistence. The last is generally typical of games (such as quest driven environments where, on restarting a ‘quest’, the user can rely on the sequence of events being a repetition of the sequence that occurred previously – effectively a ground-hog space within a larger persistent environment), and absolutely essential for simulators and learning systems, where a user taking a course should be able to rely on the lesson replaying in a consistent and predictable way each time (unless variation is an intended part of the training, as in a military battlefield virtual world). In order to classify virtual worlds, recognising these attributes independently of each other would be more helpful than identifying the world simply as persistent or not persistent; nor are the sub-features linearly related – i.e. one form of persistence does not imply the inclusion of another (Purbrick & Greenhalgh, 2002).
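Because the four persistence concepts are independent of one another, they are better recorded as separate attributes than as a single persistent/not-persistent flag. A minimal sketch (class, field and example names are illustrative only):

```python
from dataclasses import dataclass

@dataclass
class PersistenceProfile:
    """The four discrete persistence concepts, recorded independently.
    No field implies any other."""
    host_state: bool              # world state survives a server shutdown/restart
    user_state: bool              # a player's state survives logging off
    environmental_evolution: bool # the world evolves while the user is absent
    scenario_persistence: bool    # scripted sequences replay identically
                                  # (a "ground-hog" space within the world)

# A quest-based MMORPG: a continuous, evolving world whose quest
# scripts nonetheless repeat predictably on restart.
mmorpg = PersistenceProfile(host_state=True, user_state=True,
                            environmental_evolution=True,
                            scenario_persistence=True)

# A single-user training simulator: lessons replay consistently,
# but nothing evolves between sessions and the host resets each run.
simulator = PersistenceProfile(host_state=False, user_state=True,
                               environmental_evolution=False,
                               scenario_persistence=True)
```

Comparing the two profiles shows why a single persistence taxon is lossy: both environments exhibit scenario persistence, yet only one retains host state across restarts.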
===2.4.3 Applied Taxonomies===
While Bartle proposes a reasonably extensive set of attributes (taxa) for classification, some authors have proposed simpler classification regimes, although all seem as yet to avoid claiming an actual taxonomy.
Kish (2007) recognised that with the appearance of the weakly defined ‘Web 2’ technologies, virtual worlds could be seen to encompass a wider range of social networking and world-imagining spaces. Kish’s classification groups virtual environments into the broad categories (Figure 3):
#'''MMORPGs''': Massively Multiplayer Online Role Playing Games. A category which includes text and graphical gaming environments with the common theme of role playing and containing internally a hierarchical, level based player grading system to determine expertise and implied seniority, and generally plot or quest driven and goal oriented as their linking characteristic. Typical examples might include World of Warcraft, Entropia Universe, Everquest, MUDs, etc.
#'''Metaverses''': Imagined public fantasy spaces, emphasising social interaction, creativity and lacking a single plot or purpose for participation. Generally exhibiting a devolved structure without a single levelling system or clear environment imposed hierarchic seniority system[3]. Typical examples might include Habitat, Second Life, Active Worlds, Furcadia, etc
#'''Paraverses''': Spaces that intersect with the real world, incorporating content from the real world and thus could be described as virtual extensions of the real world. This group potentially includes many of the Web 2 spaces that contain sufficient functionality to create in the minds of their users a ‘real’ virtual community as strongly present to the participant as their real world existence.
#'''Intraverses''': Spaces that are otherwise Metaverses or MMOLE’s but private or closed to the broader public. Virtual reality environments could be seen generally to fall into this category as well as private/corporate implementations of public virtual world spaces. Typical examples might include Qwaq, Sun System’s Wonderland, IBM’s Metaverse, etc.
#'''MMOLEs''': Massively Multi-user Online Learning Environments. Possibly the oldest class of virtual worlds, as it includes systems such as PLATO, and typified by educational environments supporting user social interaction. Primarily purpose (though not necessarily goal) driven – such as learning, training, idea exchange, simulation, etc. This space includes the dedicated training / teaching environments of PLATO and the planning / simulation management systems of SIMNET, Blackboard, Boston College’s Media Grid, etc.
[[image:Kish_Virtual_Geography_003.jpg]]
Figure 3. Virtual Geography (Kish, 2007)
Cavazza (2007) proposes that a virtual world should be open (public) and contain taxa supporting strong and generalised capabilities in each of the dimensions (Figure 4):
#Social networking
#Gaming
#Entertainment
#Business
[[image:Cavazza_Virtual_Universes_Landscape_004.jpg]]
Figure 4. Virtual Universes Landscape (Cavazza, 2007)
Consequently most of the virtual worlds identified by other authors are excluded from Cavazza’s definition of virtual worlds, but included under the broad category of ‘Virtual Universe’. To illustrate this idea Cavazza has classified a huge range of the existing virtual environments:
#Social
#*2.5 & 3D Chats
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Virtual Worlds
#Game
#*MOG
#*Sports
#*MMORPG
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Adult Games
#*Virtual Worlds
#Entertainment
#*Virtual Sex
#*Virtual City Guides
#*2.5 & 3D Chats
#*Avatar Centric
#*Branded Universe
#*Virtual World Generators
#*Virtual Worlds
#Business
#*Serious Games
#*Virtual Marketplaces
#*Adult Games
#*Virtual World Generators
#*Virtual Worlds
Cavazza’s definition and classification system is extensive, and possibly the most comprehensive to date. While Kish’s classification tends to focus on functionality, Cavazza’s emphasises purpose. Nevertheless, there is significant cross-over in their ideas. For example, both recognise the difference between games and social networking, and both accommodate the paraverses in a special category (Cavazza includes them in ‘Virtual City Guides’ among other groups). Cavazza’s analysis, however, does not accommodate the education, training and simulation virtual spaces present in Kish’s categorisation, although it might be argued that these are covered in multiple categories, including ‘Virtual World Generators’ (eg PLATO, VastPark) and Serious Games (training simulators).
==2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality==
Virtual Reality environments are generally a combination of user interface hardware (such as headsets and data gloves) and software. The availability of the (often costly or purpose built) user interface hardware has meant that the majority of these environments are either single user or very small scale multi-user environments (Jones & Hicks, 2004; Miller & Thorpe, 1995). A direct consequence of this is that Virtual Reality environments have tended to ignore the dimensions of user interaction, game play and collaboration in favour of the technology of immersion. This fact, possibly more than any other, has predisposed some authors to exclude virtual reality spaces from the domain of virtual worlds (Bartle, 2003; Yee, 2006).
While Bartle’s virtual world definition contributes part of the definition we have adopted for virtual worlds in this research, the researcher departs from the entirety of Bartle’s embodiment of virtual worlds as expanded in that work. Bartle believes that a virtual world has a meaning divergent from that of virtual reality, holding that “Virtual reality is primarily concerned with the mechanism by which human beings interact with computer simulations… [rather than] the nature of the simulations themselves” (2003, p. 3). To this extent Bartle’s definition specifically excludes Virtual Reality spaces from the definition of virtual worlds.
This researcher adopts a view, consistent with some other writers in the field, that excluding the body of work in virtual reality from the concept of a virtual world places the emphasis narrowly on the social and gaming dimensions of these worlds and away from the immersive experience. Writing virtual reality spaces out of the definition excludes the vast body of research that predates, or has been done in parallel with, the development of gaming virtual worlds (Cosby, 1999; Heilig, 1955; Pimentel & Teixeira, 1994; Rheingold, 1992; Schroeder, 1997; Steuer, 1992; Sutherland, 1965; Walker, 1990; Woolley, 1994), and constrains the consideration of these environments in the education context to their collaborative and scripting capabilities.
Other authors have adopted definitions of the virtual world concept wider than that posited by Bartle, although in most cases still excluding some portion of the body of work that has contributed to the space. Dickey (2005, p. 439) implies an exclusion of 2D and non-visual environments in providing: “Three-dimensional virtual worlds are a networked desktop virtual reality in which users move and interact in simulated 3D spaces.” Similarly, McLellan (2004) presents 10 classifications of virtual reality, a single-user virtual world being classified as ‘through the window’ whereas a multi-user virtual world would be classified as ‘cyberspace’. Mazuryk and Gervautz (1996) make no distinction in the number of users in the virtual world but define a virtual world to be a ‘desktop VR (virtual reality)’ or a ‘Window on World (WoW)’ system. Biocca and Delaney (1995) define a virtual world to be a ‘window system’: a computer generated three-dimensional virtual world viewed either on a computer screen or with the assistance of a head mounted display.
This researcher’s view is that all of these definitions are correct but incomplete, and that a definition that admits all of these examples is the most useful and appropriate in the education context. To appreciate the reasoning behind this argument we must look at some of the history of the development of the technologies and concepts that have contributed to the current family of virtual worlds, and at the problems and purposes these stepping-stones were intended to resolve or achieve.
Authors adopting Bartle’s view have generally also adopted the view that virtual reality is essentially a hardware interfacing technology, and hence that the environments managed in this space are of no consequence. The misconception that virtual reality is a collection of hardware (data gloves, head-mounted displays, etc.) neglects the very meaning of virtual reality, which seeks to evoke a feeling of immersion and presence within the virtual space. In the virtual reality research stream, using external hardware devices to enter a virtual world is only one method by which immersion and presence are achieved (Briggs, 1996; Steuer, 1992). No external device will ensure a user’s experience of immersion if the world they enter is an unconvincing generator of an alternative reality for the participant. Furthermore, if virtual reality is to be excluded from the scope of the definition of virtual worlds, then the existence of VR plug-and-play devices readily available for use with many mass-market virtual worlds that would otherwise fall within Bartle’s definition – stereoscopic headsets, data gloves and haptic controls such as the Vuzix iWear headset, the Evolution Motion Glove for the PS1, the Wii Remote for the Nintendo Wii, and the MS Force Feedback controller for Flight Simulator – would seem to contradict the proposed disconnect between the study of virtual worlds and virtual reality. Lastly, the exclusion of virtual reality environments from the definition of virtual worlds ignores the fact that many of the technologies and concepts utilised in the 3D virtual world space were contributed by the virtual reality research stream (as will become clear from the history presented in the following sections).
In the education context, virtual reality technologies (as expressed, for example, in simulators) are a critical and essential contribution to the pantheon of virtual (training) worlds (Bailenson et al., 2007; Dede, 2004). In this researcher’s view, virtual reality environments are a subset of virtual worlds, and the two spaces are increasingly converging – if they have not already converged in current virtual world examples such as America’s Army and Second Life, and in massive multiplayer training environments such as SIMNET (Lang, Maclntyre, & Zugaza, 2008; Lenoir, 2003; Zyda, 2005).
==2.6 Dimensioning Virtual Worlds==
===2.6.1 The Degree of Virtuality===
The degree to which a world is ‘virtual’ can be viewed as a sliding scale between physical and virtual. Milgram and Kishino (1994) present a taxonomy for mixed reality visual displays called the ‘reality-virtuality continuum’ (Figure 5). On the left-hand side of the scale is the ‘real environment’, equivalent to the real or tangible world, while on the extreme right is the ‘virtual environment’, equivalent to an artificially generated world. The region between these two extremes is classified as ‘mixed reality’ (MR), made up of a combination of both real and virtual matter.[4]
[[image:Reality_Virtuality_Continuum_005.jpg]]
Figure 5. Reality-Virtuality Continuum: Representation Scale for Visual Display
(Milgram & Kishino, 1994)
Figure 6 illustrates an example of the use of the reality-virtuality continuum taken from the MagicBook Project (Billinghurst, Kato, & Poupyrev, 2001). On the left of the figure is a real book (i.e. the real world environment); in the middle is the same book viewed through an Augmented Reality (AR) display, where figures appear like pop-up characters on top of the book (i.e. mixed or augmented reality); while on the right is the same book viewed within a virtual environment, where the “reader” becomes the characters within the book.
[[image:The_Magic_Project_006.jpg]]
Figure 6. The MagicBook Project: An Example Of The Full Reality-Virtuality Continuum
While the MagicBook project was conceived around the integration of physical (tangible) real world objects with digitally generated virtual world objects, when the real world objects are themselves digital or intangible – such as course materials comprising photographic images, text, or other digital content – the merging of the ‘Real World’ and the ‘Virtual World’ becomes less obvious. For example, real world authors Pamela Woodard and Wilbur Witt have published their works in the Second Life virtual world first or simultaneously with publication in the real world (Bell, 2006). The Second Life virtual world can integrate conventional HTML web page content directly into the virtual environment (Release Candidate, 2008). Content developers, and particularly trainers and presenters in Second Life, routinely import textures and slides and stream sound and video from outside of the virtual world into the virtual space.
In the context of Milgram and Kishino’s reality-virtuality continuum, this research focuses on the right-hand end of the scale, i.e. using a desktop display of a virtual world in which all content is delivered virtually. In contrast to the MagicBook project, this research considers (in the education context) the affordances of two virtualisation strategies – a direct reproduction of real world delivery in the virtual world (in part, by importing materials generated outside the virtual world into it), and a transformation of real world material into virtual material (in part, by recasting those materials into virtually generated form).
===2.6.2 The Degree of Immersion and Presence===
====2.6.2.1 Introduction====
Virtual reality literature often separates a user’s experience of a virtual environment into physical and psychological components (Benford, Greenhalgh, Reynard, Brown, & Koleva, 1998; Biocca & Delaney, 1995; Sheridan, 1992; Mal Slater, 1999; Mal Slater & Wilbur, 1997; Steuer, 1992). The psychological components include interaction (or connectedness) and belief – the contribution of the participant, or their willingness to believe in a reality they would otherwise know to be unreal – while the physical components are aided by the external mechanical and functional capabilities of the system.
In exploring the factors determining the effectiveness of virtual reality environments, Burdea and Coiffet (2003) determined that the aim of virtual reality is to achieve a trio of ‘Immersion, Interaction and Imagination’ (Figure 7), each of which holds equal significance for the user’s experience of virtual reality systems. A virtual reality system seeks to engage the user fully in the virtual space. They proposed that excluding any one of these features reduced the user to passive participation, and ultimately detracted from the perceived ‘reality’ of the experience.
[[image:Immersion_Interaction_Imagination_007.jpg]]
Figure 7. The Three I's of Virtual Reality
Steuer (1992) defined user involvement as a combination of the human experience, which in turn is dependent on the technology (Figure 8). Telepresence (or presence) is the human sensation of ‘being there’ in a virtual environment[5], and is seen as influenced in part by the technology in terms of the vividness (richness, realism) and interactivity (responsiveness) of the environment.
[[image:Steuer_Variables_Influencing_Telepresence_008.jpg]]
Figure 8. Technological Variables Influencing Telepresence (Steuer, 1992)
Slater and Wilbur (1997; 1999) revisited these concepts in later work, defining a user’s experience in terms of immersion and presence. Immersion is seen as an objective measure of ‘systems immersion’ technology, such as field of view, quality of display, etc., while presence is seen as a subjective measure: the psychological sensation of ‘being there’. From here on we will use the terms immersion and presence as defined by Slater and Wilbur.
====2.6.2.2 Immersion====
Benford et al. (1998) propose classifications of artificiality and transportation for collaborative environments (Figure 9) that extend Milgram and Kishino’s reality-virtuality continuum. Artificiality (physical-synthetic) is equivalent to the reality-virtuality continuum. Transportation (local-remote) is the degree to which a participant is removed from their local space to operate in a remote space, which they define to be similar to the concept of immersion. For example, CVEs (Collaborative Virtual Environments[6]) are placed on a scale from partial to full transportation. A fully immersive CVE would represent the ultimate level of transportation: a virtual reality system using devices such as an HMD, data gloves, and tactile and aural equipment that allow for no outside distraction, so that the participant operates completely within the virtual environment and is fully remote from their local environment[7]. A desktop CVE, by contrast, is only partially immersive, as one’s local surroundings form a part of the virtual environment, e.g. a field of view that allows for head turning away from the virtual space (Sheridan, 1992). In the context of Benford et al.’s transportation scale, this research is conducted using desktop CVEs and is therefore only partially immersive.
[[image:Artificiality_Transportation_as_SS_Metrics_009.jpg]]
Figure 9. Shared Space Technology According to Artificiality and Transportation
====2.6.2.3 Presence====
Research in online gaming virtual worlds has tended to focus on the human experience (presence) of virtual worlds rather than on ‘systems immersion’ aspects, while studies of virtual reality environments have tended to consider both. This is possibly a function of the common standard interface for massively multiplayer game environments, which has traditionally been the desktop computer equipped with a mouse and keyboard. Although more advanced input devices (head-mounted displays, 3D mice, etc.) have been available to the mass market for many years, they are not yet widely utilised.
The degree of presence is often linked to the effectiveness of a virtual environment (Witmer & Singer, 1998), and due to its subjective nature it is possibly the most difficult aspect to comprehend and therefore to measure (Mal Slater & Usoh, 1993). Hence, this area has been widely researched, with various explanations as to what constitutes presence in a virtual environment (Schuemie, Straaten, Krijn, & Mast, 2001). The sense of ‘being there’ in the environment is subjective: Slater and Usoh (1993; 1994) describe presence as similar to a person’s ‘willingness to suspend disbelief’, a concept derived from the British poet and literary critic Samuel Coleridge (1772-1834), who in his autobiography (1817) describes the phenomenon whereby a person becomes so engaged in a narrative that they are willing to believe an event is true, even if only for a brief moment. Although suspension of disbelief is today most often linked with media such as film and literature, virtual worlds (especially Role Playing Game (RPG) worlds) exhibit many of the same traits, in which the user can be thought of as an actor within the virtual world who forms a part of the storyline.
A number of presence classification strategies have been proposed by various authors. We will consider:
#Schroeder - focussing on the importance of social interaction
#Bartle – focussing on the degree of commitment in the environment
Schroeder (2006) presents presence as a continuum of shared virtual environments (SVEs) within a three-dimensional model (Figure 10). Presence (x), copresence (y) and connected presence (z) can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. Connected presence can be thought of as the extent to which a relationship is mediated when presence and copresence exist. Mapping is done by comparison between a physical face-to-face relationship (0,0,0) and an entirely immersive environment such as a networked Cave (1,1,1). At face-to-face (0,0,0) there is no presence (and thus no copresence), as no meeting is taking place in a virtual environment, whereas in the case of a networked Cave (1,1,1) the entire relationship (and environment) is virtual, with affordances for high connected presence.
[[image:Presence_Copresence_Connected-Presence_010.jpg]]
Figure 10. Presence, Copresence, and Connected Presence in Different Media for Being There Together
Of interest in Schroeder’s model is the comparison of desktop SVEs and online computer games. The example given in the model for a desktop SVE is Active Worlds, a massively multiplayer online (MMO) social virtual world, and the example provided in his paper for an online game is Quake, which at the time provided for up to 16 players sharing a common space. Both are virtual worlds, use text chat and sound, and use avatars to project the participant into the virtual world (although Quake takes a first-person view exclusively). For the purpose of the analysis, the main differences were perceived to be the number of simultaneous players sharing the common virtual space and the imposition of clear game-driven objectives in Quake, and the absence of those same objectives in Active Worlds. Yet Active Worlds was seen as providing the higher level of connected presence. Why? The distinction was seen to lie in the concept of the ‘game’ rather than in the number of players, when compared to the other SVEs presented in the model. Active Worlds is a social world in which no plot is provided to measure the success or failure of an individual, unlike Quake, where the measure of success is clear and the entire activity and function of the environment is the relentless pursuit of that individual success. It was therefore deduced that a social (game) world provides more connected presence than an individually focussed, plot-driven gaming virtual world (at least as analysed by Schroeder).
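Schroeder’s three-axis model lends itself to a simple computational sketch. The following minimal Python illustration is this researcher’s own rendering: only the end-points (0,0,0) for face-to-face and (1,1,1) for a networked Cave come from Schroeder’s model, while the intermediate coordinate values are hypothetical placements chosen purely for illustration.

```python
# Minimal sketch of Schroeder's (2006) three-axis presence model.
# Axes: presence ("being there"), copresence ("being there together"),
# connected presence ("being connected together"), each scaled to [0, 1].
# Only the two end-points come from the model; the middle values are guesses.
from math import dist

environments = {
    "face-to-face":   (0.0, 0.0, 0.0),  # no virtual mediation at all
    "online game":    (0.5, 0.4, 0.3),  # e.g. Quake: shared space, plot-driven
    "desktop SVE":    (0.5, 0.5, 0.6),  # e.g. Active Worlds: socially focused
    "networked Cave": (1.0, 1.0, 1.0),  # fully immersive, fully mediated
}

# Euclidean distance from the face-to-face origin gives a rough single
# figure for how far an environment sits from unmediated interaction.
for name, coords in environments.items():
    print(f"{name}: {dist((0.0, 0.0, 0.0), coords):.2f}")
```

Note that with these illustrative coordinates the desktop SVE sits further from the face-to-face origin than the online game, consistent with Schroeder’s observation that a social world such as Active Worlds affords higher connected presence than a plot-driven game such as Quake.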
Schroeder’s observation of higher connected presence in social virtual worlds fits with Heeter’s (1992; 2003) definition of social presence, in which she defines presence in terms of individual presence, social presence and environmental presence. The presence of an individual is increased when social relationships are formed, based upon the social component of perceptual stimuli. When an environment or situation is focused on the relationship (rather than on killing a monster, as in RPGs), a higher social presence will be achieved.[8]
Bartle (2003, p. 42) identifies a system of levels of immersion (which in this paper we have defined as presence[9]) based upon a linear scale of the Player (the real person), the Avatar (the digital puppet), the Character (the representation in the world, e.g. character name, role, etc.) and the Persona (one’s identity in the virtual world, where the player is the character and is in the virtual world). Persona is similar to the concept of presence: if your character is killed, ‘you feel like you have died’; there is no distinction between the character and the player – they are one, the Persona. Bartle believes that the avatar and character are just steps along the way to persona. Persona is when a person ‘stops playing the world and starts living in the virtual world’.
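Bartle’s linear scale can be sketched as an ordered enumeration. This is the researcher’s illustrative rendering of the four named levels, not code drawn from Bartle; the numeric ordering simply encodes the linearity of the scale.

```python
# Bartle's (2003) linear immersion scale, sketched as an ordered enum.
# Only the four level names come from Bartle; the encoding is illustrative.
from enum import IntEnum

class ImmersionLevel(IntEnum):
    PLAYER = 1     # the real person at the keyboard
    AVATAR = 2     # the digital puppet being driven
    CHARACTER = 3  # a named representation (role) within the world
    PERSONA = 4    # player and character merge: "living in" the world

def has_presence(level: ImmersionLevel) -> bool:
    """On Bartle's scale, only Persona corresponds to full presence."""
    return level is ImmersionLevel.PERSONA

# The scale is linear: each level is a step along the way to Persona.
assert ImmersionLevel.PLAYER < ImmersionLevel.AVATAR \
    < ImmersionLevel.CHARACTER < ImmersionLevel.PERSONA
print(has_presence(ImmersionLevel.PERSONA))  # True
```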
==2.7 Influences on Virtual Worlds from Art and Literature==
===2.7.1 Introduction===
The concept of a virtual world is by no means unique to computing. The thought of exploring an imaginary realm has captivated people’s imagination throughout time.
“If we define that a virtual world is a place described by words and/or projected through pictures, which creates a space in the imagination real enough that you can feel you are inside of it, then the painted caves of our ancestors, shadow puppetry, the 17th-century Lanterna Magica, a good book, play or movie are all gateways to virtual worlds. Humanity’s most powerful new tool, the digital computer, was also destined to become a purveyor of virtual worlds, but with a new twist: The computer enables the virtual world to be both inhabited and co-created by people participating from different physical locations.”(Damer, 2007, p. 2)
At least with respect to massively multiplayer online virtual worlds/role playing games (MMOVWs, or MMORPGs), all of today’s examples can trace their paradigms to literature. Some, such as Eve, Entropia Universe and World of Warcraft, are amalgams of a body of works and ideas, while others, such as MUD1 (Sword of the Phoenix (Howard, 1932)) and Second Life (Snow Crash (Stephenson, 1992)), were directly inspired by specific literary works.
Consequently, to properly understand the ‘state of the art’ represented by today’s multi-user, connected virtual worlds and the gaming, social and business rules they have adopted to govern them, it is essential to consider the context from which they have been derived and the art that has influenced their creators. While some operational paradigms in virtual worlds are technology constraints, functional capability constraints can be as much a consequence of the imagined world being implemented as a real constraint of the technology of the day. To appreciate this fact one need only compare the camera controls of Project Entropia with those of Second Life – two environments of comparable age – or the commercial capabilities of these two environments with those of World of Warcraft. In each case the differences and apparent restrictions are a game design decision rather than a technology constraint.
===2.7.2 Virtual Worlds of the Arts===
James Pearson (2002) believes that from as early as 30,000 years ago, in the Chauvet Cave in France, shamans used cave art as a means of documenting their experiences of travel to the dream world. Packer and Jordan (2002) draw a similar parallel in their book on virtual reality, describing how the Cro-Magnons of 15,000 BC in the Lascaux caves of south-western France used cave art (Figure 11), together with candles and the acrid aroma of animal fat, as a magical theatre of the senses.
[[image:Cave_Art_BC_011.jpg]]
Figure 11. The caves of Lascaux: Cave Art 15,000 BC
The German composer Richard Wagner’s (1813-1883) concept of Gesamtkunstwerk (total artwork) (Figure 12) has also been cited as an early pioneering influence on the concepts of immersion and presence in virtual worlds (Grau, 1999; Klich, 2007; Packer & Jordan, 2002). Wagner believed that “Artistic Man can only fully content himself by uniting every branch of Art into the common Artwork”, a synergy that includes not only the performance but all that surrounds it, so that mankind “...forgets the confines of the auditorium, and lives and breathes now only in the artwork which seems to it as Life itself, and on the stage which seems the wide expanse of the whole World” (Wagner, 1849, pp. 184 & 186).
[[image:Wagner_Gesamtkunstwerk_012.jpg]]
Figure 12. Richard Wagner's Gesamtkunstwerk (Total Artwork)
===2.7.3 Virtual Worlds of Fiction and Fantasy===
There are numerous examples of virtual worlds that have been explored through fiction and fantasy. Each has contributed to the illusion of virtual worlds becoming a reality (Bartle, 2003; Chesher, 1994).
In Lewis Carroll’s novel Alice’s Adventures in Wonderland (1865), Alice falls down a rabbit hole to explore a fantasy world inhabited by peculiar and anthropomorphic creatures. Similarly, in Carroll’s follow-on novel, Through the Looking Glass (1871), Alice explores a world behind a mirror. Hattori (1991) saw Lewis Carroll’s novels as a paradigm for modern virtual reality systems (Figure 13), blending physical space with fantasy in a rapidly changing environment. To this extent, Carroll’s works provide a perfect analogy for the design and development of virtual worlds (Rosenblum, 1995; West Virginia University, 2008). An explorative virtual world was realised in the children’s computer game The Manhole (1988-2007), which was based upon Carroll’s Alice’s Adventures in Wonderland (Wikipedia, 2008a).
[[image:Alice_via_Caroll_and_Hattori_013.jpg]]
Figure 13. 'Through the Looking Glass' Carroll (1871) & 'The World of Virtual Reality' Hattori (1991)
Within the fantasy literary genre, a key influence has been the works of J. R. R. Tolkien, starting with The Hobbit (1937) and its sequel The Lord of the Rings (1954, 1955) (Figure 14): an adventure fantasy that takes place in an imaginary world called Middle-Earth, containing races such as Hobbits, Wizards, Elves, Orcs, Dwarves and Trolls. Tolkien’s literary style was so popular that the Oxford dictionary coined the term ‘tolkienesque’ for his approach[10].
[[image:JRR_Tolkein_Book_Covers_014.jpg]]
Figure 14. The Hobbit & The Lord of the Rings by J. R. R. Tolkien (1937, 1954, 1955)
With respect to today’s virtual worlds, Tolkien’s contribution lies not merely in the construction of a raft of characters, racial groups and social concepts for role playing game inhabitants and interaction rules, but most importantly in his deep backgrounding of the imagined worlds. He did not merely describe his characters within the context and flow of the storyline; he extended beyond what was needed to tell a story into what was needed to make us believe in the real existence of his virtual worlds, providing the reader with immaculate detail and description to immerse them in the world of Middle-Earth. Both books contained maps (Figure 14), and the final volume of The Lord of the Rings (released in three parts) contains appendices describing chronologies, histories, family trees, languages and translations, and a calendar and dating system. A professor at Leeds and Oxford universities, he approached his work more like an academic anthropological study of an imagined world than a novelist (Macmillan, 2008).
In so doing, Tolkien demonstrated a fundamental understanding of a core strategy in establishing convincing presence: the necessity for a consistent, credible back story underpinning the virtual world. It is an early example of the depth of design that many later virtual worlds would exhibit in order to create a convincing sense of presence for the participant (Bartle, 2003; Schmidt, Kinzer, & Greenbaum, 2007).
Two virtual worlds that have been translated from Tolkien’s literature are the online virtual world ‘Lord of the Rings Online’ (2007) and PLATO’s MUD virtual world ‘Mines of Moria’ (1974).
More recently, literature has turned to imagining realities in which computational virtual worlds are a fundamental component of the plot. It is from this group that many of the terms now used to describe aspects and elements of virtual worlds are derived or were popularised, such as ‘avatar’, ‘metaverse’ and ‘cyberspace’. Recent examples of novels whose plots feature a computational virtual world are True Names (Vinge, 1981), Neuromancer (Gibson, 1984) and Snow Crash (Stephenson, 1992) (Figure 15).
[[image:Recent_VR_Literature_Covers_015.jpg]]
Figure 15. Recent Literature: True Names (Vinge, 1981), Neuromancer (Gibson, 1984), Snow Crash (Stephenson, 1992)
'''Vernor Vinge’s True Names''' is not as well known as other novels in this genre, but it was the first to present the concept of a person entering a computational virtual world and meeting other people in ‘the other plane’ (Kelly, 1995). It was also unique in bringing the concept of anonymity to the digital world: one’s digital persona (handle) differs from one’s real self, and there is a necessity to hide one’s real identity – one’s true name (hence the title). Its concepts were translated into a computational virtual world in the form of ‘Habitat’ – the first graphical social networking virtual world (Farmer, 1992).
'''William Gibson’s Neuromancer''', a true cyberpunk[11] novel, is possibly the most widely quoted in the virtual environment space (Chesher, 1994). In this novel Gibson popularised the term cyberspace, with the concept of a viable parallel online world capable of critically impacting events and commerce in the real world.
'''Neal Stephenson's Snow Crash''' is where the term Metaverse was coined. The Metaverse is a planet-sized city with one continuous street 65,536 kilometres (2<sup>16</sup> km) in length, along which millions of people (represented as avatars) travel daily in search of entertainment, trade or social interaction. Although similar in one sense to Neuromancer, it came from a different perspective: people actually lived in the Metaverse, not as cyberpunks getting up to mischief but as everyday people living a mainstream real life in the virtual world. In this world real commerce was conducted and virtual artefacts were bought and sold with real world consequences – a vision since realised in the development of the virtual world Second Life.
Hollywood has also contributed to the fantasy of virtual worlds becoming reality. Films such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and The Matrix (Wachowski & Wachowski, 1999) (Figure 16), to name just a few, gave us visualisations of virtual worlds that the books could only describe, and in some cases explored the haptic interfaces now being realised (Chesher, 1994).
[[image:VW_Films_Tron_LawnmowerMan_Matrix_016.jpg]]
Figure 16. Hollywood Films
Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992), The Matrix (Wachowski & Wachowski, 1999)
At the time of their release, the novels and movies discussed above may have seemed futuristic and their concepts unobtainable, but today we are much closer (if not already past them), thanks to advances in networking, computational processing power and the understanding of the sociology of virtual environments. A ‘jack-in’ device that stimulates our nervous system to carry us into cyberspace (Neuromancer; Gibson, 1984) may still be some way off (and may be too intrusive for some), and smelling odours or feeling textures within a virtual world may never be quite the same as the real-life experience, but much of what once seemed unimaginable in these works has become reality. With technological advances and the rapid adoption of internet-enabled online virtual worlds, many of these concepts are less science fiction and more science fact than they once were.
==2.8 The History of Computational Virtual Worlds==
===2.8.1 Introduction===
In a lecture delivered by Ivan Sutherland in 1965, the first steps were taken towards the computer-based design, construction, navigation and habitation of software-generated virtual worlds (Packer & Jordan, 2002). Here Sutherland laid down a vision for the development of virtual worlds, as paraphrased by Brooks (1999, p. 16):
<blockquote>
“Don’t think of that thing as a screen, think of it as a window, a window through which one looks into a virtual world. The challenge to computer graphics is to make that virtual world look real, sound real, move and respond to interaction in real-time and even feel real.”
</blockquote>
The new-born medium of the graphical, digital virtual world experienced a “Cambrian Explosion” of diversity in the 1980s and ’90s, with offspring species of many genres: first-person shooters, fantasy role-playing games, simulators, shared board and game tables, and social virtual worlds (Damer, 2007).
The massively multiplayer online virtual worlds of today, with their world-wide user bases, are essentially a consequence of the mass adoption of the internet, which commenced in the early 1990s. Since the internet first achieved general acceptance, these worlds have advanced substantially in technical capability, graphics and subscriber numbers (Figure 17) (Woodcock, 2008). See Appendix B: MMOG Analysis for a break-down of the MMOGs contained in this graph.
[[image:MMOVW_Growth_Rate_017.jpg]]
Figure 17. Massive Multiplayer Online Virtual World Growth Chart 98-2008
The virtual worlds of today (such as World of Warcraft, Entropia Universe, America’s Army and Second Life) represent a convergence of several disparate computational, technical and social origins and drivers. Current virtual worlds combine 3D visualisation, game theory, text messaging, animations, context- and text-sensitive gesturing, natural language processing, spatial voice & audio, artificial intelligence, agency theory, physics, connectedness, persistence, business strategy, sensory hardware and haptic interfaces, telecommunications, 2D image processing, video chroma-keying, social networking and many other influences to achieve their sense of immersion and presence. In this section we explore some of the milestones along these convergent paths.
As many of the influences that have contributed to our latest virtual world are derived from research streams that were concurrently pursued over more than 50 years, we shall look at the history of virtual worlds in six streams:
#Hardware based user interfaces and virtual reality environments
#Early graphical computer games
#Text and Text+ based Virtual Worlds
#2.5 and 3D graphical multi-player virtual worlds, broken down into:
#: a. MMORPGs
#: b. Social Virtual Worlds
#Simulation and Training Worlds
It should be noted that, while we will be considering the history in these streams, some virtual worlds necessarily exist in more than one stream. The grouping is that of the researcher, based on an extensive assessment of the literature, rather than the view of any one author.
===2.8.2 Hardware Based User Interfaces and Virtual Reality Systems===
====2.8.2.1 Introduction====
These two areas are grouped together not because Virtual Reality (VR) systems are a hardware solution, but because work on virtual reality worlds has generally aimed for extremely high levels of both immersion and presence, and has therefore generally (although not always) been coupled with hardware in the form of purpose-built user interfaces – headsets, data gloves, etc. – designed to assist the sense of immersion.
The importance of the progress in VR systems to virtual worlds is that they have contributed or assisted much of the fundamental graphical rendering technology, 3D animation studies and spatial awareness research, and have conceptualised the immersive aspects of virtual worlds.
====2.8.2.2 Sensorama====
One of the earliest inventions in the genre of virtual world simulators was developed by the cinematographer Morton Heilig. Inspired by Fred Waller’s work with Cinerama[12], Heilig presented a paper in 1955, ‘The Cinema of the Future’ (reprinted in Packer & Jordan, 2002). In an extension of Wagner’s (1849) Gesamtkunstwerk (total artwork) concept (Holmberg, 2003), Heilig believed that the logical extension of cinema was to provide the audience with a first-person experience of film using all their senses: “Open your eyes, listen, smell, and feel—sense the world in all its magnificent colors, depth, sounds, odors, and textures—this is the cinema of the future!” (Packer & Jordan, 2002, p. 246)
[[image:Morton_Heilig_Sensorama_Simulator_018.jpg]]
Figure 18. Morton Heilig, Sensorama Simulator, U.S. Patent #3050870, 1962
Heilig developed and patented the Sensorama Simulator (Figure 18) in 1962. The Sensorama was a single-person simulator that offered the viewer a multi-sensory, fully immersive theatre. The viewer could sit and watch a short three-dimensional stereoscopic movie that included stereo sound, an odour generator, force-feedback handlebars, chair motion and wind on the viewer’s face (Rheingold, 1992). Heilig believed that the Sensorama Simulator could become the next generation of theatre, placed in hotels, lobbies or any small space that could fit his miniature theatre (Heilig, 1955, p. 345).
Heilig also recognised that the Sensorama Simulator offered training and learning potential for educational and industrial institutions (Rheingold, 1992, p. 58), but unfortunately it never took off; it arrived at “a time when the business community couldn’t figure out what to do with it” (Laurel, 1991, p. 52). This might have been different a decade later, when Pong kicked off the arcade game industry and when education, industry and government saw great potential in investing in virtual world technology, as they did with the Head-Mounted Display (HMD).
====2.8.2.3 Head-Mounted Display====
In 1968 Ivan Sutherland presented the first computerised graphical HMD (Figure 19) (Sutherland, 1968)[13]. The HMD had a cathode ray tube (CRT) for each eye, presenting a simple three-dimensional wire-frame view of a room with motion tracking as the viewer moved their head. The device became known as ‘The Sword of Damocles’, after the Greek legend of a man placed in a precarious position of luxury with a sword above his head (Oxford Dictionary, 1989): the HMD similarly had a computer suspended above the user’s head, attached by a mechanical arm (Figure 19, right) (Carlson, 2003).
[[image:HUD_The_Sword_of_Damocles_019.jpg]]
Figure 19. Head Mounted Display first called The Sword of Damocles (Sutherland,1968)
The HMD was a significant milestone in the development of virtual reality technology, which has since been used in a variety of applications in virtual worlds. HMDs hold advantages over a traditional computer monitor, such as freedom of head and body movement, uninterrupted viewing in fully immersive HMDs, and simultaneous viewing of real-world and virtual-world artefacts in ‘see-through’ HMDs, sometimes called Augmented Reality Displays (Rolland & Hua, 2005).
Today’s HMDs are more compact than Sutherland’s 1960s prototype (Figure 20). The figure shows, on the left, an HMD used for mixed-reality environments similar to that designed by Sutherland and, on the right, an immersive HMD compatible with several online and gaming virtual worlds.
[[image:HUD_See_Through_and_Immersive_020.jpg]]
Figure 20. Today's Head Mounted Displays - Left: See-Through HMD - Right: Immersive HMD
===2.8.3 Early Graphical Computer Games===
Computer games have had a large influence on the evolution of virtual worlds, both in the development and in the use of the technology. The contribution of games includes computational game theory, 2D and 3D graphics, social modelling, simulation, strategies for achieving presence, artificial intelligence, computational game physics and, possibly most significant, the delivery of a massive consumer market to fund and drive the investment needed for innovation and technology improvement. By far the majority of today’s online virtual worlds were conceived and/or delivered as games; they have subsequently evolved into general business or training platforms, which are sometimes referred to as Serious Games (Annetta, Murray, Laird, Bohr, & Park, 2006).
The early computer games can be traced to a few innovative applications (Figure 21):
*'''Tennis for Two''': In 1958 William Higinbotham developed the first electronic game simulator, using an oscilloscope display that presented a two-dimensional side view of a tennis court. It was a two-player game in which each player could control the direction of the bouncing ball by turning a knob on a hand-held device. Originally developed by Higinbotham to occupy visitors to Brookhaven National Laboratory during open days, the game had queues of people waiting to play (Brookhaven National Laboratory, n.d.). Tennis for Two introduced the concepts of a shared multi-player electronic game experience, a rule-based environment managed by a machine, and an electronic space where the actions of one player in the shared space affected the experience of another. The attention the game attracted demonstrated the willingness of participants to accept the visual and sensory limitations of a machine-managed game environment and immerse themselves in the experience.
*'''Spacewar!''': The idea originated in 1961 with Steve Russell at the Massachusetts Institute of Technology (MIT); by 1962 the game had been released with assistance from his colleagues. Spacewar! was the first official release of a two-dimensional computer game.[14] It was a two-player game in which each player controlled a spaceship that fired bullets at the other while avoiding being pulled into the sun at the centre of the screen. Developed originally to demonstrate the power of the new PDP-1 computer, the game was a good demonstration of both the graphic capabilities and the processing power of the machine (Computer History Museum, n.d.; Markowitz, 2000). Later, in 1969, Rick Blomme modified the game to run on PLATO, which made this the first game to be networked (Koster, 2002; Mulligan, 2002). While Tennis for Two was the first multiplayer electronic game, Spacewar! was the first computer-based multiplayer game. It thus contributed the same key concepts and ideas as Tennis for Two, only for the first time in a computer-managed environment.
*'''Maze War''': In 1973-1974 Steve Colley developed the first three-dimensional ‘first person shooter’ (FPS) game, Maze War, at NASA Ames Research Center. A player would navigate around a maze searching for other players to shoot. As seen below (top right), the player had a first-person view (the eyeball seen in this picture is the other player). Placing the player ‘in-world’ as a part of the game is a significant concept of virtual world games. Maze War also provided other innovations now common to virtual worlds, such as instant messaging, levelling and non-player robot characters (Damer, 2007). The game, which started as a two-player game, was eventually connected to ARPANET (the forerunner of our current internet network technology), allowing several users from remote locations to play and interact (Colley, n.d.; Damer, 2004). Maze War can therefore lay claim to being the progenitor of virtual worlds, but not an actual virtual world, because of its lack of persistence.
[[image:Early_Computer_Games_1958_To_1974_021.jpg]]
Figure 21. Early Computer Games 1958 - 1974
*'''DOOM (1993) (II, 1994)''': a 3D FPS game that was influential on both a conceptual and a technical level (Friedl, 2002; Mulligan, 2000). In DOOM the concept of Maze War was re-implemented in a much more graphically rich 3D environment. Although only a single-player game, the key innovation of relevance was the method used to manage the rendering of the 3D space, which allowed multiple non-player characters to participate in the 3D environment with the player. The strategy adopted was essentially to divide the world into many small rooms surrounded on all sides by walls (essentially a cave system); by rendering only a single room at a time, the entire resources of the computer could be devoted to a known, confined rendering space, thus achieving the illusion of a highly detailed rendering with the limited computational resources available on the PCs of the day. Although higher-quality 3D rendered games were available some seven years earlier on Amiga computers from 1986 (including some utilising real-time ray tracing technology), these relied on dedicated proprietary games-architected graphics cards and did not provide a 3D space management paradigm that could easily be translated to the future demands of online 3D games. The Doom model could be so translated, precisely because it was architected for the graphically and processor-challenged generalised home PCs of the day rather than for proprietary games machines such as the Amiga. The Doom games engine was utilised in many subsequent games and later formed the basis for the model adopted for the online game Quake (Petrich, n.d.; Wikipedia Doom, 2008).
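The room-at-a-time strategy described above can be sketched in outline. The following is a hypothetical illustration only (all names are invented for this example, and this is a simple portal scheme rather than the actual Doom engine): the world is modelled as rooms joined by portals, and only the room containing the player, plus neighbours visible through open portals, is submitted for rendering.

```python
# Hypothetical sketch of room-at-a-time space management: the world is a
# graph of rooms joined by portals, and each frame renders only the room
# containing the player (plus rooms behind open portals), so the full
# rendering budget is spent on a small, known space.

class Room:
    def __init__(self, name, objects):
        self.name = name
        self.objects = objects      # things to draw in this room
        self.portals = []           # list of (door_open, neighbouring Room)

    def connect(self, other, door_open=True):
        self.portals.append((door_open, other))
        other.portals.append((door_open, self))

def visible_set(current):
    """Rooms to render: the current room plus neighbours behind open portals."""
    rooms = [current]
    for door_open, neighbour in current.portals:
        if door_open:
            rooms.append(neighbour)
    return rooms

def render_frame(player_room):
    drawn = []
    for room in visible_set(player_room):
        drawn.extend(room.objects)  # everything else in the world is skipped
    return drawn

hall = Room("hall", ["imp", "barrel"])
vault = Room("vault", ["key"])
crypt = Room("crypt", ["demon"])
hall.connect(vault, door_open=True)
hall.connect(crypt, door_open=False)   # closed door: crypt is never rendered
print(render_frame(hall))              # ['imp', 'barrel', 'key']
```

However many rooms the world contains, the per-frame cost depends only on the occupied room and its open neighbours, which is the property that made the approach viable on modest hardware.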
Around the time of DOOM the game industry realised the importance of connecting people together for online gaming. Seeing the opportunity, developers started adding modem and LAN play, and later TCP/IP functionality, to their games, allowing both single-player and multiplayer connectivity. Early games allowed up to 4 players, but today’s games can have up to 64 players in a single game session (Quake Wars[15]). Some of the better-known brand names included:
*'''Quake''' (1996, a multiplayer extension of DOOM) saw over 80,000 people connected in 10,000+ simultaneous game sessions (Mulligan, 2000).
*'''Warcraft''' (1994) (II, 1995), which would eventually become the basis of the largest MMORPG today, World of Warcraft (2004), now with over 11 million subscribed users (Blizzard Entertainment Inc, 2008).
===2.8.4 Text Based Virtual Worlds===
====2.8.4.1 Text Virtual Worlds: MUDs====
In 1978 the first MUD (Multi User Dungeon) outside of the PLATO system (discussed under Training and Simulators) was created by Computer Science undergraduate Roy Trubshaw (shortly afterwards joined by Richard Bartle) of Essex University in England. A text-based virtual world, coined a MUD by Bartle, it was based upon Robert E. Howard’s (1932) fictional tale ‘The Phoenix on the Sword’. MUD1[16] was an adventure role-playing game, with game levelling and chat rooms, which allowed up to 32 players to connect simultaneously over a remote connection (Figure 22) (Bartle, 2003).
[[image:Bartle_The_First_MUD_022.jpg]]
Figure 22. The First MUD: Roy Trubshaw and Richard Bartle (1978)
Early in the game’s history, Essex University, on whose computers the game was hosted, became a part of ARPANET (the forerunner of the internet), and soon after MUD was distributed through that network and played at universities throughout the world. Some of these institutions were also open for public access. Although copyrighted, many variations of MUD1 were made and distributed freely, from what Bartle (2003) describes as either player inspiration or pure frustration with the 32-player limitation, which made it impossible to play when dial-in lines were fully allocated.
Keegan (1997) identifies two main classifications of MUDs developed during this time (Figure 23) - the Essex MUDs (Trubshaw and Bartle’s) and Scepter of Goth (1978). Unfortunately Scepter died an early death: the game was sold and soon afterwards passed to the creditors when the purchasing company ran out of money (Bartle, 2003). Most MUDs were therefore based upon the ideas and technical structure of Trubshaw and Bartle’s MUD (Bartle, 2003; Keegan, 1997).
[[image:Basic_MUD_Tree_Structure_023.jpg]]
Figure 23. Basic Tree Structure for MUD classification
MUD1 introduced a number of concepts retained by most of today’s virtual worlds. Among which are:
*The role and effectiveness of the text-based narrative and text communication, which contributed to, rather than detracted from, the sense of presence.
*Persistence in game play.
*Shared game space and cooperative (team based) activity.
*Non-player artificial intelligences, called AIs (or non-player characters), as part of the experience.
*Region based environment management.
*Role-playing as a central game theme.
*Characters and avatars (albeit text-based in the early MUDs).
*Game defined goals but player implemented plots.
Region-based environment management is a computational aid that warrants particular attention. It was also used by the DOOM 3D graphics engine to manage multi-user environments, allowing the computer to render the shared space one discrete region at a time. In DOOM this was a room; in MUD1 it was a cave; in more recent virtual worlds it may be as much as a 65,000 sqm area (Second Life). This strategy provides a method of scaling virtual worlds to many regions by distributing region management across many discrete servers, but imposes practical limits on the number of players that can be present in any given region at an instant in time (Hu & Liao, 2004).
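The partitioning scheme described above can be sketched in a few lines. This is a hypothetical illustration (the class names, region size, player cap and hash-based server assignment are all inventions for this example, not drawn from any particular virtual world): the world is tiled into fixed-size square regions, each region is assigned to one of a pool of servers, and each region enforces a cap on simultaneous occupants.

```python
# Hypothetical sketch of region-based environment management: the world
# is tiled into fixed-size square regions, each owned by one server from
# a pool, with a hard cap on players present in any region at once.

REGION_SIZE = 256            # metres per region side (illustrative)
MAX_PLAYERS_PER_REGION = 40  # illustrative per-region limit

def region_of(x, y):
    """Map a world coordinate to the (col, row) key of its region."""
    return (int(x) // REGION_SIZE, int(y) // REGION_SIZE)

class World:
    def __init__(self, servers):
        self.servers = servers   # names of the discrete region servers
        self.population = {}     # region key -> set of players present

    def server_for(self, region):
        # Distribute regions across the server pool (simple hash scheme).
        return self.servers[hash(region) % len(self.servers)]

    def enter(self, player, x, y):
        """Place a player in the region containing (x, y), if not full."""
        key = region_of(x, y)
        occupants = self.population.setdefault(key, set())
        if len(occupants) >= MAX_PLAYERS_PER_REGION:
            return None          # region full: the practical limit in action
        occupants.add(player)
        return self.server_for(key)

world = World(["sim-a", "sim-b", "sim-c"])
server = world.enter("alice", 300, 40)   # lands in region (1, 0)
```

The scheme scales by adding servers to the pool, but the per-region occupancy cap remains, which is precisely the trade-off noted by Hu and Liao (2004).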
MUD1 had a significant impact on virtual world design and development, and its model dominated the online game space until the mid-1990s; MUD1 is therefore often marked as the beginning of the first generation of online virtual worlds (Bartle, 2003). MUD1 can still be played online today at british-legends.com (CompuServe, 2007).
====2.8.4.2 ASCII Virtual Worlds====
In the early 1980s, pseudo-graphical interfaces were added to some MUDs in the form of ASCII virtual worlds. ASCII (American Standard Code for Information Interchange) is the most widely adopted character encoding on western computer systems. ASCII virtual worlds provided a pseudo-graphical display, making use of shape symbols and character-positioning escape sequences to create crude planar maps of the terrain (dungeon) environment. The maps enhanced the description of the room provided by the text.
ASCII pseudo-graphical virtual worlds gave the player a view of the world improved over the simple text prompt and description of MUDs. An example of an ASCII game, Islands of Kesmai (IOK), can be seen below (Figure 24). Developed in 1982 and released in 1984, the game provided the player with a third-person overhead view of the world. Walls were denoted by [], fire by **, and players by letters (Bartle, 1990). IOK was CompuServe’s (a USA ISP) best-selling game, with players paying up to $12.50 per hour to play (based upon connection time, not game played) and usually between 10-60 players online simultaneously (Bartle, 1990). Other ASCII games around this time were MegaWars I & MegaWars III (1983), NetHack (1987 (O'Donnell, 2003)), Sniper! and The Spy (Bartle, 1990).
[[image:RPG_Islands_Of_Kesmai_024.jpg]]
Figure 24. Islands of Kesmai ASCII Text Role Playing Game (1982-84)
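The symbol conventions described above can be illustrated with a short sketch. This is a hypothetical Python example, not code from Islands of Kesmai: it renders a room as a planar map using the same conventions, with `[]` for walls, `**` for fire, and letters for players, each cell occupying two characters so the columns stay aligned.

```python
# Hypothetical sketch of an ASCII world display in the style of Islands
# of Kesmai: each map cell is drawn as a two-character symbol, with
# players overlaid on the terrain as letters.

SYMBOLS = {"wall": "[]", "fire": "**", "floor": ". "}

def render(grid, players):
    """Return a planar map of the room; players maps (row, col) -> letter."""
    lines = []
    for r, row in enumerate(grid):
        cells = []
        for c, cell in enumerate(row):
            if (r, c) in players:
                cells.append(players[(r, c)] + " ")  # player letter + pad
            else:
                cells.append(SYMBOLS[cell])
        lines.append("".join(cells))
    return "\n".join(lines)

room = [
    ["wall", "wall", "wall", "wall"],
    ["wall", "floor", "fire", "wall"],
    ["wall", "floor", "floor", "wall"],
    ["wall", "wall", "wall", "wall"],
]
print(render(room, {(2, 1): "K", (2, 2): "T"}))
# [][][][]
# []. **[]
# []K T []
# [][][][]
```

Because every symbol is a printable character, the same map could be sent to any terminal of the era, which is what made the technique so portable.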
By the mid to late 1980s, home computing and online networking service providers opened the gates to huge expansion for online virtual worlds. People paid for networking services by the hour, which gave these providers a huge incentive to get their subscribers hooked on virtual worlds. There was big money to be made, with 70% of one provider's (Genie's) revenue in the early 1990s coming from games. By 1993 a study showed that 10% of network traffic on the NSFNET backbone (a precursor to the internet, consisting mainly of government and university sites) belonged to MUDs (Bartle, 2003).
===2.8.5 Graphical Virtual Worlds===
The text-based MUDs evolved into two different streams: the 3D First Person Shooters, such as DOOM and Quake, which adopted the room-at-a-time view of the world for 3D rendering, and the 2D graphical online virtual worlds that appeared in the early 1990s. Early examples include NeverWinter Nights (1991-1997), Shadow of Yserbius (1992-1996) and Kingdom of Drakkar (1992-Current) (Figure 25).
[[image:Graphical_2D_Virtual_Worlds_025.jpg]]
Figure 25. Graphical 2D Virtual Worlds
Unlike Habitat and WorldsAway (discussed under Social Networking Virtual Worlds), which predated these games, appearing in the mid-1980s, the graphically enhanced text-based games were fantasy role-playing games -- basically MUDs with graphics. Although 2D, some of these games were displayed isometrically at an angle, which gave the player an illusion of a three-dimensional view; for this reason these games are sometimes referred to as 2½D worlds (Bartle, 2003). These games used more sophisticated graphics (than the pseudo-graphical solutions) to improve the sense of presence experienced by the players, while retaining the text-based narrative.
By the mid-1990s, with nearly 10 million internet hosts (Figure 26) (Slater III, 2002; Zakon, 2006) and price wars between providers, the internet opened its doors to millions, bringing hordes of inexpert computer users wanting to play games (Bartle, 2003). Game design had improved along with the graphical elements of virtual worlds, while graphics rendering capabilities on standard PCs and the emergence of common graphics file standards made the development of virtual worlds possible, practical and more economical.
[[image:InternetParticipatingHosts_Count_1990_to_1998_026.jpg]]
Figure 26. The Internet No. of Participating Hosts Oct. ‘90 - Apr. ‘98
====2.8.5.1 MMORPGs====
By the mid-1990s we saw the first 3D online virtual world, Meridian 59 (1996-2000 & 2002-Current), although technically it used a pseudo-3D graphics engine (Axon, 2008; Bartle, 2003), providing a first-person view in which the player could view all angles of the environment (Figure 27). It marked the beginnings of a new era of virtual worlds, with a massive 25,000 people signing up for the beta release (Axon, 2008). The game unfortunately met with limited commercial success (Bartle, 2003; Friedl, 2002) and was shut down in 2000, but was resurrected in 2002, with the updated version online today at meridian59.neardeathstudios.com.
[[image:Meridian_59_First_3D_Online_Virtual_World_027.jpg ]]
Figure 27. Meridian 59 First 3D Online Virtual World (1996)
The turning point for online virtual worlds was Ultima Online (1997-Current). Ultima had already met with success with the Ultima computer games series. With its online launch it gained 50,000 subscribers within 3 months and was the first online virtual world to crack the 100,000-subscriber threshold within 12 months of release (which it did in under 6 months) (Bartle, 2003; Woodcock, 2008). This added a new dimension to the term multiplayer, in what has now come to be known as the Massively Multiplayer Online Role Playing Game, or MMORPG. Subscriptions peaked at 250,000 in 2003, with 75,000 being reported in December 2007 (Woodcock, 2008).
Ultima Online, a 2½D graphical virtual world, has remained visually much the same (Figure 28), although the client that runs the game (the same concept as a web browser) received a makeover in 2007 with Kingdom Reborn (right). The game has received regular expansions to the world, which provide new challenges and adventures for its players. Back in 2001 the client was upgraded to 3D (Wikipedia Ultima, 2008), but Electronic Arts recently announced they will be de-supporting the 3D client, continuing to support only the 2D client going forward (Electronic Arts, 2007).
[[image:Ultima_Online_028.jpg]]
Figure 28. Ultima Online (1997-Current)
Other MMORPGs that started around the mid to late 1990s, and which can still be played online today, are Furcadia (1996, the longest running), The Realm (1996, the second longest, launched 15 days after Furcadia), Lineage (1998), EverQuest (1999) and Asheron's Call (1999).
In the more recent MMORPGs of today, not much has changed in game design from the original RPGs, but technically they have improved and provide much better graphics for the player (Figure 29). They have also increased substantially in popularity, with the largest subscription-based MMORPG, World of Warcraft, recently climbing to over 11 million players (Blizzard Entertainment Inc, 2008). These players do not, however, play in one virtual world; they are separated into different realms - the same game but with different people. This contrasts with the social virtual worlds like Second Life, where all the users share one virtual world. In the next section we discuss social online virtual worlds, which, although they can host an MMORPG within the world itself (as mentioned earlier), follow a model of a virtual world very different from the dedicated MMORPGs.
[[image:MMOZRG_Eve_and_WOW_029.jpg]]
Figure 29. MMORPG's Eve & World of Warcraft
====2.8.5.2 Social Virtual Worlds====
The first attempt at a commercial large-scale multi-user game was made by George Lucas’s Lucasfilm Games. Habitat, developed by Chip Morningstar and Randall Farmer, began development in 1985 (McLellan, 2004; Ray, 2008; Slator et al., 2007). Habitat was built to support thousands of simultaneous users, ran on the Commodore 64 home computer, and was distributed via the Quantum Link network service provider (later known as AOL). Inspired by the science fiction novel ‘True Names’ (Vinge, 1981), the world contained a fully-fledged economy where citizens of the world could own a virtual business, build a house, fall in love, get married and even establish their own self-governing laws (Morningstar & Farmer, 1990). Habitat, a 2D graphical world, looked similar to a cartoon (Figure 30, left), with the avatar (digital self) taking a third-person view of the world. The storyline was based upon life rather than the fictional storylines of the MUDs, which placed greater emphasis on the social aspect of the world. Lucasfilm's Habitat was first released as a pilot in 1986, then later in 1988 as Club Caribe in North America, which reportedly sustained a population of 15,000 participants by 1990 (Morningstar & Farmer, 1990). In 1990 it was released in Japan as Fujitsu Habitat and, after extensive modifications, was released again in 1995 as WorldsAway (Figure 30, right) (Damer, 2007) and again as Dreamscape in 2008.
[[image:VW_Habitat_and_Worldsaway_030.jpg]]
Figure 30. Habitat (86) First Graphical Virtual World Precursor to Worldsaway (95)
Habitat introduced some key concepts in virtual worlds:
*The term ‘Avatar’ into the general virtual world community;
*The idea of focussing on social networking as a key form of game play;
*An economy where people could trade both in-world currency and artefacts; and
*Most importantly, the concept of living in a virtual world and leading an alternate life not dictated by the rules of a game (as in the dedicated MMORPG environments).
More recent social networking virtual worlds include Active Worlds (1995, 1997-current)[17], Second Life (2003-current) and There (2003-current) (Figure 31) – all of which have achieved a significant volume of educational interest as platforms for delivery of learning. The generalised nature of the social networking sites means that they tend to be more diverse in the range of facilities provided and the purposes to which they can be applied than the role playing game systems. They have generally provided participants with some form of content creation tools including the importing and/or exporting of non-virtual world artefacts. In the next section we discuss further the aspect of education in virtual worlds.
[[image:VW_SecondLife_and_There_031.jpg]]
Figure 31. Social Virtual Worlds: Second Life & There
===2.8.6 Simulation and Learning Systems===
====2.8.6.1 PLATO====
PLATO (Programmed Logic for Automated Teaching Operations) was a system designed for computer-based education at the University of Illinois that started in the early 1960s. Originally developed as a classroom course system (Figure 32), improvements in mainframe technology had by 1972 allowed up to a thousand simultaneous online users, making it the first public online community, featuring electronic course delivery, online chat, bulletin boards, 512 x 512 resolution monitors and 1200 baud connection speeds (Unger, 1979; Woolley, 1994). With over 15,000 hours of instructional development, PLATO was possibly the largest ever investment in educational technology (Garson, 2000).
[[image:PLATO_Lab_Image032.jpg]]
Figure 32. University of Illinois PLATO Lab & Terminal (1961-2006)
By the mid-1970s games made their way onto the university mainframes with great success. Between 1978 and May 1985, about 20% of time spent on PLATO was game usage (Woolley, 1994). Games appeared such as Spacewar! (the 1969 game discussed earlier), Empire (1973, a multi-user space shooter game based upon Star Trek), DND (1974, a MUD[18] based upon the game Dungeons and Dragons), Mines of Moria (1974, a MUD with 248 mazes based upon Tolkien’s Lord of the Rings), SPASIM (1974, a 32-player multi-user FPS spaceship game)[19], Airfight (1974-75, a 3D flight simulator precursor to Microsoft’s Flight Simulator), Oubliette (1977, a first-person 3D MUD) and Avatar (1977-79, a first-person 3D MUD) (Bartle, 2003; Lowood, 2008; Pellett; Wikipedia, 2008b; Woolley, 1994). See below (Figure 33) for some examples of MUDs hosted on PLATO. Many of the games on PLATO were recreated for commercial use as arcade or personal computer games (Goldberg, 2002; Mulligan, 2002; Woolley, 1994).
[[image:PLATO_Popular_MUD_Games_Developed_For_PLATO_033.jpg]]
Figure 33. PLATO: Some Popular MUD Games Developed for use on PLATO (1974-1979)
By 1985, after going commercial, PLATO had established a system of over 100 campuses worldwide (Garson, 2000). Known as the ‘ultimate electronic information and communication utility’, offering over 200,000 hours of courseware (Figure 34) with local dial-up at 300 or 1200 baud connection speeds, access to both social and educational contacts was among the many advances of PLATO that made it an attractive system for the academic community at large (Small & Small, 1984). Over time, with improvements in technology and the cost of maintaining the old technology, the final PLATO system was turned off in 2006 (Wikipedia, 2008b).
[[image:PLATO_Online_Course_Count_1984_034.jpg]]
Figure 34. PLATO Over 200,000 online courses by 1984
A web site has been established for the preservation of PLATO at cyber1.org (VCampus Corporation, 2008), which holds many of PLATO’s games and courseware for public download.
====2.8.6.2 SIMNET====
Military virtual world simulators started with a project called SIMNET (SIMulator NETworking). SIMNET was a DARPA project that enabled the first large scale real-time networked battlefield simulator. Development and implementation occurred on several levels between 1983 and 1990 (Cosby, 1999; Miller & Thorpe, 1995).
Prior to SIMNET, military simulators consisted of immersive virtual reality training devices such as cockpit simulators. Cockpit simulators offered a replicated environment of the ‘real thing’: for example, an aeroplane cabin would be built in its entirety, with motion and sensory feedback, using pre-programmed software to produce repetitive simulations that provided an individual with mastery skills such as low-to-ground dog-fighting or missile avoidance training (Miller & Thorpe, 1995). SIMNET provided a cheaper alternative to the cockpit simulators for certain types of training, and further offered ‘collective skills’, which Miller and Thorpe (1995) define as the cohesive team operations skills, as distinguished from the individual mastery skills taught in cockpit simulators.
SIMNET, a multiuser virtual world (Figure 35), consisted of real battlegrounds with manned vehicles (tanks and helicopters), command posts, semi-automated forces in which a single operator could control many vehicles in the simulation, and the ability to record simulations from any viewpoint (known as the flying carpet) so that they could be replayed, statistically analysed and reported upon. At the conclusion of the program there were 250 simulators operating in nine locations (4 of which were in Europe), providing real-time battle engagements directly under the control of the participants (Lenoir, 2003; Miller & Thorpe, 1995).
[[image:SIMNET_Battlefield_Simulator_035.jpg]]
Figure 35. SIMNET: Battlefield Simulator at Fort Knox USA (1983-1990)
SIMNET had a substantial impact on military training after being recognised as the key success factor in winning the 3-day ‘Battle of 73 Easting’ in the Gulf War (1991), which led to several projects based upon the SIMNET technology (Figure 36) (Foley & Gifford, 2002), with the USA government commissioning $2,549 million in 1997 for modelling and simulation projects (Lenoir, 2003).
[[image:US_Military_Networked_Simlator_Projects_1938_To_2001_036.jpg]]
Figure 36. Timeline of US Military Network Modelling and Simulator Projects (1983-2001)
In 1997 a project named Synthetic Theater of War (SToW) commenced; it was a program to construct an environment combining various simulators into one large-scale distributed battle simulator capable of involving thousands of participants (Budge, Strini, Dehncke, & Hunt, 1998; Tiernan, 1996). This project has since become Joint Semi-Automated Forces (JSAF) (Hardy et al., 2001), which now enables more than 100,000 simultaneous simulations at a time (US Joint Forces Command, 2008). The Australian military has also adopted the JSAF platform to build its own Course Of Action Simulation (COA-Sim) for joint military operations training, exercises and planning (Carless, 2006; Gabrisch & Burgess, 2005).
====2.8.6.3 Military Use of Commercial Games Engines & The America’s Army====
In 1996, General Krulak of the US Marines tasked the Marine Combat Development Command to explore and approve the use of commercial games engines for military training purposes. One outcome of this effort was the collaboratively developed Marine Doom, based on Id Software's shareware Doom engine and Doom Level Editor. The simulation could be configured for the simulation of special missions (such as hostage rescue) immediately prior to engagement and used to rehearse the planned mission (Lenoir, 2003).
In July of 2002 the US Military released a milestone in multi-user training game simulators in the form of America’s Army: Operations (Lenoir, 2003; Zyda, 2005). Based on Epic Games’ ‘Unreal’ games engine, the game created a virtual world that reproduced aspects of a career in the US Army, from ‘boot-camp’ commencement and weapons and tactical training through to various operations scenarios. Although originally developed and released as a recruitment tool, the game was also claimed to have been utilised to improve training outcomes by army instructors at Fort Benning (Zyda, 2005).
Now, with 26 subsequent releases (as of 2008) and available for the PC, cell phone and Xbox, the game has more than 9 million registered users exploring entry-level to advanced training, and operations in small units (Figure 37). Beyond a focus on realism that extends to accurate tree placement on training courses at the simulated training camps, the game adds a further dimension of presence for participants through the active involvement of current and former real-world soldiers as players in the game (designated with a star icon in player profiles), interacting with non-military participants (Department of the Army, 2008).
[[image:Americas_Army_037.jpg]]
Figure 37. America's Army (2002)
From a training perspective, anecdotal evidence from army trainers regarding the game is that sessions in training scenarios such as the firing range or obstacle courses improve subsequent results in the real-life versions of these activities (Zyda, 2005). The US Army, possibly one of the largest investors in virtual world game technology, recently announced plans to spend $50 million USD over the next 5 years to create 70 gaming systems in 53 locations around the world for combat training (Robson, 2008).
==2.9 Virtual Worlds for Education==
===2.9.1 Architecture Considerations===
====2.9.1.1 Introduction====
To appreciate properly the discussion of the literature examining educational directions in virtual worlds, the researcher provides a brief overview of the key architectural differences to assist the reader. This material is based on the researcher’s examination of a variety of game environments and virtual worlds, and on discussions with experienced and knowledgeable users of these environments, rather than sourced from the work of other authors. As such, the discussion is interpretive rather than authoritative.
Some of these environments have existed for only a few years, and have not yet enjoyed a comparative analysis undertaken by the academic community. As such, this discussion might not normally reside in the literature review, but it is felt that the placement of this discussion in this sub-section will assist the reader in better appreciating the issues explored in the literature discussion throughout the remainder of the section.
====2.9.1.2 Considerations of Operational Design====
While all of today’s major virtual worlds include capabilities for user interaction, sharing of the environment, persistence, avatars, business rules, streamed audio and text, there are substantial differences in the technologies used to deliver the virtual experience. While some of these differences may create only marginal differences in the world experience of the casual user, from the perspective of the educator and content creator the differences are substantial.
The major offerings can be viewed under the following groups (note: in each category the researcher has selected only a few example worlds, in most cases other options also exist):
#Proprietary closed engine (e.g. World of Warcraft, Everquest)
#Client resident closed content and world model with open engine (e.g. Shareware Doom)
#Streamed (or semi streamed) closed content and world model with closed engine (Entropia Universe)
#Open client resident content and world model with closed engine (Flight Simulator X, America’s Army, Unreal games, Quake, Doom)
#Open streamed content and world model (Hipi Hi, TruePlay, Active Worlds)
#Open streamed content and world model with out-of-world interfaces (Second Life V1, VastPark)
#Open streamed content and world model with out-of-world interfaces and open client (Second Life V1.2)
#Open streamed content and world model with out-of-world interfaces, open client and open server (DeepSim)
'''Architectural Components and Implications in Education'''
Below are some of the architectural components and their implications for the structure of a virtual education environment.
{| border="1"
|'''Architectural Components'''
|'''Implications in Education'''
|-
|Closed Proprietary System
|A closed proprietary system cannot generally be altered. These systems are generally not appropriate for education purposes unless the existing virtual world itself is built for the purpose of the training (such as a purpose built simulator). Closed systems can be used in education for group interaction and discussions, if not for lectures or anything requiring more than text or audio (assuming the system supports group audio communications).
|-
|Closed or Open Environment
|Whether the content and world model are closed or open determines whether the textures, objects and artefacts of the world can be modified or created by users. This ability is essential if the world is to be utilised in education as anything more than a 3D discussion forum.
|-
|World Content
|Whether the content and world model is client resident or streamed goes to the complexity of distributing course content, and the dynamics available in delivery. If the content is streamed, it can be changed in real time, but will usually require a high speed internet connection. Systems supporting streamed content generally also include the tools for developing some, if not all, of the streamable content. If the content is client resident, the client’s internet connection can generally be slower, but the content must be centrally published, distributed to client systems and installed locally prior to use. It cannot be changed in real time, and content production will not generally be supported directly in the virtual world tool set, often requiring advanced 3D modelling skills in dedicated 3D modelling environments.
|-
|World Interfaces
|The existence of out-of-world interfaces goes to whether content from other sources, such as internet web pages, audio or video, can be streamed into the world and integrated with the world content and model. Systems providing this capability with streamable open content offer the greatest potential for inexpensive production of course material and distribution of that material to students.
|-
|Client / Server Engine
|Whether the client or server engine is open or closed goes to whether the hosting software itself can be modified. Generally this should not be necessary for education if the capabilities of the engines driving the world are otherwise sufficient. Where the content and world are otherwise closed, but the engines are open, the existing content and world could be replaced by interfacing the game engine to a new world with new content.
|}
====2.9.1.3 Options for Content Modification====
The ability to modify the content of a virtual world is essential if the educator is to deliver course content in-world beyond that of an interactive discussion or monologue.
There are essentially three ways content can be modified by end-users in current virtual world environments (as opposed to systems providers or publishers) depending on the operational design of the environment:
#'''Level Editor''' (e.g. Doom, Half Life, America’s Army, Flight Simulator). Applicable to client resident worlds (i.e. systems where the world is stored on each client computer and distributed as a separately published download). A level editor is a content editing tool that allows an entire simulation to be created, including the world model, textures, characters, behaviours, etc. Level editors usually support importation of textures, animations, etc into the ‘level’ and then distribution of the entire level to a central server for redistribution to clients.
#'''Client Content Editing Tool''' with import/export (e.g. Second Life, VastPark, etc). For environments where building and content creation is part of the ‘game play’, the client will be provided with a content editor. These environments provide a simplified model for constructing shapes and objects (e.g. Second Life’s prims) and some means for importing complex objects such as organic shapes, textures, animations, sound, etc.
#'''Out-of-world interface''' (e.g. Second Life, Active Worlds). Potentially available in both client resident and server resident (streamed) worlds. An out-of-world interface allows some aspect of the in-world user experience to be drawn directly and live from an off-world location such as a web page, internet-resident database or streaming SoundCast server.
====2.9.1.4 Implications of differential content capabilities====
Virtual worlds are composed of components (objects) and functions that are managed by the virtual world (or game) engine and together constitute the capabilities of the world. Not all worlds have the same object management capabilities built into their engines. For the purposes of this discussion, the range of capabilities will be considered to be:
#'''Terrain''' – the land form or map of the virtual space. Essentially all virtual worlds offer some form of terrain map (although the terrain map may not be ground, but rather simply a 3D space).
#'''Avatars''' – Discussed extensively already, the avatar is the user’s projection into the virtual world and may or may not be customisable.
#'''Structural objects''' – Including buildings, furniture, ornaments, statues, models, etc. These are the virtual world equivalent of objects in the real world. They may or may not be animatable and scriptable. If they are scriptable they may be able to become autonomous agents, depending on the capabilities of the scripting engine.
#'''Textures''' – The visual covering of any object, terrain, or even avatars. The ability to display and upload/import textures is (generally) essential to the ability to display lecture materials like slides, etc (but note the existence of streams as a potential alternative).
#'''Animations''' – Avatars and non-player characters appear to walk, sit, stand, change facial expressions, etc because of the animation being played at the time. Without animations an object might move from one point to another, but it will not change its apparent state. The ability to modify animations is advantageous for creating a sense of realism, but not generally essential to the ability to deliver a lecture or every type of simulation. All virtual worlds examined offered some range of built-in animations. Some allow animations to be imported, modified, or strung together to create more complex animations.
#'''Scripts''' – Scripting is the capability to programme the objects and behaviours in the world. In worlds modified by level editors, a programming language is generally provided as part of the level editing environment and ‘compiled into’ the level before it is published and distributed. In user modifiable worlds where scripting is supported (like Second Life), the scripting editor and compiler are provided as part of the client application and scripts are dynamically modifiable. In some architectures the scripts are stored in the objects and distributed with them (so that if an object is moved between worlds/simulators, the script and behaviours move with it), whereas in others the scripts are centrally stored and controlled for the world/level and are not available outside of that world, level or simulator. Scripts govern the behaviour (movement, animations, actions, sounds, appearance, world responses, inter-object communication, etc) of objects. The capability and simplicity of the scripting engine’s language design is critical to the options available to educators in building a simulation.
#'''Streams''' – Streams include any media that is streamable, such as audio, video, web-page content, etc. The availability of streams is an extension of (or possibly an alternative to) the ability to import textures. From an educational standpoint it represents the ability to deliver video or sound presentations, or draw lecture materials directly from the internet. Depending on the world engine, stream content may be dynamically published (drawn down to the client as required, as in Second Life) or packaged into the client resident world (as in America’s Army).
#'''Non-player Characters''' (also called Bots, AIs or MOBs – mobile objects) – These are essentially characters that look like avatars but are completely controlled and managed by the engine. They interact with players/avatars in a semi-intelligent manner. Their availability and capability vary significantly across worlds. In HalfLife and America’s Army, the AI capability is available within the engine and has considerable ‘intelligence’, in some cases with the ability to learn and modify behaviour. In other worlds (such as Second Life) they are not directly supported by the virtual world engine at all. The existence of non-player characters can directly impact the type of learning simulation that an educator can build, as they can provide user feedback and, if implemented to provide a realistic experience, a feeling of presence within the environment for the user.
#'''Text Communication''' - Text chat (including instant messages, group communication chat, etc) is the standard communication strategy in all worlds. It is always instant and dynamic (in that it does not have to be pre-packaged into the world). It is a functional capability rather than an object, and may or may not be logged or copied depending on the client capabilities.
#'''Multi-way Voice Communication''' – Most virtual worlds do not support voice directly, although this function has been increasingly offered over the last twelve months. Multi-way voice communication enables a group of players to converse as if in a conference call, without the necessity to type all communication as text. It is different from streams, in that every client can be a sound source to every other client, whereas streams are a one-way communication from a point source to many destination receivers. Clearly the availability of voice communication impacts both the type of student and the form of discussion that can be undertaken in a learning situation.
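The object-attached scripting model described in the capability list above (where a script travels with its object and reacts to world events, as in Second Life) can be illustrated with a minimal sketch in Python. The `ObjectScript` class and the event names `touch` and `collision` are hypothetical assumptions for illustration, not any real virtual-world API:

```python
# Hypothetical sketch of an event-driven object script: the script is
# attached to (stored with) its object and reacts to events raised by
# the world engine. Names here are illustrative, not a real API.

class ObjectScript:
    """A script that travels with its object and handles world events."""

    def __init__(self):
        self.handlers = {}  # event name -> handler function

    def on(self, event, handler):
        """Register a handler for a named world event."""
        self.handlers[event] = handler

    def dispatch(self, event, *args):
        """Called by the engine when an event reaches the object."""
        handler = self.handlers.get(event)
        return handler(*args) if handler else None


# A door object whose behaviour is defined entirely by its attached script.
door = ObjectScript()
door.on("touch", lambda avatar: f"{avatar} opened the door")
door.on("collision", lambda avatar: f"{avatar} bumped into the door")

print(door.dispatch("touch", "Student01"))  # prints "Student01 opened the door"
```

Because the behaviour lives in the object's script rather than in the engine, moving the object between simulators (where the architecture permits) carries its behaviour with it, which is the property the text identifies as significant for educators building simulations.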
In selecting the platform for delivering an educational experience, the extent to which the educator requires any or all of these capabilities within a virtual world will probably influence the decision. Some of these capabilities have only recently become generally available, and others are still in only rudimentary forms. In the literature review that follows, the approaches and content adopted, and the outcomes achieved have necessarily been constrained by capabilities of the technology options available at the time and the architectural constraints of the virtual world used.
===2.9.2 Education Applications in Virtual Worlds===
====2.9.2.1 Introduction====
During the 1970s, 1980s and early 1990s, perhaps the most significant multi-user online environment for education was the PLATO system. From the mid 1990s onwards, the influence of this system waned as it was progressively superseded in user interface capabilities by the emerging 3D online games, social networking systems and virtual worlds custom built for specific subject matter applications.
Today the use of public online virtual worlds is gaining popularity with educators, with a recent special purpose committee of educators (The New Media Consortium & EDUCAUSE, 2007) identifying that virtual worlds will have a significant impact on the future of teaching, learning and creative expression within higher education. In the next section we will discuss some of the research findings on virtual worlds being used for educational purposes.
====2.9.2.2 Education Uses in Virtual Worlds====
Early work in education using text based MUDs showed that they supported constructive knowledge-building communities, offering affordances of coordinated presence with evidence of interactive learning and collaboration across time and space (Dickey, 2003).
The period from the late 1990s until today has been typified by educators experimenting with the potential for mass market games engines (and more recently virtual worlds) to be re-tasked as education environments (Annetta et al., 2006; Beedle & Wright, 2007; Gikas & Van Eck, 2004). In some cases, such as America’s Army, the ‘game’ environment was built with the specific goal of recruitment and training in mind (Zyda, 2005); in others, as with Microsoft’s Flight Simulator, a game evolved over time with the assistance of subject matter experts into an accurate simulation tool for the game’s audience (Lenoir, 2003). In still other cases a games engine (the operating system of a game) has been adapted to create a purpose built learning tool: educators and students at MIT utilised the Neverwinter Nights tools to create a historical game based on a battle in the Revolutionary War, and MIT's Games-to-Teach Project produced playable prototypes of four games, including Biohazard, developed jointly by MIT and the Entertainment Technology Center at Carnegie Mellon University, which trained emergency workers to deal with a cataclysmic attack (King, 2003).
The early 3D virtual worlds, with their simplistic graphics bearing little resemblance to the real world, nevertheless provided students with advantages over traditional learning methods whilst fostering collaboration in multiuser virtual worlds. An extensive study of virtual reality technology in education was performed by Youngblut (1998), who examined 35 different research studies in education from 1993-1998 that varied in technology use, subject discipline and age group. Below is an example of VARI House and Virtual Physics, both of which were custom built (Figure 38), VARI House being a single user virtual world and Virtual Physics a multiuser virtual world. Although the studies were mainly research based (as opposed to applications in course work), the research showed for both single and multi user environments that virtual world technology in many cases surpassed traditional learning methods in areas such as subject matter understanding, memory retention, student collaboration and constructive learning methods. Some obvious disadvantages were technology constraints, cost, development effort and usability (Youngblut, 1998), which for the most part could be attributed to the infancy of the technology, the formative years of computer based learning and the lack of general use of computers by students, which had yet to permeate society as a whole.
[[image:Education_In_Virtual_Worlds_in_1950_to_60_038.jpg]]
Figure 38. Education in Virtual World Mid 1990s
====2.9.2.3 Online Education Uses in Virtual Worlds====
As identified in the architecture considerations section, virtual worlds that are to be used in educational settings must enable content modification if learning is to consist of anything more advanced than an interactive conversation. For the purposes of this research, the researcher has chosen to focus on virtual worlds that support the dynamic delivery or streaming of content (with the building tools provided as part of the environment), rather than those worlds where a separate level editor is required and a client resident world model must be installed on the client computer prior to use. The literature surveyed in this sub-section will therefore focus on the work done in two such environments – Active Worlds and Second Life.
=====2.9.2.3.1 Active Worlds=====
Online virtual worlds gave educators access to environments without the cost and complexity of developing their own custom software. One of the first online virtual worlds that made research and development in education feasible (given its architectural qualities) was Active Worlds (1995, 1997). Officially known as Active Worlds Universe because it consists of many worlds, Active Worlds provided educators with the opportunity to rent or buy their own world, allowing restricted access to invited guests, building tools and content management capabilities. Below is a screenshot of Active Worlds (Figure 39). As can be seen, the current client consists of four sections: left – communication and navigation options; right – integrated web browser; bottom – chat window; and middle – 3D environment. This type of client is generally called a “browser” by the environment developers.
[[image:Active_Worlds_Universe_039.jpg]]
Figure 39. Early Online Social Virtual World: Active Worlds Universe
'''Active Worlds Research'''
During the late 1990s to the early 2000s, several educational institutions set up a presence in Active Worlds for various projects, from research to actively using Active Worlds as an online learning environment (see Smith, 1999 for a list of virtual learning projects, most of which were in Active Worlds). The early research into online virtual world based education using Active Worlds showed promise.
Dickey (1999, 2003, 2005) undertook research into the viability of Active Worlds being used by geographically distant learners for both formal (a university business computing skills course) and informal courses (an Active Worlds building course). Her research studies showed that the 3D virtual world offered advantages in fostering constructive learning, student and teacher collaboration, visual representation of course context and course content, and student engagement and participation. Some of the disadvantages identified were essentially environment specific and included the lack of support for collaborative activities like a whiteboard or collaborative interactive writing spaces, a word limit on single chat postings, a single shared chat channel providing no separation of teacher/student discussion and no ability for turn taking, and kinetics (animation) constraints such as the inability to raise a hand to alert the instructor.[20]
Dickey also identified a number of opportunities specifically enabled by a 3D environment. While some of the previously identified advantages (such as collaboration and student management and participation) might be duplicated in other forms of online education tools, the 3D modelling of the course itself (the visual representation of course context and course content) was an advantage specific to the 3D environment.
Course context modelling as provided in Dickey’s research (1999) was a 3D representation that illustrated the structure of the course by the use of individual buildings and plazas (Figure 40). Each building was a topic in the subject, which provided resources to aid learning and a meeting place where students could collaborate for group projects around this topic.
[[image:Visual_Course_Structure_in_Virtual_Buildings_040.jpg]]
Figure 40. Visual Representation of Course Structure by the use of Individual Buildings
Course content modelling as provided in Dickey’s research (1999) was a 3D representation that the student had to build in order to understand the concept of the subject material (Figure 41).
[[image:Visual_Represnetation_of_Course_Content_041.jpg]]
Figure 41. Visual Representation of Course Content
These alternative methods provide a good example of the power and adaptability of a 3D modelling environment applied to education. The course context provided the student with a method by which they could visualise the learning objectives and progression of the course. The student had to visit each building within a specific time frame and complete the contained content. The 3D modelling of course content gave the learner multiple viewpoints of the actual subject material, providing interactive learning that was believed to enhance the student’s understanding of the subject topic.
Clark & Maher (2006) looked at the role of place and identity in a 3D virtual learning environment using Active Worlds, through analysis of chat logs and the physical locality of avatars within group discussions. They found that a sense of place can be achieved in a 3D virtual learning environment, where identity and presence play a role in establishing the context of the learning place. The students formed a strong bond with their avatars and indicated that they felt a sense of presence, as measured by a series of subjective scales, within the virtual learning environment. Similarly, Dickey (2003) found that the 3D desktop virtual world provided qualities of presence similar to those of an immersive virtual reality world.
=====2.9.2.3.2 Second Life=====
Second Life (started 2003) consists of two worlds: the Second Life Teen Grid and the Second Life Adult Grid. The Teen Grid provides access to 13-17 year olds and educational instructors. Its functionality is the same as the Adult Grid's, with the exception that all content has a PG rating. The Adult Grid is where the universities and colleges for students over 17 years of age are found. Other educational content in Second Life includes an extensive list of museums, galleries, simulations, business product development, role-playing spaces, employee and public business training courses, etc. Similar to Active Worlds, educators are able to rent or purchase land, allow open or closed access to the public, and build and develop on their land.
One major difference between Second Life and Active Worlds is that the former has an in world economy with in-built functional support enabling the trading of virtual products and services using ‘Linden dollars’, backed by content copyright and duplication controls and augmented by a provider managed exchange where real dollars can be exchanged for Linden dollars (and vice versa). This fundamental difference provides an incentive for content developers and service providers to actively support and expand the world with content, and therefore enables access to a large body of pre-constructed content or to an entire world-wide industry of content developers at extremely reasonable rates (compared to real world 3D developers providing similar content outside of Second Life) (Joseph, 2007). The building and scripting tools are easier to master than traditional 3D rendering tools, are delivered free as part of every user’s world browser, and are sufficiently powerful that just about anything imaginable can be constructed (Schmidt et al., 2007).
Second Life’s standard interface, as seen below (Figure 42), offers extensive functionality over that of Active Worlds. Some of the more common features seen in the figure are built-in world, content and people search facilities (left), a mini map (top right), an inventory library (bottom right), a local chat channel (with standard ranges of 15, 30 or 60 meters from the text source) and group chat channels (worldwide range, for up to 25 groups per avatar), customisable streaming media players (for sound, video and web page content), an in-world or external HTML web browser (linking both in-world and outside-world content), private or public multi-player voice facilities, etc.
[[image:Second_Life_042.jpg]]
Figure 42. Online Virtual Social World Second Life (Circa 2008)
Another difference from Active Worlds is avatar control: Second Life avatars can use a roaming camera (whereas Active Worlds only provides first and third person views). The roaming camera enables users to control their view of the world with the mouse without the need to move their avatar. Once mastered, this functionality offers users a powerful, easy and fast way to navigate objects (the camera can even pass through objects such as walls).
Due to these and other technological advances over Active Worlds, Second Life has developed a large education community over the last couple of years. For instance, SIMTeach (June, 2008), the Second Life Education Wiki, identifies over 200 educational institutions in Second Life, of which 138 listed are universities, colleges and schools. The Second Life Education (SLED) list server has over 5,000 world-wide members. The New Media Consortium (NMC, a group that hosts education islands) has over 100 universities on their land, and the Second Life Teen Grid has over 90 educational projects (Linden & Linden, 2008). Figure 44 p88 provides some examples of the training and learning activities in Second Life, representing a mixture of educational institutions, corporations and government agencies.
The content of Second Life is entirely user created. The availability of content developers and potential students already experienced in using the environment is dependent on the take-up and expected future growth of the environment. Figure 43 shows the user base and economic statistics for the first quarter of 2008 as provided by Second Life’s proprietor Linden Lab (2008a). As of November 2008 Second Life had 16,318,063 users (1,344,215 of whom had logged on within the previous 60 days). A break-down of Second Life’s demographics as at November 2008 can be seen in Appendix I: Second Life Demographics.
[[image:Second_Life_User_and_Econ_Stats_Q12008_043.jpg]]
Figure 43. Second Life User & Economic Statistics for Q1 2008
[[image:Second_Life_Training_and_Learning_044.jpg]]
Figure 44. Second Life Training and Learning
'''Second Life Research'''
Educators are using Second Life for both formal and informal purposes. Some educational institutions have set up entire virtual campuses modelling their real world campus, while others are modelling purpose built virtual education structures. The relative youth of Second Life means that there is considerable variation in the maturity of educational efforts across the virtual world, and limited peer reviewed studies yet published. Many educators are still experimenting, while others, having the active support of their institutions, are actively using the environment for partial or entire subject delivery. Here we will look at some of the research undertaken in Second Life that was current at the time of writing, most of which has been published since 2006; given the technological advances that have occurred in Second Life from 2007 onwards, we will concentrate specifically on the later research.
Martinez, Martinez, & Warkentin (2007) researched the delivery of a lecture to geographically distributed third year university students in Second Life. The lecture was delivered in a conventional lecture room setting using traditional chalk and talk style delivery, with lecture slides and the chat channel for instruction; no voice was used.[21] According to the lecturer’s experience, using text only delivery the time to deliver the content was double that of a face to face lecture. This was also confirmed by the students in their survey. In the student survey some admitted they felt distracted by the novelty of the environment and were overly concerned with ancillary aspects such as their avatar’s appearance. Others admitted to being distracted by concurrent activities external to the environment, such as multi-tasking with other programs (e.g. MSN messaging) on their PCs whilst at the lecture. Others experienced technical difficulties and could not get back into the lecture after being accidentally logged out. In spite of these short-comings, when asked to rate the lecture experience on a scale of 1-10 the average student response was 8.5. In this study it was noted that some of these distractions and difficulties could be put down to first time user experience. The lecturer also felt that this lecture could easily have been pre-recorded and delivered online, and that active learning techniques could have improved its delivery in Second Life (Arreguin, 2007).
Joseph (2007) notes that a consequence of using Second Life (or virtual worlds in general) for teaching is that sessions generally take longer than traditional methods, but believes that this is not an issue per se, as time to complete the task should come second to the effectiveness of the experience. Joseph also believes (from experience) that the avatar projected on the screen and the sense of presence experienced by the participants are more effective for learning than a live video feed.
Kofi, Svihla, Gawel, and Bransford (2007) researched the potential of virtual worlds to provide efficiency and innovation for adaptive learning. In their study, students were presented with a maze to navigate that simulated the problem solving skills required for learning in a comparable real life learning scenario. Kofi et al. found that Second Life was able to provide enough functionality and support for learners to apply new concepts in order to solve the presented problems, as long as they were provided with key indicators of possible outcomes. They also found that the use of 3D learning environments required the same amount of instruction as equivalent real world learning, and that simply building a model did not, of itself, provide sufficient information for the learner to learn in this instance; learners also needed to be continuously prompted and guided in order to reach the end learning objective.
In another example, Second Life was used to support the learning objectives of 13 third year college students, aged between 19 and 26, on a course on Digital Entertainment and Society, where the students were geographically distributed around the world (Gonzalez, 2007). Both lectures and assignment work were conducted within Second Life. The lectures consisted of a video presentation and an in world field excursion. Assignment work required some in-world building and an exercise using Linden dollars, with a student presentation on completion. No students had used the environment before, but an acclimation exercise was sufficient to provide them with the skills required to undertake course work in Second Life. At the end of the course students were given a survey, with results presented below (Table 1).
{| border="1"
|colspan="3"|'''Elements that Second Life Added:'''
|-
|
|'''Agree'''
|'''Disagree'''
|-
|Enjoyment
|100%
|0%
|-
|Technical difficulties
|100%
|0%
|-
|Interaction with tutor
|62%
|38%
|-
|Interaction with classmates
|62%
|38%
|}
Table 1. Survey Results for Digital Entertainment and Society Second Life Subject
The technical difficulties result was explained largely by the network latency experienced by the students. Each student used their own computer with an average connection speed of 512 Kbs – not especially fast, nor ideal for use in the Second Life environment. No mention was made in the study as to whether the student computers met Linden Lab's system requirements (2008c). As Second Life is a streaming virtual world, where content is downloaded on-demand from Linden Lab's servers located in the USA to the local computer, connection speed can be an important factor in technical performance. Other major influences from a technical perspective include the computer's graphics card and the amount of onboard RAM. The Second Life browser does offer many settings for optimising performance on low-end machines, but if the minimum system requirements are not met then the user's experience of the virtual world will be reduced significantly, with dropouts, lag and poor graphics.
==2.10 Learning & Instructional Design Theory==
===2.10.1 Introduction===
Learning in any world (real or virtual) requires well thought out instructional design. Learning is a process of the mind, regardless of whether the body is present in the virtual world or the real world. Instructional components for learning, regardless of medium, include (DONCIO et al., 2008):
*Clear, concise, and appropriately structured content
*Activities that draw relationships between concepts, challenge learners' thinking and understanding, and reinforce information
*Evaluative measures that determine if knowledge assimilation and retention have occurred
In this research the focus was on the use of new technology in education as opposed to education applied to new technology; therefore this section only provides an overview of applicable theory required to assist in the instructional design, delivery and assessment of the subject material presented to the research participants in this study. Gagne’s Nine Events of Instruction and Bloom’s Taxonomy of the Cognitive Domain were selected to assist in this task.
===2.10.2 Behaviourism and Cognitivism===
There are two main traditional schools of thought in learning theory. These are Behaviourism and Cognitivism (DONCIO et al., 2008; Lewis, 2001).
*Behaviourist (Objectivist): views the mind as a ‘black box’; no account is taken of personal or past experience. The mind starts with a clean slate, where a stimulus produces a response. Only when a change in behaviour is observed has learning occurred. Learning is discrete, measurable and quantifiable.
*Cognitivist (Constructivist): views the mind as a continuously evolving organism. Knowledge is constructed from past material and personal experience. Learning is unique to the individual, relating new information to previously learnt knowledge.
The University of Washington, Seattle (2008) compares the two approaches and provides a discussion of each in terms of philosophy (Table 2, p93), learning outcomes, instructor role, student role, activities and assessment. The philosophies of these approaches are opposing and therefore produce different methods of instruction (Lewis, 2001; Nash, 2007).
Behaviourism was the first to be defined in learning theory while cognitivism developed later as a response to perceived limitations of behaviourism in understanding and adapting to new learning concepts (Lewis, 2001; Mergel, 1998).
While some constructivists argue the merits of constructivism as a distinct theory, viewing knowledge as something constructed by the learner through the process of learning, other writers view constructivist ideas as an evolution of the fundamental cognitivist school. This position is illustrated in Table 2, where the behaviourist and constructivist-enhanced cognitivist philosophies are compared using a consistent comparative organisation of views (see Dabbagh, 2006; Mergel, 1998).
Constructivists draw a distinction between cognitive constructivism and social constructivism, in which the former emphasises exploration and discovery on the part of each learner while the latter emphasises the collaborative efforts of groups of learners as sources of learning; but for our purposes it is sufficient to distinguish the behaviourist and cognitivist approaches. Over the years many practical teaching methods have evolved with concepts that encompass both approaches.
[[image:TABLE_Instructional_Design_Behaviorism_Cognitivism_045.jpg]]
Table 2. Instructional Design: Comparative Summary Behaviorism and Cognitivism
(University of Washington, 2008)
===2.10.3 Gagne’s Nine Events of Instruction===
Gagne’s theory of instruction can be divided into three areas (Corry, 1996): taxonomy of learning outcomes, conditions of learning, and levels of instruction. There are considerable similarities between Gagne’s ‘taxonomy of learning outcomes’ and Bloom’s ‘taxonomy of the cognitive domain’, so a discussion of these will be provided in the next section of this thesis.
Gagne breaks ‘conditions of learning’ down into internal and external learning conditions. Internal learning concerns the previously learned capabilities of the learner; external learning is the instruction or stimuli presented to the learner. While Gagne’s theory takes an essentially cognitivist approach, it recognises both behaviourist and cognitivist influences on instructional learning. For our purposes, it is the ‘levels of instruction’ outlined by Gagne that are of particular interest, and these we will explore in this section.
Gagne (1985) presents a systematic approach to instructional design termed the ‘nine events of instruction’, presented below in Figure 45 (Clarke, 2000)[22]. These nine events were specifically designed for the teaching of intellectual skills.
[[image:GAGNE_Nine_Steps_To_Instruction_046.gif]]
Figure 45. Robert Gagne's Nine Steps of Instruction (Clarke, 2000)
The nine instructional events with their corresponding cognitive processes can be described as follows (Clarke, 2000; Kearsley, 2008):
#'''Gaining Attention (Reception)''': Grab the attention of the participant by presenting a teaser in order to get the participant interested and motivate them to learn more about the topic that will be presented. This could be done using methods such as a movie, phrase, storytelling or a demonstration.
#'''Informing Learners of the Objective (Expectancy)''': Provide the participant with the objectives in order to assist them in organising their thoughts ready to receive the new information that will be presented.
#'''Stimulating Recall of Prior Learning (Retrieval)''': Provide the participant with any background that may assist them in building upon the new knowledge they are about to receive. This helps to place a framework in their mind based upon previous knowledge.
#'''Presenting the Stimulus (Selective Perception)''': This is where the new learning begins. Information should be chunked and organised meaningfully in order to avoid memory overload and to assist in the learning of new knowledge. Chunk the information into a sequence of learning events, breaking it down into constituent parts with a structure and purpose that span different areas of comprehension. The revised Bloom’s taxonomy (discussed in the next section) can be used to assist in forming the presented information.
#'''Providing Learning Guidance (Semantic Encoding)''': Assist the participant to obtain a deeper understanding of the new knowledge so that the information can be encoded into long-term memory. During instruction, provide examples, non-examples, analogies, graphical representations, etc., to assist the semantic encoding process.
#'''Eliciting Performance (Responding)''': Letting the learner do something with the new knowledge or test their new knowledge to confirm they have a correct understanding of the information.
#'''Providing Feedback (Reinforcement)''': Analyse the learner’s understanding of the subject matter presented and provide feedback to correct any misunderstood knowledge. Provide immediate feedback and reinforcement of the new knowledge (e.g. questions and answers).
#'''Assessing Performance (Retrieval)''': Test that the new knowledge is understood and the learning objectives have been met. This could be in the form of a test or a demonstration by the learner to assess if they have mastered the information.
#'''Enhancing Retention and Transfer (Generalisation)''': Generalise the information so that the knowledge transfer can occur, inform them of similar problems or a similar situation so that the acquired knowledge can be put into a new context.
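The nine events above can be read as an ordered checklist for lesson planning. As an illustrative sketch only (the event names are paraphrased and the example activities are hypothetical, not drawn from any study cited here), a draft plan can be screened for events it has not yet addressed:

```python
# Gagne's nine events as an ordered lesson-plan checklist (illustrative sketch).
GAGNE_EVENTS = [
    "gain attention",
    "inform learners of the objective",
    "stimulate recall of prior learning",
    "present the stimulus",
    "provide learning guidance",
    "elicit performance",
    "provide feedback",
    "assess performance",
    "enhance retention and transfer",
]

def missing_events(lesson_plan: dict) -> list:
    """Return the instructional events the plan has not yet addressed."""
    return [event for event in GAGNE_EVENTS if event not in lesson_plan]

# Hypothetical partial plan for an in-world lecture.
plan = {
    "gain attention": "short machinima teaser inside the virtual world",
    "inform learners of the objective": "objectives slide at the in-world lecture area",
    "present the stimulus": "chunked lecture segments plus an in-world field excursion",
}

print(missing_events(plan))
```

Because the events are kept in instructional order, the returned list also shows where in the sequence the gaps fall.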
===2.10.4 Bloom’s Taxonomy===
The Taxonomy of Educational Objectives, also known as Bloom’s Taxonomy, is widely used[23] to assist in the preparation of learning objectives and the assessment of learning outcomes. The learning outcomes of a student are the results of their learning experience in a course, and should be a direct consequence of the course objectives (Monash University, 2008). Hence the application of Bloom’s taxonomy of educational objectives in forming course objectives provides a measure by which to assess students’ learning outcomes.
The original work of Bloom’s Taxonomy was developed by an American committee of educational psychologists chaired by Benjamin Bloom that presented over a period of time three domains: cognitive (knowledge) (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956), affective (attitudes) (Krathwohl, Bloom, & Masia, 1964), and psychomotor (motor skills) (Dave, 1967, 1970; Harrow, 1972; Simpson, 1972). In forming educational course objectives Bloom’s cognitive domain is applied to assess the knowledge and intellectual component of a curriculum.
Nearly half a century later, Bloom’s cognitive domain was revised (Anderson et al., 2001; D. R. Krathwohl, 2002) by a committee of eight, two of whom had worked on the original published work (Krathwohl, from the original committee, and Anderson, as editor). The revision was made as a result of many years of application and research, and has since been accepted by many educators as a replacement for Bloom’s original work. The changes made are as follows (Figure 46) (Anderson Research Group, n.d.; D. R. Krathwohl, 2002):
*The names of six major categories were changed from noun to verb forms.
*Comprehension and synthesis were renamed understand and create respectively, to better reflect the nature of the thinking defined in each category.
*Create was moved to the highest, that is, most complex, category.
*The revised Taxonomy is not a cumulative hierarchy.
*A taxon of remember was devised to replace that of Knowledge; and
*A two-dimensional Cognitive Taxonomy Table was formed by subdividing the original Knowledge taxon.
[[image:BLOOM_Changes_in_Cognitive_Domain_047.jpg]]
Figure 46. Changes in Bloom’s Cognitive Domain
====2.10.4.1 Revised Bloom’s Taxonomy of the Cognitive Domain====
A substantive difference lies in the handling of “Knowledge”. The revised Bloom’s cognitive domain, as shown in Table 3, was extended to include a knowledge dimension, so the revised cognitive domain now consists of a two-dimensional table: the Knowledge Dimension and the Cognitive Process Dimension. This table gives the instructor a tool for classifying learning objectives, where learning objectives are specific statements of the discrete learning outcomes, or intended results, to be achieved by the end of instruction. The instructor defines the learning objectives and classifies each into the appropriate cell of the two-dimensional matrix of cognitive and knowledge dimensions. This assists in instructional design and assessment, and provides a means of balancing the learning objectives across methods of instruction.
[[image:BLOOM_TABLE_Revised_Taxonomy_048.jpg]]
Table 3. Revised Bloom’s Taxonomy Table
(Anderson et al., 2001, p. 28)
'''The Cognitive Process Dimension'''
The Cognitive Process Dimension provides the column values for Table 3 above. This dimension describes the level of learning and comprehension required to complete a task, with each level differing in complexity on a scale from 1 to 6. The cognitive processes are defined as 1. Remember, 2. Understand, 3. Apply, 4. Analyse, 5. Evaluate and 6. Create, each of which contains further sub-processes, with 19 specific cognitive processes in total. Table 4 provides an overview of each cognitive process with its defining verbs. Verbs are used to classify an objective. For example, an objective ‘to recall the six states of Australia’ would be classified under remembering: recall is the verb that classifies the learning objective into level “1. Remember” of the cognitive dimension.
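This verb-based classification can be sketched with a small hand-built verb table. The table below is an abbreviated, illustrative subset (Anderson et al. list 19 specific processes, far more than shown here), so the sketch is an assumption about representation, not the published taxonomy:

```python
# Hypothetical, abbreviated verb table for the cognitive process dimension.
COGNITIVE_LEVELS = {
    "recall": "1. Remember",
    "recognise": "1. Remember",
    "summarise": "2. Understand",
    "explain": "2. Understand",
    "implement": "3. Apply",
    "differentiate": "4. Analyse",
    "critique": "5. Evaluate",
    "design": "6. Create",
}

def classify(objective: str) -> str:
    """Match the first known verb in the objective to its cognitive level."""
    for word in objective.lower().split():
        if word in COGNITIVE_LEVELS:
            return COGNITIVE_LEVELS[word]
    return "unclassified"

print(classify("to recall the six states of Australia"))  # → 1. Remember
```

An objective whose verb is not in the table falls through as "unclassified", which in practice would prompt the instructor to restate the objective with a taxonomic verb.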
[[image:Cognitive_Process_Dimension_Processes_049.jpg]]
Table 4. The Six Categories of The Cognitive Process Dimension And Related Cognitive Processes (Anderson et al., 2001, p. 31)
Bloom’s original cognitive taxonomy was based solely upon the values contained in the cognitive dimension (with the exception of the differences previously discussed). Bloom believed that the cognitive process was a cumulative learning process in achieving a learning outcome. For example, under the old taxonomy of the cognitive domain, in order to ‘analyse’ subject matter the student would first need to have mastered knowledge/remember, comprehension/understand and application/apply, whereas the revised taxonomy does not assume this cumulative hierarchy. The early Bloom’s cognitive domain took a behaviourist approach to instruction, whereas the revised cognitive domain holds that learning can take place at any level without mastering previous levels. This is a fundamental shift in the philosophical grounding of Bloom’s taxonomy of the cognitive domain, away from the behaviourist approach to learning.
'''The Knowledge Dimension'''
The Knowledge Dimension was added to the taxonomy by subdividing (and modifying) Bloom’s original knowledge category, and appears as the row values in Table 3 above. It defines how knowledge is constructed, which can be Factual, Conceptual, Procedural or Metacognitive. Table 5 provides an overview of the knowledge types and their meanings.
The knowledge dimension separates the noun (or subject matter) from the stated learning objective. For example, continuing the objective discussed above, "to recall '''the six states of Australia'''" would be factual knowledge, where the bolded words make up the noun construct. This noun is factual because the learner either knows the states or they do not; knowing them is the basic element required to solve the problem.
[[image:Major_Types_and_Subtypes_Knowledge_Dimension_050.jpg]]
Table 5. The Major Types And Subtypes Of Knowledge Dimension (Anderson et al., 2001, p. 31)
The knowledge dimension was added because it provides further insight into the type of knowledge a student is required to master. The original work made this assumption implicitly, as knowledge was the first level in a cumulative hierarchy, but the revised knowledge dimension gives the instructor greater understanding by defining knowledge as a separate dimension. For example, for the objective ‘to recall the six states of Australia’ the student needs to Remember Factual Knowledge.
The knowledge dimension, like the cognitive dimension, is not a cumulative hierarchy; learning can start anywhere within the knowledge dimension.
'''Using the Revised Bloom’s Cognitive Domain to Assist in Instructional Design'''
To assist in formulating instructional design, Anderson et al. (2001) provide, for the cognitive dimension, sample objectives, corresponding assessments and assessment formats (chapter 5) and, for the knowledge dimension, specific details, elements, generalisations, structures, models, etc. (chapter 4). This assists in formulating specific tasks and in defining the level of knowledge required of the student. It also helps to ensure that objectives, and the testing of those objectives, span the required range of cognitive and/or knowledge categories, and that the student is assessed fairly in areas directly related to the objectives.
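One way to picture this balancing exercise is to tally classified objectives across the two dimensions and look for empty or overloaded cells. The representation and the sample objectives below are assumptions for illustration, not a method prescribed by Anderson et al.:

```python
# Tally hypothetical course objectives, each already classified into a
# (cognitive process, knowledge type) cell of the revised taxonomy table.
from collections import Counter

objectives = [
    ("remember", "factual"),       # e.g. recall the six states of Australia
    ("understand", "conceptual"),
    ("apply", "procedural"),
    ("apply", "procedural"),
    ("create", "conceptual"),
]

table = Counter(objectives)

for (process, knowledge), count in sorted(table.items()):
    print(f"{process:10s} x {knowledge:12s}: {count}")
# A heavy concentration in one cell, or whole rows/columns left empty,
# suggests rebalancing the instruction and its assessment.
```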
====2.10.4.2 Bloom’s Taxonomy of the Cognitive Domain Applied to a Digital Environment====
'''Bloom’s Digital Taxonomy of the Cognitive Domain'''
Churches (2008) has extended the (revised) Bloom’s cognitive domain for digital learning by taking the cognitive process dimension and including verbs for emerging technologies. As can be seen below (Figure 47), the words highlighted in blue are the digital emerging-technology verbs, categorised using the (revised) Bloom’s cognitive levels as the basis for interpreting complexity. For example, bookmarking (a remembering process) is simpler than programming (a creating process).
[[image:BLOOM_Revised_As_Digital_Taxonomy_051.jpg]]
Figure 47. Bloom's Digital Taxonomy
Churches further added to his classification system a rubric (scoring criteria) for these technologies, similar to the sub-classification system used in Bloom’s cognitive domain. For example, Table 6 displays the rubric for bookmarking, broken down from simplest to most complex.
[[image:BLOOM_Bookmarking_Rubric_For_Digital_Taxonomy_052.jpg]]
Table 6. Bookmarking Rubric for Bloom’s Digital Taxonomy
'''Bloom’s Taxonomy of the Cognitive Domain applied to Games'''
Wang & Tzeng (2007) proposed using the (revised) Bloom’s taxonomy of the cognitive domain as a method for understanding the application of knowledge in digital games. They believed that players learn in various ways within computer games, and recognised how little work (if any) had been done in analysing such e-learning platforms in a structured taxonomic manner. They proposed using Bloom’s taxonomy of the cognitive domain as a method by which to assess the cognitive processes in a computer game.
[[image:BLOOM_Taxonomy_For_Games_053.jpg]]
Figure 48. Bloom’s Taxonomy for Games
The research used a game called Food Force, a problem-solving, mission-oriented game. Figure 48 summarises the conclusion of their research. As can be seen in Figure 48, players exhibited both personal and social feedback across Bloom’s cognitive levels. They found that players experienced cognitive processes individually across all categories of the Bloom’s cognitive model, and displayed social interaction for the higher-level categories of Analyse, Evaluate and Create.
==2.11 Summary==
The acceptance of the latest crop of virtual worlds, such as World of Warcraft, Second Life, Entropia Universe, There, Eve, America’s Army and others, by the internet-using public as an integral part of their lifestyle is possibly the most significant paradigm shift to occur in the last 10 years. Statistics on user volumes and retention rates show consumption in the tens of millions of users, spread evenly across ages from youth to middle age, with an approximately even gender balance (at least in the social worlds) (KZERO Research, 2007; Woodcock, 2008; Yee, 2006). The growth rates of these worlds collectively have been, and are projected (by industry analysts) to continue to be, rising dramatically for the foreseeable future.
With the current convergence of disparate technologies represented by these systems, the general public now have affordable single platform multi-media collaborative environments with sufficient realism to create virtual immersive spaces where presence is achieved at a level sufficient for them to lead virtual existences and establish social networks that rival their real world existence.
The linking of these spaces with affordable (often free) tools that enable the public to create new 3D spaces and content has, over the last eight years, resulted in a world-wide content developer base with substantial skills and a highly competitive market for purchasers of those skills, often at very low rates.
With the combined market pressures of minimising education delivery costs, improving education outcomes, and reaching as wide a market as possible it is understandable that educators have shown an extended interest over many years in the possibilities of virtual environments for education delivery. So with the advent of the latest generation of creativity focused social worlds like Second Life over the last few years, it is not surprising that the uptake by universities and educators (numbering in the hundreds of institutions) has been as substantial as it is.
A brief retrospective of the work in simulators, virtual reality and 3D games shows that the potential of these environments extends beyond the virtual ‘chalk-and-talk’ to enabling education delivery strategies, even for campus-based students, that cannot economically be delivered using reality-bound means.
With traditional real world learning environments there is an extensive body of tested knowledge that can provide clear guidance as to workable frameworks for the design of course work. The extent to which and how these methods can or should be applied to the virtual world learning space remains an open question.
</div>
[[Category:Featured Article]]
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
d875646505c796db4cb09ab9a1b432278c0a2c28
357
329
2018-10-28T00:34:00Z
Bishopj
1
/* 2.8.4.2 ACSII Virtual Worlds */
wikitext
text/x-wiki
<div class="nonumtoc">
=CHAPTER 2: Virtual Worlds - Concepts, History, and Use in Education (Literature Review)=
==2.1 Introduction==
Gartner (2007) predicts that as many as 80% of active internet users will have a ‘Second Life’ in a virtual world by the end of 2011. Depending on your definition of ‘virtual world’ this may seem a little ambitious. Certainly, to the extent that virtual worlds are taken to include massively multi-user online environments supporting collaborative exchange of information in shared virtual space, the prediction might prove reasonably safe. To the extent that the definition is constrained to massively multi-player online games, the prediction may prove a little “braver”.
Today’s virtual worlds represent the convergence of multiple technology streams, with the latest examples of the genre representing the merger of internet, telecommunications, instant messaging, virtual reality, 2D & 3D graphics, a variety of 3D modelling technologies, spatial sound, distributed databases, spatial indexing, mapping, streaming data transmission, physics, scripting languages, object-oriented software, agent theory, artificial intelligence, networking, economic modelling, online trading systems, game theory and many, many more technologies.
While the developers of many virtual worlds are content within the game space, some virtual world developers, such as Linden Research (developers of Second Life), have ambitions to be the web platform of the future (Bulkley, 2007). To this end a number of the commercial developers of virtual worlds have joined forces with major corporate consumers, systems integrators and US government bodies to explore common standards for inter-operability of virtual world platforms, a necessary first step in moving the technologies from the isolated proprietary place they now inhabit to a world-wide shared web platform (Terdiman, 2007).
This chapter explores virtual worlds, reviewing the literature on alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education. The chapter concludes with an examination of traditional education taxonomy and relates it to the virtual world context, as a basis for structuring an approach to exploring the education affordances offered by two approaches to education in virtual worlds.
==2.2 Virtual Worlds==
===2.2.1 What is a Virtual World?===
====2.2.1.1 In Search of a Definition====
“Virtual worlds are places where the imaginary meets the real”. (Bartle, 2003, p. 1)
Virtual, as defined in the Oxford Dictionary (1989) with respect to the computing context is: “… not physically existing as such but made by software to appear to do so from the point of view of the program or the user….” and defined in the virtual reality context to be “… a notional image or environment generated by computer software, with which a user can interact realistically as by using a helmet containing a screen, gloves fitted with sensors, etc.” (1997).
The term world is defined in the Oxford Dictionary (1989) as “the ‘realm’ within which one moves or lives”.
In simple terms, therefore, a ‘virtual world’ can be defined as a computer-software-generated realm in which a user moves, exists or lives in a manner that appears real to the user.
A common definition for the term ‘virtual world’ is passionately debated in the literature (see Combs, 2004; Jennings, 2007; Reynolds, 2008; Wilson, 2007). It is a term used to describe many types of software environment, from a simple MUD (Multi User Dungeon, also referred to as Multi User Dimension or Domain) (Bartle, 2003; Keegan, 1997; Slator et al., 2007) to a sophisticated, fully immersive 3D virtual reality environment used in gaming, physical training simulators or social interaction spaces (MetaMersion; Patel, Bailenson, Jung, Diankov, & Bajcsy, 2006; Van Dam, Forsberg, Laidlaw, LaViola, & Simpson, 2000). The term virtual world can be used to describe a single-user walk-through simulated environment (Dalgarno, 2004; Youngblut, 1998) or an environment such as a massively multiplayer online role-playing game (MMORPG) like World of Warcraft (Bainbridge, 2007). The term is also interchanged with others such as virtual environment, synthetic world, mirror world, metaverse, virtual universe and artificial world[2] (Grøstad, 2007).
Bartle (2003, p. 1) provides the following definition:
<blockquote>
“Virtual worlds are implemented by a computer (or network of computers) that simulate an environment. Some -but not all- of the entities in this environment act under the direct control of individual people. Because several such people can affect the same environment simultaneously, the world is said to be shared or multi-user. The environment continues to exist and develop internally (at least to some degree) even when there are no people interacting with it; this means it is persistent.”
</blockquote>
Therefore, using Bartle’s definition in conjunction with the Oxford Dictionary definition provided above a virtual world can be defined as:
<blockquote>A shared software environment (or realm) in which a person, represented as a projected entity (such as a digitally projected image, text identity or other computational representational object), moves, exists or lives in a manner that appears real to the person, capable of affecting, and being affected by, that environment in a manner that simultaneously affects the experiences of other entities within the environment, and which generally remains persistent once the user has left the world.
</blockquote>
The key components of this definition are:
#A shared environment in which a real-world participant shares a computationally generated artificial space with other real world participants and/or other computationally generated entities.
#The nature of the real-world participant’s projection into the computationally generated virtual space.
#The characteristics of the space, which establish a sense of realism to the participant.
#The manner and extent to which the real world participant is able to affect the shared space.
#The nature and form of persistence that the artificial space retains.
Throughout this section we will examine the current state of these components: the ideas and literature contributing to the current expression of these concepts in the form of currently available virtual worlds. The realisation of virtual worlds in software has been (and continues to be) a rapidly evolving field, continually consolidating influences from fiction, mechanical and electrical engineering, computer science, gaming theory, telecommunications, social science, commerce, religion and sociology. It is a field where advances are made as much in the act of amateur invention as in formal science, and one in which the academic literature frequently lags the leading edge of the advances by a significant degree.
===2.2.2 Recognising a Virtual World by its Features===
While there is not as yet a single common set of universally accepted attributes, the literature offers a variety of feature based definitions that attempt to provide a basis for classifying whether a given application or environment is, or is not, a virtual world. Across these competing views there are some features that are most frequently repeated.
Coming from the perspective of virtual worlds as gaming platforms, Bartle (2003, pp. 3-4) proposes that a virtual world should adhere to the following conventions:
*'''Physics''': The world contains automated rules for the players that effect change in the world.
*'''Character''': The player is part of the in-world experience, represented by a character with which they strongly identify.
*'''Interactions''': All interactions with the world are channelled through the character.
*'''Real-time''': Interactions in the world take place in real time.
*'''Shared''': The world is shared in common with other characters.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While the player is not present, their state in the game remains unchanged.
Bartle tends to use the term character, for what this thesis refers to as an avatar, and considers that the player (which will be identified as ‘the intelligence’ in this thesis) must strongly identify with that character. In the context of role playing games where the player assumes an identity not their own, this aspect of the feature list goes to recognise the effectiveness of the immersion and sense of presence the player experiences (concepts we will be exploring later), but outside of this space, where the player and the ‘character’ may be one and the same, this feature is less of a distinguishing criterion.
His use of the term Physics, in the context of an application genre that may include 3D environments, is perhaps a little confusing. In these spaces Physics most commonly refers to the physics engine that manages the simulation of avatar and object dynamics in the space (such as gravity, acceleration, force, momentum and limb movement). As used by Bartle, the term includes the ‘business rules’ and behaviours of the system: the rules governing all interaction, not just those simulating physical movement.
The nature of the shared space and interactive channel imply that the actions of one player affect the experience of another.
Edward Castronova (2001, pp. 5-6) proposes that a virtual world should have the following features:
*'''Interactivity''': The world exists on one computer and can be accessed via a network (or the internet) by many simultaneous users. The actions of each user influence other users in the world.
*'''Physicality''': Users access the world via a computer, which provides a first-person view of the world; the world is generally ruled by natural laws much like the real world, with scarcity of resources.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While the player is not present, their state in the game remains unchanged.
Castronova’s feature requirements are essentially a subset of Bartle’s, although with the possible omission of the expectation that interaction is necessarily real time.
Sun Microsystems Inc (2008, p. 3) proposed the following common features of open virtual worlds (i.e. multi-user virtual worlds open to public access over the internet):
*Shared space, allowing multiple users to participate simultaneously.
*Users interact with one another and the environment.
*Persistence.
*Immediacy of the interactions.
*Similarities to the real world rules.
We might perhaps reject Sun’s expectation of any need to assimilate ‘real world rules’, as this would exclude many fantasy role-playing games from being classed as virtual worlds, but aside from this aspect Sun’s list is essentially consistent with the views of Bartle and Castronova.
These three sources are essentially consistent with the body of the literature; making allowance for additional attributes and some latitude in interpretation, we can establish a minimum feature list that would be generally accepted:
*The environment is shared;
*Interaction are in real-time;
*A person participates in the world through some form of representation with which they identify and are identified and that facilitates interaction and recognition (such as a character or avatar);
*Interactivity in the world is channelled though the avatar;
*Changes induced by a participant influence the experience of the space for other participants;
*The rules governing the world and its interactions are shared and commonly applied; and
*The world is persistent.
==2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World==
While Bartle (2003) refers to a participant’s projection into a virtual world as a “Character”, the more widely accepted name today for a real world participant’s projection into a virtual world is an Avatar, and this is the term adopted in this research.
The word avatar derives from avatara, a Sanskrit word meaning “descent of a deity” or incarnation, utilised by the Vaishnavism religious tradition of Hinduism. The Hindu concept of an avatar is thought to originate as early as the second century B.C.E (Sheth 2002). One of the most recognised Hindu deities is Vishnu (Figure 1). In Hinduism, Vishnu is said to have a standard list of ten avataras (collectively known as Dasavatara), one of them said to be Buddha (Siddhārtha Gautama), the founder of Buddhism (Sheth 2002).
[[image:Vishnu_Hindu_Avatar_001.jpg]]
Figure 1. Hindu Avatara
Left: Visnu (or Vishnu) Hindu deity the protector and preserver of the universe
Right: Ten avatars of Visnu (Dasavatara)
(Vivekananda Centre, 2008)
In computing terms, little has changed from the original Hindu meaning of avatar. As with the Hindu avatara, the virtual world participant can be thought of as “descending” or being “projected” from reality to become a computational representation in a virtual world. In virtual worlds, an avatar is generally (although not exclusively) a graphical representation of the user’s persona (Deuchar & Nodder, 2003), although it can also be a representation of a system or a function in some applications (Sheth, 2003), or a simple name in the form of a text string (in some text based MUDs), and is evolving to include virtualisations of other senses (such as aural and tactile) (S.-Y. Lee, Kim, Ahn, Lim, & Kim, 2005). The graphical representation of an avatar is thought to have originated in a networked multi-user virtual world game called Habitat in 1984 (Bye, 2008; Morningstar & Farmer, 1990). Early research suggests that the use of digital avatars in virtual worlds reduces users’ inhibitions and dissolves, or reconstructs, social status among users (Dede, 1995; Dickey, 2003; Rheingold, 1993).
The projected form is not necessarily a recognisable representation of the real world human form. The avatar might, for example, be represented as an image of a human, an animal, an animated mechanical object, a simple name, or any form appropriate to the virtual world and within the technical capabilities of that world’s object management systems. For example, in Eve (a space-based virtual world) all avatars are space ships, whereas in Second Life (a socially based virtual world) an avatar can take any form (Figure 2), although regardless of appearance the avatar’s name remains the same.
[[image:SecondLife_Digital_Avatars_002.jpg]]
Figure 2. Digital Avatars of Second Life (Levine, 2007)
In terms of today’s virtual worlds, and for the purposes of this research, an avatar should be thought of as a combination of a representation, an agent and an intelligence:
#The ''representation'' may be visual, aural, tactile or any other sense conveying the presence of the avatar to other avatars or agents in a virtual world.
#The ''agent'' is the library of capabilities of the avatar in a virtual world.
#The ''intelligence'' (or actor) provides the tactical and strategic control of the avatar, which could be artificial or natural (eg human).
In a virtual world the decisions of the intelligence are communicated to, and realised by, the agent. The consequence of the agent realising (enacting/implementing) the intelligence’s commands may result in a change in the state of both the agent and the representation, eg, in a 3D Graphical virtual world, a command to walk issued by the intelligence might result in the agent changing position and entering a movement or walking state and triggering the representation to display a walking animation (enter a walking animation state).
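The walking example above can be sketched in code. The following is a minimal illustration (all class and method names are hypothetical, chosen for this sketch and not drawn from any particular virtual world implementation) of the representation/agent/intelligence decomposition: the intelligence issues a command, the agent realises it by changing its own state and position, and that in turn triggers a state change in the representation.

```python
class Representation:
    """Conveys the avatar's presence to other avatars or agents
    (visually, aurally, tactilely, etc.)."""
    def __init__(self):
        self.animation_state = "idle"

    def display(self, state):
        # e.g. enter a walking animation state when the agent starts moving
        self.animation_state = state


class Agent:
    """The library of capabilities of the avatar in the virtual world."""
    def __init__(self, representation):
        self.position = (0.0, 0.0, 0.0)
        self.state = "idle"
        self.representation = representation

    def walk(self, dx, dy, dz):
        x, y, z = self.position
        self.position = (x + dx, y + dy, z + dz)  # agent changes position
        self.state = "walking"                    # agent enters a movement state
        self.representation.display("walking")    # representation follows suit


class Intelligence:
    """Tactical and strategic control of the avatar: a human player
    or an artificial controller."""
    def __init__(self, agent):
        self.agent = agent

    def command(self, action, *args):
        # decisions are communicated to, and realised by, the agent
        getattr(self.agent, action)(*args)


rep = Representation()
agent = Agent(rep)
player = Intelligence(agent)
player.command("walk", 1.0, 0.0, 0.0)
```

After the command is realised, the agent has moved and is in a "walking" state, and the representation's animation state has changed accordingly: the single command propagates through all three layers.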
==2.4 A Taxonomy of Virtual Worlds==
===2.4.1 Introduction===
As might be expected, the literature contains extensive discussion of the appropriate taxa to be applied in classifying virtual worlds, and an equal measure of disagreement among authors as to the appropriate criteria to be applied. In spite of the range of discussion, most attempts are incomplete and therefore capable of classifying in a useable form only a portion of the genre. To be fair, this space is evolving rapidly; possibly as fast as it is classified, a new entrant appears that changes the paradigm, and old entrants are updated to include new capabilities.
===2.4.2 A Taxon for Virtual Worlds===
Outside of the education and virtual reality streams, possibly the largest single family of virtual worlds is that developed for games. While not actually claiming to propose a taxonomy, Bartle (2003, pp. 38-61), whose pedigree is essentially from the gaming stream, proposes a set of attributes that can be used to classify virtual (game) worlds. Not surprisingly, the attributes are most relevant to multi-user game focussed virtual worlds, but they provide a workable superset of current thought on the matter and, with some adjustment, can be extended to more general examples of virtual worlds. He suggests that a virtual world can be categorised according to the following taxa:
#'''Appearance''': To a ‘newbie’ (Bartle’s term for a new user of a virtual world application) the distinction is whether the virtual world is a ‘text based’ MUD, ASCII, graphical 2D or graphical 3D, etc. To an ‘oldbie’ (as described by Bartle) this is only an interface issue and therefore not as important as the other listed categories.
#'''Genre''': Is the world fantasy, cyberpunk, horror, social, etc? That is, the plot or setting of the virtual world. This taxon is most helpful with purpose focussed virtual worlds. In the non-gaming or semi-gaming space occupied by some generalised social worlds, the virtual world is as much a platform on which other ‘sub-worlds’ can be based, and thus the genre of the virtual world can be all other genres. Examples of this might include PLATO and Second Life.
#'''Codebase''': Although not as important to the user, from whom it is hidden, this is an important aspect for the designer of a virtual world. The codebase defines the technical makeup of the world - reusable content and controls, scripting language, database structure, etc. This researcher suggests that the codebase is not a single taxon, but should perhaps be separated into multiple taxa. In its place one might propose the content management, asset management, game engine, environment application programming interface, AI, and scripting function library within the system as more relevant technical matters.
#'''Age''': How long the virtual world lasts is an important measure of its success. Generally, the longer a player (or user) remains interested, the longer the virtual world survives, which in turn attracts new users and adds to its player base.
#'''Player base''': How large is the player (or user) base of the virtual world? This measure varies depending upon what is counted: for example, the number of registered users, the number of avatars (a user can have more than one character in a virtual world, although in general not for simultaneous use), simultaneous users logged in, hours played per user, access over a period of time, number of active subscriptions, etc. In some worlds the meaningful measure of player base is in fact the number of owner occupied ‘acres’ of virtual land (as opposed to general users of the virtual world). The player base measures the current success of the virtual world, its popularity so to speak, which in turn lengthens the age of the virtual world. Given the number of ways a player base can be structured and measured, a single measure is open to both misinterpretation and reporting manipulation, and some measures (like subscribed users – where some subscriptions are paid and others free) may be completely erroneous when comparing one virtual world to the next.
#'''Degree to which they can be changed''': Virtual worlds vary in the degree to which a user can change or add to the content of the virtual world. Virtual worlds such as World of Warcraft (and most game based virtual environments) allow no change by the player, with all content created by the developers of the virtual world. Other virtual worlds such as Second Life, Active Worlds, TruePlay and PLATO rely on content created by the community. In the case of Second Life (for example) the entire virtual world is made from user created content, supported by building tools, import and export capabilities, out-of-world interfaces and communications capabilities, an extensive library of API functions and a scripting language. The degree to which a virtual world’s content can be changed by the user adds to the technical complexity of the codebase and shapes the user’s (and, in multi-user virtual worlds, other users’) experience of and within the virtual world.
#'''Degree of persistence''': Bartle defines persistence as the degree to which a world’s state remains intact if you shut down and restart the virtual world. He classifies persistence into ‘discrete’ or ‘continuous’ groups. At the extreme, a discrete virtual world would regenerate - described as a ‘Ground Hog’ world (named after the movie). Here all content and the location of the player would be reset to the start of play. In a continuous virtual world the content and locations are retained through a restart.<BR />Persistence also relates to what happens to the world when a user logs off: does the virtual world continue to evolve without the individual player – and if so, can the player’s state be affected while off line? A virtual world generally displays some level of persistence, and the term is generally used to distinguish whether a ‘virtual world’ is really a ‘world’ or in fact just a simple ‘Ground Hog’ environment (see Gehorsam, 2003). The ultimate level of persistence is that akin to the real world, which is constantly evolving and changing regardless of our existence within it.
With some modification and generalisation most of the taxa can be applied in the general case of gaming and non-gaming virtual worlds. To be applied outside of the narrow RPG (Role Playing Game) grouping, the classification system would benefit from some subdivision of elements.
We have already noted codebase as one such category. Codebase is such a wide group that it could be applied to every functional capability of the virtual world not covered by another taxon, and is thus of limited help in establishing a consistent framework for classification. For example, Castronova’s (2001) taxonomy recognises a grouping under marketplaces (implying commercial functionality), while both Kish (2007) and Cavazza (2007) recognise groupings covering Paraverses (although they use different terms). In Bartle’s taxa these might both be covered as distinguishing characteristics under codebase, yet the one relates to the ability to conduct real-world commercial transactions in the space, while the other addresses the merging of real-world content with virtual world content.
Persistence as framed by Bartle mixes up multiple discrete concepts – host state persistence, user state persistence, environmental evolution, and scenario persistence. This last item is generally typical of games (such as quest driven environments where, on restarting a ‘quest’, the user can rely on the sequence of events being a repetition of the sequence that occurred previously – effectively a ground-hog space within a larger persistent environment), and absolutely essential for simulators and learning systems where a user taking a course should be able to rely on the lesson replaying in a consistent and predictable way each time (unless variation is an intended part of the training, as in a military battlefield virtual world). In order to classify virtual worlds, recognising these attributes independently of each other would be more helpful than identifying the world as persistent or not persistent; moreover, the sub-features are not linearly related – i.e. one form of persistence does not imply the inclusion of another form of persistence (Purbrick & Greenhalgh, 2002).
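Treating persistence as a set of independent attributes rather than a single yes/no taxon can be expressed as a combinable set of flags. The following is a minimal sketch (the flag names are this researcher's illustrative labels, not an established classification scheme), showing how a quest-driven game can be classified with host and user state persistence plus ground-hog scenario persistence, while a training simulator might need only scenario persistence:

```python
from enum import Flag, auto

class Persistence(Flag):
    """Independent persistence attributes, per the argument above."""
    NONE = 0
    HOST_STATE = auto()     # world state survives a server shutdown/restart
    USER_STATE = auto()     # a player's state survives logging off
    ENVIRONMENTAL = auto()  # the world evolves while the player is absent
    SCENARIO = auto()       # quests/lessons replay predictably on restart

# A quest-driven MMO: persistent host and user state, with 'ground-hog'
# quests inside the larger persistent environment.
quest_mmo = Persistence.HOST_STATE | Persistence.USER_STATE | Persistence.SCENARIO

# A training simulator may require only predictable lesson replay.
simulator = Persistence.SCENARIO
```

Because the attributes combine freely, one form of persistence never implies another: `Persistence.SCENARIO in quest_mmo` holds while `Persistence.ENVIRONMENTAL in quest_mmo` does not, mirroring the non-linear relationship noted above.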
===2.4.3 Applied Taxonomies===
While Bartle proposes a reasonably extensive set of attributes (taxa) for classification, some authors have proposed simpler classification regimes, although all seem as yet to avoid claiming an actual taxonomy.
Kish (2007) recognised that with the appearance of the weakly defined ‘Web 2’ technologies, virtual worlds could be seen to encompass a wider range of social networking and world-imagining spaces. Kish’s classification groups virtual environments into five broad categories (Figure 3):
#'''MMORPGs''': Massively Multiplayer Online Role Playing Games. A category which includes text and graphical gaming environments sharing the common theme of role playing, containing an internal hierarchical, level based player grading system to determine expertise and implied seniority, and generally plot or quest driven and goal oriented. Typical examples might include World of Warcraft, Entropia Universe, Everquest, MUDs, etc.
#'''Metaverses''': Imagined public fantasy spaces, emphasising social interaction and creativity, and lacking a single plot or purpose for participation. Generally exhibiting a devolved structure without a single levelling system or clear environment imposed hierarchic seniority system[3]. Typical examples might include Habitat, Second Life, Active Worlds, Furcadia, etc.
#'''Paraverses''': Spaces that intersect with the real world, incorporating content from the real world and thus could be described as virtual extensions of the real world. This group potentially includes many of the Web 2 spaces that contain sufficient functionality to create in the minds of their users a ‘real’ virtual community as strongly present to the participant as their real world existence.
#'''Intraverses''': Spaces that are otherwise Metaverses or MMOLEs but private or closed to the broader public. Virtual reality environments could generally be seen to fall into this category, as could private/corporate implementations of public virtual world spaces. Typical examples might include Qwaq, Sun’s Wonderland, IBM’s Metaverse, etc.
#'''MMOLEs''': Massively Multi-user Online Learning Environments. Possibly the oldest class of virtual worlds, as it includes systems such as PLATO, and typified by educational environments supporting user social interaction. Primarily purpose driven (though not necessarily goal driven) – learning, training, idea exchange, simulation, etc. This space includes the dedicated training / teaching environments of PLATO and the planning / simulation management systems of SIMNET, Blackboard, Boston College’s Media Grid, etc.
[[image:Kish_Virtual_Geography_003.jpg]]
Figure 3. Virtual Geography (Kish, 2007)
Cavazza (2007) proposes that a virtual world should be open (public) and contain taxa supporting strong and generalised capabilities in each of the dimensions (Figure 4):
#Social networking
#Gaming
#Entertainment
#Business
[[image:Cavazza_Virtual_Universes_Landscape_004.jpg]]
Figure 4. Virtual Universes Landscape (Cavazza, 2007)
Consequently most of the virtual worlds identified by other authors are excluded from Cavazza’s definition of virtual worlds, but included under the broad category of ‘Virtual Universe’. To illustrate this idea Cavazza has classified a wide range of existing virtual environments:
#Social
#*2.5 & 3D Chats
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Virtual Worlds
#Game
#*MOG
#*Sports
#*MMORPG
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Adult Games
#*Virtual Worlds
#Entertainment
#*Virtual Sex
#*Virtual City Guides
#*2.5 & 3D Chats
#*Avatar Centric
#*Branded Universe
#*Virtual World Generators
#*Virtual Worlds
#Business
#*Serious Games
#*Virtual Marketplaces
#*Adult Games
#*Virtual World Generators
#*Virtual Worlds
Cavazza’s definition and classification system is extensive, and possibly the most comprehensive to date. While Kish’s classification tends to focus on functionality, Cavazza’s emphasises purpose. Nevertheless, there is significant crossover in their ideas. For example, both recognise the difference between games and social networking, and both accommodate the paraverses in a special category (Cavazza includes them in ‘Virtual City Guides’ among other groups). Cavazza’s analysis, however, lacks the accommodation of the education, training and simulation virtual spaces present in Kish’s categorisation, although it might be argued that these are covered in multiple categories including ‘Virtual World Generators’ (eg PLATO, VastPark) and Serious Games (training simulators).
==2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality==
Virtual Reality environments are generally a combination of user interface hardware (such as headsets and data gloves) and software. The availability of the (often costly or purpose built) user interface hardware has meant that the majority of these environments are either single user or very small scale multi-user environments (Jones & Hicks, 2004; Miller & Thorpe, 1995). A direct consequence of this is that Virtual Reality environments have tended to ignore the dimensions of user interaction, game play and collaboration in favour of the technology of immersion. This fact, possibly more than any other, has predisposed some authors to exclude virtual reality spaces from the domain of virtual worlds (Bartle, 2003; Yee, 2006).
While Bartle’s virtual world definition contributes part of the definition we have adopted for virtual worlds in this research, the researcher departs from the entirety of Bartle’s embodiment of virtual worlds as expanded in that work. Bartle believes that a virtual world has a meaning divergent from that of virtual reality, holding that “Virtual reality is primarily concerned with the mechanism by which human beings interact with computer simulations… [rather than] the nature of the simulations themselves” (2003, p. 3). To this extent Bartle’s definition specifically excludes virtual reality spaces from the definition of virtual worlds.
This researcher adopts a view consistent with some other writers in the field: excluding the body of work in virtual reality from the concept of a virtual world, by writing virtual reality spaces out of the definition, places the emphasis narrowly on the social and gaming dimensions of these worlds and away from the immersive experience. It thus excludes the vast body of research that predates, or has been conducted in parallel with, the development of gaming virtual worlds (Cosby, 1999; Heilig, 1955; Pimentel & Teixeira, 1994; Rheingold, 1992; Schroeder, 1997; Steuer, 1992; Sutherland, 1965; Walker, 1990; Woolley, 1994) and constrains the consideration of these environments in the education context to their collaborative and scripting capabilities.
Other authors have adopted definitions of the virtual world concept wider than that posited by Bartle, although in most cases still excluding some portion of the body of work that has contributed to the space. Dickey (2005, p. 439) implies an exclusion of 2D and non-visual environments in providing: “Three-dimensional virtual worlds are a networked desktop virtual reality in which users move and interact in simulated 3D spaces.” Similarly, McLellan (2004) presents 10 classifications of virtual reality, a single-user virtual world being classified as ‘through the window’ whereas a multi-user virtual world would be classified as ‘cyberspace’. Mazuryk and Gervautz (1996) make no distinction in the number of users in the virtual world but define a virtual world to be a ‘desktop VR (virtual reality)’ or a ‘Window on World (WoW)’ system. Biocca and Delaney (1995) define a virtual world to be a ‘window system’: a computer generated three-dimensional virtual world viewed either on a computer screen or with the assistance of a head mounted display.
This researcher’s view is that all of these definitions are correct, but incomplete and that a definition that allows the participation of all of these examples is the most useful and appropriate in the education context. To appreciate the reasoning behind this argument we must look at some of the history of the development of the technologies and concepts that have contributed to the current family of virtual worlds and the problems and purposes these stepping-stones intended to resolve or achieve.
Authors adopting Bartle’s view have generally also adopted the view that virtual reality is essentially a hardware interfacing technology, and hence that the environments managed in this space are of no consequence. The misconception that virtual reality is a collection of hardware (data gloves, head mounted displays, etc) neglects the very meaning of virtual reality, which seeks to evoke a feeling of immersion and presence within the virtual space. In the virtual reality research stream, using external hardware devices to enter a virtual world is only one method by which immersion and presence is achieved (Briggs, 1996; Steuer, 1992). No external device will ensure a user’s experience of immersion if the world they enter is an unconvincing generator of an alternative reality for the participant. Furthermore, if virtual reality is to be excluded from the scope of the definition of virtual worlds, then the existence of VR plug-and-play devices such as stereoscopic headsets, data gloves and haptic controls that are readily available for use with many mass market virtual worlds (that otherwise would fall within Bartle’s definition) – for example, the Vuzix iWear headset, the Evolution Motion Glove for the PS1, the Wii Remote for the Nintendo Wii, the MS Force Feedback controller for Flight Simulator, etc – would seem to contradict the proposed disconnect between the study of virtual worlds and virtual reality. Lastly, the exclusion of virtual reality environments from the definition of virtual worlds ignores the fact that many of the technologies and concepts utilised in the 3D virtual world space were contributed by the virtual reality research stream (as will become clear from the history presented in the following sections).
In the education context, virtual reality technologies (as expressed, for example, in simulators) are a critical and essential contribution to the pantheon of virtual (training) worlds (Bailenson et al., 2007; Dede, 2004). In this researcher’s view, virtual reality environments are a subset of virtual worlds, and the two are increasingly converging – if the space has not already converged in current virtual world examples such as America’s Army and Second Life, and massive multiplayer training environments like SIMNET (Lang, Maclntyre, & Zugaza, 2008; Lenoir, 2003; Zyda, 2005).
==2.6 Dimensioning Virtual Worlds==
===2.6.1 The Degree of Virtuality===
The degree to which a world is ‘virtual’ can be looked at as a sliding scale between physical and virtual. Milgram and Kishino (1994) present a taxonomy for mixed reality visual displays called a ‘reality-virtuality continuum’ (Figure 5). On the left hand side of the scale is the ‘real environment’, equivalent to the real or tangible world, while on the extreme right is the ‘virtual environment’, equivalent to an artificially generated world. The region between these two extremes is classified as ‘mixed reality’ (MR), made up of a combination of both real and virtual matter.[4]
[[image:Reality_Virtuality_Continuum_005.jpg]]
Figure 5. Reality-Virtuality Continuum: Representation Scale for Visual Display
(Milgram & Kishino, 1994)
Figure 6 illustrates an example of the use of the reality-virtuality continuum taken from the MagicBook Project (Billinghurst, Kato, & Poupyrev, 2001). On the left of the figure is a real book (ie. the real world environment); in the middle is the same book viewed through an Augmented Reality (AR) display, where figures appear like pop-up characters on top of the book (ie. mixed or augmented reality); while on the right is the same book viewed within a virtual environment, where the “reader” becomes a character within the book.
[[image:The_Magic_Project_006.jpg]]
Figure 6. The MagicBook Project: An Example Of The Full Reality-Virtuality Continuum
While the MagicBook project was conceived around the integration of physical (tangible) real world objects with digitally generated virtual world objects, when the real world objects are themselves digital or intangible – such as course materials comprising photographic images, text, or other digital content – the merging of the ‘Real World’ and the ‘Virtual World’ becomes less obvious. For example, real world authors Pamela Woodard and Wilbur Witt have published their works in the Second Life virtual world first or simultaneously with publication in the real world (Bell, 2006). The Second Life virtual world can integrate conventional HTML web page content directly into the virtual environment (Release Candidate, 2008). Content developers, and particularly trainers and presenters in Second Life, routinely import textures and slides and stream sound and video from outside the virtual world into the virtual space.
In the context of Milgram and Kishino’s reality-virtuality continuum, this research focuses on the right hand end of the scale i.e. using a desktop display of a virtual world in which all content is delivered virtually. In contrast to the MagicBook project this research considers (in the education context) the affordances from two virtualisation strategies – a direct reproduction of the real world delivery into the virtual (in part, by importing the non virtual world generated materials into the virtual world), and a transformation of the real world material into virtual material (in part, by recasting the non virtual world materials into virtually generated form).
===2.6.2 The Degree of Immersion and Presence===
====2.6.2.1 Introduction====
Virtual reality literature often separates a user’s experience of a virtual environment into physical and psychological components (Benford, Greenhalgh, Reynard, Brown, & Koleva, 1998; Biocca & Delaney, 1995; Sheridan, 1992; Mal Slater, 1999; Mal Slater & Wilbur, 1997; Steuer, 1992). The psychological components include interaction (or connectedness) and belief – the contribution of the participant, or their willingness to believe in the reality of what they would otherwise know to be unreal – while the physical components are aided by the external mechanical and functional capabilities of the system.
In exploring the factors determining the effectiveness of virtual reality environments, Burdea and Coiffet (2003) determined that the aim of virtual reality is to achieve a trio of ‘Immersion, Interaction and Imagination’ (Figure 7), each of which holds equal significance for the user’s experience of virtual reality systems. A virtual reality system seeks to engage the user fully in the virtual space. They proposed that excluding any one of these features relegated a user to passive participation, and ultimately detracted from the perceived ‘reality’ of the experience.
[[image:Immersion_Interaction_Imagination_007.jpg]]
Figure 7. The Three I's of Virtual Reality
Steuer (1992) defined user involvement as a combination of the human experience, which in turn is dependent on the technology (Figure 8). Telepresence (or presence) is the human sensation of ‘being there’ in a virtual environment[5], seen as influenced in part by the technology in terms of the vividness (richness, realism) and interactivity (responsiveness) of the environment.
[[image:Steuer_Variables_Influencing_Telepresence_008.jpg]]
Figure 8. Technological Variables Influencing Telepresence (Steuer, 1992)
Slater and Wilbur (1997; 1999) revisited these concepts in later work, defining a user’s experience in terms of immersion and presence. Immersion is seen as an objective measure of the ‘systems immersion’ technology, such as field of view, quality of display, etc., while presence is seen as a subjective measure: a psychological sensation of ‘being there’. From here on we will use the terms immersion and presence as defined by Slater and Wilbur.
====2.6.2.2 Immersion====
Benford et al. (1998) propose classifications of artificiality and transportation for collaborative environments (Figure 9) that extend Milgram and Kishino’s reality-virtuality continuum. Artificiality (physical-synthetic) is equivalent to the reality-virtuality continuum. Transportation (local-remote) is the degree to which a participant is removed from their local space to operate in a remote space, which they define to be similar to the concept of immersion. For example, CVEs (Collaborative Virtual Environments[6]) are placed on a scale of partial to remote transportation. A fully immersive CVE would represent the ultimate level of transportation: a virtual reality system using devices such as HMDs, data gloves, and tactile and aural equipment that allowed for no outside distraction, so that the participant would operate completely within the virtual environment and be fully remote from their local environment[7]. A desktop CVE, by contrast, is only partially immersive, as one’s local surroundings form a part of the virtual environment, eg a field of view that allows for head turning away from the virtual space (Sheridan, 1992). In the context of Benford et al.’s transportation scale, this research is conducted using desktop CVEs and is therefore only partially immersive.
[[image:Artificiality_Transportation_as_SS_Metrics_009.jpg]]
Figure 9. Shared Space Technology According to Artificiality and Transportation
====2.6.2.3 Presence====
Research in online gaming virtual worlds has tended to focus on the human experience (presence) of virtual worlds rather than the ‘systems immersion’ aspects, while studies of virtual reality environments have tended to consider both. This is possibly a function of the common standard interface for massively multiplayer game environments, which has traditionally been the desktop computer equipped with a mouse and keyboard. Although more advanced input devices (head mounted displays, 3D mice, etc) have been available to the mass market for many years, they are not yet widely utilised.
The degree of presence is often linked to the effectiveness of a virtual environment (Witmer & Singer, 1998), which, due to its subjective nature, is possibly the most difficult aspect to comprehend and therefore to measure (Mal Slater & Usoh, 1993). Hence, this area has been widely researched, with various explanations as to what constitutes presence in a virtual environment (Schuemie, Straaten, Krijn, & Mast, 2001). The sense of ‘being there’ in the environment is subjective: as Slater and Usoh (1993; 1994) describe it, presence is similar to a person’s ‘willingness to suspend disbelief’, a concept derived from the British poet and literary critic Samuel Coleridge (1772-1834), who in his autobiography (1817) describes the phenomenon whereby a person becomes so engaged in a narrative that they are willing to believe an event is true, if even for only a brief moment. Although suspension of disbelief is today most often linked with media such as film and literature, virtual worlds (especially Role Playing Game (RPG) worlds) provide many of the same traits, in which the user can be thought of as an actor within the virtual world who forms a part of the storyline.
A number of presence classification strategies have been proposed by various authors. We will consider:
#Schroeder - focussing on the importance of social interaction
#Bartle – focussing on the degree of commitment in the environment
Schroeder (2006) presents presence in a continuum of shared virtual environments (SVE) within a three-dimensional model (Figure 10). Presence (x), copresence (y) and connected presence (z) can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. Connected presence can be thought of as the extent to which a relationship is mediated when presence and copresence exist. Environments are mapped between a physical face-to-face relationship (0,0,0) and an entirely immersive environment such as a networked Cave (1,1,1). At (0,0,0), face-to-face, there is no presence (and thus no copresence) because no meeting takes place in a virtual environment, whereas in a networked Cave (1,1,1) the entire relationship (and environment) is virtual, with affordances that allow for high connected presence.
[[image:Presence_Copresence_Connected-Presence_010.jpg]]
Figure 10. Presence, Copresence, and Connected Presence in Different Media for Being There Together
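Schroeder's three-dimensional model can be illustrated with a minimal sketch (not from Schroeder's paper; the intermediate coordinates are purely illustrative), placing example environments in the (presence, copresence, connected presence) space, where each axis runs from 0 (none) to 1 (full):

```python
# A minimal sketch of Schroeder's (2006) presence space. Only the two
# anchor points (0,0,0) and (1,1,1) come from the text; the desktop SVE
# and online game coordinates are illustrative assumptions.
from math import dist

# (presence, copresence, connected presence)
environments = {
    "face-to-face": (0.0, 0.0, 0.0),    # no virtual mediation at all
    "networked Cave": (1.0, 1.0, 1.0),  # entirely immersive shared VE
    "desktop SVE": (0.5, 0.6, 0.7),     # e.g. Active Worlds (illustrative)
    "online game": (0.6, 0.5, 0.4),     # e.g. Quake (illustrative)
}

# Distance of each environment from the unmediated face-to-face baseline:
for name, coords in environments.items():
    print(f"{name}: distance from (0,0,0) = {dist(coords, (0, 0, 0)):.2f}")
```

A networked Cave sits at the maximum distance (√3 ≈ 1.73) from face-to-face interaction, which is the sense in which the model treats it as the fully mediated extreme.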
Of interest in Schroeder’s model is the comparison of desktop SVEs and online computer games. The example given in the model for a desktop SVE is Active Worlds, a massively multiplayer online (MMO) social virtual world, and the example provided in his paper for an online game is Quake, which at the time allowed up to 16 players to share a common space. Both are virtual worlds, use text chat and sound, and use avatars to project the participant into the virtual world (although Quake takes a first person view exclusively). For the purpose of the analysis, the main differences were the number of simultaneous players sharing the common virtual space and the imposition of clear game-driven objectives in Quake, against the absence of those same objectives in Active Worlds. Yet Active Worlds was seen as providing the higher level of connected presence. Why? The distinction was seen to lie in the concept of the ‘game’ rather than the number of players, when compared with the other SVEs presented in the model above. Active Worlds is a social world in which no plot is provided to measure the success or failure of an individual, unlike Quake, where the measure of success is clear and the entire activity and function of the environment is the relentless pursuit of that individual success. It was therefore deduced (at least as analysed by Schroeder) that a social world provides more connected presence than an individually focussed, plot-driven gaming virtual world.
Schroeder’s observation of higher connected presence in social virtual worlds fits with Heeter’s (1992; 2003) definition of social presence, in which she defines presence in terms of individual presence, social presence and environmental presence. The presence of an individual is increased when social relationships are formed, based upon the social component of perceptual stimuli. When an environment or situation is focused on the relationship (rather than on killing a monster, as in RPGs) a higher social presence will be achieved.[8]
Bartle (2003, p. 42) identifies a system of levels of immersion (which in this paper we have defined as presence[9]) based upon a linear scale of the Player (the real person), Avatar (the digital puppet), Character (representation in the world, e.g. character name, role, etc) and Persona (your identity in the virtual world, where the player is the Character and is in the virtual world). Persona is similar to the concept of presence: if your character is killed ‘you feel like you have died’; there is no distinction between the character and the player, they are one, the Persona. Bartle believes that the avatar and character are just steps along the way to persona. Persona is achieved when a person ‘stops playing the world and starts living in the virtual world’.
==2.7 Influences on Virtual Worlds from Art and Literature==
===2.7.1 Introduction===
The concept of a virtual world is by no means unique to computing. The thought of exploring an imaginary realm has captivated people’s imagination throughout time.
“If we define that a virtual world is a place described by words and/or projected through pictures, which creates a space in the imagination real enough that you can feel you are inside of it, then the painted caves of our ancestors, shadow puppetry, the 17th-century Lanterna Magica, a good book, play or movie are all gateways to virtual worlds. Humanity’s most powerful new tool, the digital computer, was also destined to become a purveyor of virtual worlds, but with a new twist: The computer enables the virtual world to be both inhabited and co-created by people participating from different physical locations.”(Damer, 2007, p. 2)
At least with respect to the massively multiplayer online virtual worlds/role playing games (MMOVW, or MMORPG), all of today’s examples can trace their paradigms to literature. Some, such as Eve, Entropia Universe and World of Warcraft, are amalgams of a body of works and ideas, while others, such as MUD1 (The Phoenix on the Sword (Howard, 1932)) and Second Life (Snow Crash (Stephenson, 1992)), are directly inspired by specific literary works.
Consequently, to properly understand the ‘state of the art’ represented by today’s multi-user, connected virtual worlds and the gaming, social and business rules they have adopted to govern them, it is essential to consider the context from which they have been derived, and the art that has influenced their creators. While some operational paradigms in virtual worlds are technology constraints, functional capability constraints can be as much a condition of the imagined world being implemented as a real constraint of the technology of the day. To appreciate this, one need only compare the camera controls of Project Entropia with those of Second Life – two environments of comparable age – or the commercial capabilities of these two environments with those of World of Warcraft. In each case the differences and apparent restrictions are game design decisions rather than technology constraints.
===2.7.2 Virtual Worlds of the Arts===
James Pearson (2002) believes that, from as early as 30,000 years ago in the Chauvet Cave in France, shamans used cave art as a means to document their experiences of travel to the dream world. Packer and Jordan (2002) also draw this similarity in their book on virtual reality, describing how the Cro-Magnon people of around 15,000 BC in the Lascaux caves of south-western France used cave art (Figure 11), together with candles and the acid aroma of animal fat, to create a magical theatre of the senses.
[[image:Cave_Art_BC_011.jpg]]
Figure 11. The caves of Lascaux: Cave Art 15,000 BC
The German composer Richard Wagner’s (1813-1883) (Figure 12) concept of Gesamtkunstwerk (total artwork) has also been cited as an early pioneering step towards the concepts of immersion and presence in virtual worlds (Grau, 1999; Klich, 2007; Packer & Jordan, 2002). Wagner believed that “Artistic Man can only fully content himself by uniting every branch of Art into the common Artwork”, a synergy that includes not only the performance but all that surrounds it, so that mankind “...forgets the confines of the auditorium, and lives and breathes now only in the artwork which seems to it as Life itself, and on the stage which seems the wide expanse of the whole World” (Wagner, 1849, pp. 184 & 186).
[[image:Wagner_Gesamtkunstwerk_012.jpg]]
Figure 12. Richard Wagner's Gesamtkunstwerk (Total Artwork)
===2.7.3 Virtual Worlds of Fiction and Fantasy===
There are numerous examples of virtual worlds that have been explored through fiction and fantasy. Each has contributed to the illusion of virtual worlds becoming a reality (Bartle, 2003; Chesher, 1994).
In Lewis Carroll’s novel Alice's Adventures in Wonderland (1865), Alice falls down a rabbit hole to explore a fantasy world inhabited by peculiar, anthropomorphic creatures. Similarly, in Carroll’s follow-on novel, Through the Looking Glass (1871), Alice explores a world behind a mirror. Hattori (1991) saw Lewis Carroll’s novels as a paradigm for modern virtual reality systems (Figure 13), blending physical space with fantasy in a rapidly changing environment. To this extent, Carroll’s works provide a perfect analogy for the design and development of virtual worlds (Rosenblum, 1995; West Virginia University, 2008). An explorative virtual world based upon Alice’s Adventures in Wonderland was realised in the children’s computer game The Manhole (1988-2007) (Wikipedia, 2008a).
[[image:Alice_via_Caroll_and_Hattori_013.jpg]]
Figure 13. 'Through the Looking Glass' Carroll (1871) & 'The World of Virtual Reality' Hattori (1991)
Within the fantasy literary genre, a key influence has been the works of J R R Tolkien, starting with The Hobbit (1937) and its sequel The Lord of the Rings (1954, 1955) (Figure 14) – an adventure fantasy that takes place in an imaginary world called Middle-Earth, inhabited by races such as Hobbits, Wizards, Elves, Orcs, Dwarves and Trolls. Tolkien’s literary style was so popular that the Oxford dictionary termed it ‘tolkienesque’[10].
[[image:JRR_Tolkein_Book_Covers_014.jpg]]
Figure 14. The Hobbit & The Lord of the Rings by J. R. R. Tolkien (1937, 1954, 1955)
With respect to today’s virtual worlds, Tolkien’s contribution has not merely been the construction of a raft of characters, racial groups and social concepts for role playing game inhabitants and interaction rules, but, most importantly, his deep backgrounding of the imagined world. He did not merely describe his characters within the context and flow of the story line; he extended beyond what was needed to tell a story into what was needed to make us believe in the real existence of his virtual world, providing the reader with immaculate detail and description to immerse them in Middle-Earth. Both books contained land maps (Figure 14), and the final volume of The Lord of the Rings (released in 3 parts) contained appendices describing chronologies, histories, family trees, languages and translations, and a calendar and dating system. Being a professor at Leeds and Oxford universities, he approached his work more like an academic anthropological study of an imagined world than a novelist (Macmillan, 2008).
In so doing Tolkien demonstrated a fundamental understanding of a core strategy in establishing convincing presence – the necessity for a consistent, credible back story underpinning the virtual world. It is an early example of the depth of design that many later virtual worlds would exhibit in order to create a convincing sense of presence for the participant (Bartle, 2003; Schmidt, Kinzer, & Greenbaum, 2007).
Two virtual worlds that have been translated from Tolkien’s literature are the online virtual world ‘Lord of the Rings Online’ (2007) and PLATO’s MUD virtual world ‘Mines of Moria’ (1974).
More recently, literature has turned to imagining realities in which computational virtual worlds are a fundamental component of the plot. It is from this group that many of the terms now used to describe aspects and elements of virtual worlds are derived or were popularised, such as ‘avatar’, ‘metaverse’, ‘cyberspace’, etc. Recent examples of novels in which a computational virtual world is central to the plot are True Names (Vinge, 1981), Neuromancer (Gibson, 1984) and Snow Crash (Stephenson, 1992) (Figure 15).
[[image:Recent_VR_Literature_Covers_015.jpg]]
Figure 15. Recent Literature: True Names (Vinge, 1981), Neuromancer (Gibson, 1984), Snow Crash (Stephenson, 1992)
'''Vernor Vinge’s True Names''' is not as well known as other novels in this genre, but it was the first to present the concept of a person entering a computational virtual world and meeting other people in ‘the other plane’ (Kelly, 1995). It was also unique in bringing the concept of anonymity to the digital world: one’s digital persona (handle) is different from one’s real self, and there is a necessity to hide one’s real identity – one’s true name (hence the title). It was translated to the computational virtual world in the form of ‘Habitat’ – the first graphical social networking virtual world (Farmer, 1992).
'''William Gibson’s Neuromancer''', a true cyberpunk[11] novel, is possibly the most widely quoted in the virtual environment space (Chesher, 1994). In this novel Gibson coined the term cyberspace, with the concept of a viable parallel online world capable of critically impacting events and commerce in the real world.
'''Neal Stephenson's Snow Crash''' is where the term Metaverse was coined. The Metaverse is a planet-sized city with one continuous street 65,536 (2<sup>16</sup>) kilometres in length, up and down which millions of people (known as avatars) travel daily in search of entertainment, trade or social interaction. Although similar, in one sense, to Neuromancer, it came from a different perspective: people actually lived in the Metaverse, not as cyberpunks getting up to mischief but as everyday people living mainstream lives in the virtual world. In this world real commerce was conducted, and virtual artefacts were bought and sold with real world consequences – a vision since realised in the development of the virtual world Second Life.
Hollywood has also contributed to the fantasy of virtual worlds becoming reality. Films such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and The Matrix (Wachowski & Wachowski, 1999) (Figure 16), to name just a few, gave us visuals of virtual worlds that the books could only describe, and in some cases explored the haptic interfaces now being realised (Chesher, 1994).
[[image:VW_Films_Tron_LawnmowerMan_Matrix_016.jpg]]
Figure 16. Hollywood Films
Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992), The Matrix (Wachowski & Wachowski, 1999)
At the time of their release, the novels and movies discussed above may have seemed futuristic and their concepts unobtainable, but today we are much closer (if not already past them) thanks to advances in networking, computational processing power and our understanding of the sociology of virtual environments. Perhaps a ‘jack-in’ device that stimulates our nervous system to travel into cyberspace (Neuromancer, Gibson, 1984) is still a little way off (and may be too intrusive for some), and smelling odours or feeling textures within a virtual world may never be quite the same as the real life experience, but much that once seemed unimaginable in these works has become reality today. With technological advances and the rapid adoption of internet enabled online virtual worlds, many of these concepts are less science fiction and more science fact than they once were.
==2.8 The History of Computational Virtual Worlds==
===2.8.1 Introduction===
In a lecture delivered by Ivan Sutherland in 1965 the first steps were made to combine computer design, construction, navigation and habitation of software generated virtual worlds (Packer & Jordan, 2002). Here Sutherland laid down a vision for the development of virtual worlds, as paraphrased by Brooks (1999, p. 16):
<blockquote>
“Don’t think of that thing as a screen, think of it as a window, a window through which one looks into a virtual world. The challenge to computer graphics is to make that virtual world look real, sound real, move and respond to interaction in real-time and even feel real.”
</blockquote>
The new-born medium of the graphical, digital virtual world experienced a “Cambrian Explosion” of diversity in the 1980s and ‘90s, with offspring species of many genres: first-person shooters, fantasy role-playing games, simulators, shared board and game tables, and social virtual worlds. (Damer, 2007)
The massively multiplayer online virtual worlds of today, with their world-wide user bases, are essentially a consequence of the mass adoption of the internet, which commenced in the early 1990s. Since the internet first achieved general acceptance, these worlds have advanced substantially in technical capabilities, graphics and number of subscribers (Figure 17) (Woodcock, 2008). See Appendix B: MMOG Analysis for a break-down of the MMOGs contained in this graph.
[[image:MMOVW_Growth_Rate_017.jpg]]
Figure 17. Massive Multiplayer Online Virtual World Growth Chart 98-2008
The virtual worlds of today (such as World of Warcraft, Entropia Universe, America’s Army, and Second Life, etc) represent a convergence of several disparate computational, technical and social origins and drivers. Current virtual worlds combine 3D visualisation, game theory, text messaging, animations, context and text sensitive gesturing, natural language processing, spatial voice & audio, artificial intelligence, agency theory, physics, connectedness, persistence, business strategy, sensory hardware and haptic interfaces, telecommunications, 2D image processing, video chroma-keying, social networking and many other influences to achieve their sense of immersion and presence. In this section we explore some of the milestones along these convergent paths.
As many of the influences that have contributed to our latest virtual world are derived from research streams that were concurrently pursued over more than 50 years, we shall look at the history of virtual worlds in six streams:
#Hardware based user interfaces and virtual reality environments
#Early graphical computer games
#Text and Text+ based Virtual Worlds
#2.5 and 3D graphical multi-player virtual worlds, broken down into:
#: a. MMORPGs
#: b. Social Virtual Worlds
#Simulation and Training Worlds
It should be noted that, while we will be considering the history in these streams, some virtual worlds necessarily exist in more than one stream. The grouping is that of the researcher, based on an extensive assessment of the literature, rather than the view of any one author.
===2.8.2 Hardware Based User Interfaces and Virtual Reality Systems===
====2.8.2.1 Introduction====
These two areas are grouped together, not because Virtual Reality (VR) Systems are a hardware solution, but rather because the work done in virtual reality worlds has generally aimed for extremely high levels of both immersion and presence and has therefore generally (although not always) been coupled with hardware in the form of purpose built user interfaces, designed to assist the sense of immersion such as headsets, or data gloves, etc.
The importance of the progress in VR systems to virtual worlds is that they have contributed or assisted much of the fundamental graphical rendering technologies, 3D animations studies and spatial awareness research and conceptualised the immersive aspects of virtual worlds.
====2.8.2.2 Sensorama====
One of the earliest inventions in the genre of virtual world simulators was developed by the cinematographer Morton Heilig. Inspired by Fred Waller’s work with Cinerama[12], Heilig presented a paper in 1955, ‘The Cinema of the Future’ (reprinted in Packer & Jordan, 2002). In an extension of Wagner’s (1849) Gesamtkunstwerk (total artwork) concept (Holmberg, 2003), Heilig believed that the logical extension of cinema was to provide the audience with a first person experience of film using all their senses – “Open your eyes, listen, smell, and feel—sense the world in all its magnificent colors, depth, sounds, odors, and textures—this is the cinema of the future!” (Packer & Jordan, 2002, p. 246)
[[image:Morton_Heilig_Sensorama_Simulator_018.jpg]]
Figure 18. Morton Heilig, Sensorama Simulator, U.S. Patent #3050870, 1962
Heilig developed and patented the Sensorama Simulator (Figure 18) in 1962. The Sensorama was a single person simulator that offered the viewer a multi-sensory, fully immersive theatre. The viewer sat to watch a short three-dimensional stereoscopic movie that included stereo sound, an odour generator, force feedback handle bars, chair motion and wind on the viewer’s face (Rheingold, 1992). Heilig believed that the Sensorama Simulator could be the next generation of theatre, placed in hotels and lobbies or any small space that could fit his miniature theatre (Heilig, 1955, p. 345).
Heilig also recognised that the Sensorama Simulator offered training and learning potential for educational and industrial institutions (Rheingold, 1992, p. 58), but unfortunately the Sensorama Simulator never took off; it arrived at “a time when the business community couldn’t figure out what to do with it” (Laurel, 1991, p. 52). The outcome might have been different a decade later, when Pong kicked off the arcade game industry and education, industry and government saw great potential in investing in virtual world technology, as they did with the Head Mounted Display (HMD).
====2.8.2.3 Head-Mounted Display====
In 1968 Ivan Sutherland presented the first computerised graphical HMD (Figure 19) (Sutherland, 1968)[13]. The HMD had a cathode ray tube (CRT) for each eye, presenting a simple three-dimensional wire-frame view of a room with motion tracking as the viewer moved their head. It became known as ‘The Sword of Damocles’ after the Greek legend of a man placed in a precarious position of luxury with a sword suspended above his head (Oxford Dictionary, 1989): similarly, the HMD had a computer suspended above the user’s head, attached by a mechanical arm (Figure 19, right) (Carlson, 2003).
[[image:HUD_The_Sword_of_Damocles_019.jpg]]
Figure 19. Head Mounted Display first called The Sword of Damocles (Sutherland,1968)
The HMD was a significant milestone in the development of virtual reality technology, which has since been used in a variety of applications in virtual worlds. It holds advantages over a traditional computer monitor, such as total head and body movement, uninterrupted viewing in fully immersive HMDs, and simultaneous viewing of real world and virtual world artefacts in ‘see-through’ HMDs, sometimes called Augmented Reality Displays (Rolland & Hua, 2005).
Today’s HMDs are more compact than Sutherland’s 1960s prototype (Figure 20). The figure shows, on the left, a see-through HMD used for mixed reality environments similar to that designed by Sutherland and, on the right, an immersive HMD compatible with several online and gaming virtual worlds.
[[image:HUD_See_Through_and_Immersive_020.jpg]]
Figure 20. Today's Head Mounted Displays - Left: See-Through HMD - Right: Immersive HMD
===2.8.3 Early Graphical Computer Games===
Computer games have had a large influence on the evolution of virtual worlds, both in the development and in the use of the technology. The contributions of games include computational game theory, 2D and 3D graphics, social modelling, simulation, strategies for achieving presence, artificial intelligence, computational game physics and, possibly most significantly, the delivery of a massive consumer market to fund and drive the investment needed for innovation and technology improvement. By far the majority of today’s online virtual worlds were conceived and/or delivered as games; they have subsequently evolved into general business or training platforms, sometimes referred to as Serious Games (Annetta, Murray, Laird, Bohr, & Park, 2006).
Early computer games can be traced to a few innovative applications (Figure 21):
*'''Tennis for Two''': In 1958 William Higinbotham developed the first electronic game simulator, using an oscilloscope display to present a two-dimensional side view of a tennis court. It was a two player game in which each user controlled the direction of the bouncing ball by turning a knob on a hand held device. Originally developed by Higinbotham to occupy visitors to Brookhaven National Laboratory during open days, the game had queues of people waiting to play (Brookhaven National Laboratory, n.d.). Tennis for Two introduced the concepts of a shared multi-player electronic game experience, a rule based environment managed by a machine, and an electronic space where the actions of one player in the shared space affected the experience of another. The attention the game attracted demonstrated the willingness of participants to accept the visual and sensory limitations of a machine managed game environment and immerse themselves in the experience.
*'''Spacewar!''': The idea originated in 1961 with Steve Russell at the Massachusetts Institute of Technology (MIT), and by 1962 the game was released with assistance from his colleagues. Spacewar! was the first official release of a two-dimensional computer game.[14] It was a two player game in which each player piloted a spaceship, firing bullets at the other while trying to avoid being pulled into the sun at the centre. Developed originally to demonstrate the power of the new PDP-1 computer, the game was a good demonstration of both the graphics capabilities and the processing power of the machine (Computer History Museum, n.d.; Markowitz, 2000). Later, in 1969, Rick Blomme modified the game to run on PLATO, which made it the first game to be networked (Koster, 2002; Mulligan, 2002). While Tennis for Two was the first multiplayer electronic game, Spacewar! was the first computer based multiplayer game. It thus contributed the same key concepts and ideas as Tennis for Two, only for the first time in a computer managed environment.
*'''Maze War''': In 1973-1974 Steve Colley developed the first three-dimensional ‘first person shooter’ (FPS) game, Maze War, at NASA Ames Research Center. A player would navigate around a maze searching for other players to shoot. As seen below (top right), the player had a first person view (the eyeball in this picture is the other player). Placing the player ‘in-world’ as a part of the game – the ‘first person’ perspective – is a significant concept of virtual world games. Maze War also provided other innovations now common to virtual worlds, such as instant messaging, levelling and non player robot characters (Damer, 2007). The game, which started as a two player game, was eventually connected to ARPANET (the forerunner of our current internet network technology), allowing several users from remote locations to play and interact (Colley, n.d.; Damer, 2004). Maze War can therefore lay claim to being a progenitor of virtual worlds, but not an actual virtual world, because of its lack of persistence.
[[image:Early_Computer_Games_1958_To_1974_021.jpg]]
Figure 21. Early Computer Games 1958 - 1974
*'''DOOM (1993) (II, 1994)''', a 3D FPS game, was influential on both a conceptual and a technical level (Friedl, 2002; Mulligan, 2000). In DOOM the concept of Maze War was re-implemented in a much more graphically rich 3D environment. Although primarily a single player game, the key innovation of relevance was the method used to manage the rendering of the 3D space, allowing multiple non-player characters to participate in the 3D environment with the player. The strategy adopted was essentially to divide the world into many small rooms surrounded on all sides by walls (essentially a cave system); by rendering only a single room at a time, the entire resources of the computer could be devoted to a known, confined rendering space, thus achieving the illusion of a highly detailed rendering with the limited computational resources available on the PCs of the day. Although higher quality 3D rendered games were available some seven years earlier on Amiga computers from 1986 (including some utilising real-time ray tracing technology), these relied on dedicated proprietary games-architected graphics hardware and did not provide a 3D space management paradigm that could be easily translated to the future demands of online 3D games. The DOOM model could, precisely because it was architected for the graphically and processor challenged generalised home PCs of the day rather than proprietary games machines such as the Amiga. The DOOM games engine was utilised in many subsequent games and later formed the basis of the model adopted for the online game Quake (Petrich, n.d.; Wikipedia Doom, 2008).
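The room-at-a-time strategy described above can be sketched in a few lines. This is a deliberately minimal illustration of the idea (not the actual DOOM engine, whose renderer is far more sophisticated): the world is divided into rooms, and only the room the player currently occupies contributes to the draw list, so all resources go to one confined space at a time.

```python
# A minimal sketch of region-confined rendering: the room names and
# contents are hypothetical, chosen only to illustrate the idea.
world = {
    "room_a": {"walls": 4, "entities": ["imp", "medkit"]},
    "room_b": {"walls": 6, "entities": ["cacodemon"]},
}

def render(world, player_room):
    """Build the draw list for the single active region only."""
    room = world[player_room]
    # Everything outside player_room is ignored entirely, which is
    # what lets limited hardware produce a detailed scene.
    return [f"wall x{room['walls']}"] + room["entities"]

print(render(world, "room_a"))
```

However rich the whole world is, the cost of a frame depends only on the contents of one room, which is the essence of the scaling trick.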
Around the time of DOOM the game industry realised the importance of connecting people together for online gaming. Seeing the opportunity, developers began adding modem and LAN play, and later TCP/IP functionality, to their games, allowing both single and multiplayer connectivity. Early games allowed up to 4 players, but today’s games can have up to 64 players in a single game session (Quake Wars[15]). Some of the better known brand names included:
*'''Quake''' (1996, a multiplayer extension of DOOM) saw over 80,000 people connected to 10,000+ simultaneous game sessions (Mulligan, 2000).
*'''Warcraft''' (1994) (II, 1995) eventually became the basis of the largest MMORPG today, World of Warcraft (2004), which now has over 11 million subscribed users (Blizzard Entertainment Inc, 2008).
===2.8.4 Text Based Virtual Worlds===
====2.8.4.1 Text Virtual Worlds: MUDs====
In 1978 the first MUD (Multi User Dungeon) outside of the PLATO system (discussed under Training and Simulators) was created by Roy Trubshaw, a Computer Science undergraduate at Essex University in England (shortly afterwards joined by Richard Bartle). A text based virtual world, coined a MUD by Bartle, it was based upon Robert E Howard’s (1932) fictional tale ‘The Phoenix on the Sword’. MUD1[16] was an adventure role playing game, with game levelling and chat rooms, which allowed up to 32 players to connect simultaneously over a remote connection (Figure 22) (Bartle, 2003).
[[image:Bartle_The_First_MUD_022.jpg]]
Figure 22. The First MUD: Roy Trubshaw and Richard Bartle (1978)
Early in the game’s history, Essex University, on whose computers the game was hosted, became a part of ARPANET (the forerunner of the internet), and soon after MUD was distributed through that network and played at universities throughout the world. Some of these institutions were also open for public access. Although copyrighted, many variations of MUD1 were made and distributed freely, from what Bartle (2003) describes as either player inspiration or pure frustration with the 32 player limitation, which made it impossible to play when the dial-in lines were fully allocated.
Keegan (1997) identifies two main classifications of MUDs developed during this time (Figure 23) – the Essex MUDs (Trubshaw and Bartle’s) and Scepter of Goth (1978). Unfortunately Scepter died an early death: the game was sold and soon afterwards passed to the creditors when the purchasing company ran out of money (Bartle, 2003). Most MUDs were therefore based upon the ideas and technical structure of Trubshaw and Bartle’s MUD (Bartle, 2003; Keegan, 1997).
[[image:Basic_MUD_Tree_Structure_023.jpg]]
Figure 23. Basic Tree Structure for MUD classification
MUD1 introduced a number of concepts retained by most of today’s virtual worlds. Among which are:
*The role and effectiveness of text based narrative and text communication that contributed to, rather than detracted from, the sense of presence.
*Persistence in game play.
*Shared game space and cooperative (team based) activity.
*Non-player artificial intelligences, called AIs (or non player characters), as part of the experience.
*Region based environment management.
*Role-playing as a central game theme.
*Characters and avatars (albeit text based in the early MUDs).
*Game defined goals but player implemented plots.
Region based environment management is a computational aid that warrants particular attention. It was also used by the DOOM 3D graphics engine to manage multi-user environments, allowing the computer to render the shared space one discrete region at a time. In DOOM this was a room, in MUD1 it was a cave, and in more recent virtual worlds it may be as much as a 65,536 sqm area (Second Life). This strategy provides a method of scaling virtual worlds to many regions by distributing the region management across many discrete servers, but imposes practical limits on the number of players that can be present in any given region at an instant in time (Hu & Liao, 2004).
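The scaling pattern described above can be sketched as follows. This is a minimal illustration under stated assumptions, not any particular world's implementation: region and server names are hypothetical, regions are distributed round-robin across servers, and each region enforces a hard cap on simultaneous players (the practical limit the text refers to).

```python
# A minimal sketch of region-based environment management.
# All names and the capacity figure are illustrative assumptions.
class Region:
    def __init__(self, name, capacity=64):
        self.name, self.capacity, self.players = name, capacity, set()

    def enter(self, player):
        """Admit a player unless the region is at its practical limit."""
        if len(self.players) >= self.capacity:
            return False  # region full: player must wait or go elsewhere
        self.players.add(player)
        return True

def assign_to_servers(regions, n_servers):
    """Distribute region management round-robin across discrete servers."""
    return {r.name: f"server-{i % n_servers}" for i, r in enumerate(regions)}

regions = [Region(f"region-{i}", capacity=2) for i in range(4)]
print(assign_to_servers(regions, 2))
```

Each server only simulates its own regions, so the world scales by adding servers, while the per-region cap keeps any single shared space computationally tractable.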
MUD1 had a significant impact on virtual world design and development, dominating the online game space until the mid 1990s; MUD1 is therefore often marked as the beginning of the first generation of online virtual worlds (Bartle, 2003). MUD1 can still be played online today at british-legends.com (CompuServe, 2007).
====2.8.4.2 ASCII Virtual Worlds====
In the early 1980s pseudo-graphical interfaces were added to some MUDs in the form of ASCII virtual worlds. ASCII (American Standard Code for Information Interchange) is the most widely adopted character encoding on western computer systems. ASCII virtual worlds provided a pseudo-graphical display, making use of shape symbols and character positioning escape sequences to create crude planar maps of the terrain (dungeon) environment. The maps enhanced the description of the room provided by the text.
ASCII pseudo-graphical virtual worlds provided the player with a view of the world improved over the simple text prompt and description of MUDs. An example of an ASCII game, Islands of Kesmai (IOK), can be seen below (Figure 24). Developed in 1982 and released in 1984, the game provided the player with a third person, overhead view of the world. Walls were denoted by [], fire by ** and players by letters (Bartle, 1990). IOK was CompuServe's (a US ISP) best selling game, with players paying up to $12.50 per hour to play (based upon connection time, not games played); it usually had between 10 and 60 players online simultaneously (Bartle, 1990). Other ASCII games around this time were MegaWars I & MegaWars III (1983), NetHack (1987 (O'Donnell, 2003)), Sniper! and The Spy (Bartle, 1990).
[[image:RPG_Islands_Of_Kesmai_024.jpg]]
Figure 24. Islands of Kesmai ASCII Text Role Playing Game (1982-84)
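The rendering idea behind these worlds is simple enough to illustrate directly. The following toy sketch is not IOK's actual code; the room layout and function names are invented, with only the symbol conventions (walls as [], fire as **, players as letters) taken from Bartle's description above:

```python
# A toy sketch of ASCII pseudo-graphical rendering: an overhead planar map
# built from two-character cells, with walls as "[]", fire as "**" and
# players shown as letters.

WALL, FIRE, FLOOR = "[]", "**", "  "

def render(width, height, walls, fires, players):
    """Return an overhead ASCII view of a rectangular room."""
    rows = []
    for y in range(height):
        cells = []
        for x in range(width):
            if (x, y) in players:
                cells.append(players[(x, y)] + " ")   # player letter
            elif (x, y) in walls:
                cells.append(WALL)
            elif (x, y) in fires:
                cells.append(FIRE)
            else:
                cells.append(FLOOR)
        rows.append("".join(cells))
    return "\n".join(rows)

# A 4x3 room with solid walls, one fire, and player "A" inside.
walls = {(x, y) for x in range(4) for y in range(3)
         if x in (0, 3) or y in (0, 2)}
print(render(4, 3, walls, fires={(2, 1)}, players={(1, 1): "A"}))
```

The printed map occupies only a few dozen characters, which is why this style of display was practical over 300-1200 baud dial-up connections.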
By the mid to late 1980s, home computing and online networking service providers opened the gates to huge expansion for online virtual worlds. People paid for networking services by the hour, which gave these providers a huge incentive to get their subscribers hooked on virtual worlds. There was big money to be made, with 70% of one provider's (Genie's) revenue in the early 1990s coming from games. By 1993 a study showed that 10% of the traffic on the NSFNET backbone (a precursor to the internet connecting mainly government and university sites) belonged to MUDs (Bartle, 2003).
===2.8.5 Graphical Virtual Worlds===
The text based MUDs evolved into two different streams: the 3D first person shooters such as DOOM and Quake, which adopted the room-at-a-time view of the world for 3D rendering, and the 2D graphical online virtual worlds that appeared in the early 1990s. Early examples include NeverWinter Nights (1991-1997), Shadow of Yserbius (1992-1996) and Kingdom of Drakkar (1992-Current) (Figure 25).
[[image:Graphical_2D_Virtual_Worlds_025.jpg]]
Figure 25. Graphical 2D Virtual Worlds
Unlike Habitat and Worldsaway (discussed under Social Networking Virtual Worlds), which predated these games, appearing in the mid-1980s, the graphically enhanced text based games were fantasy role playing games -- basically MUDs with graphics. Although 2D, some of these games were displayed isometrically, at an angle, giving the player the illusion of a three-dimensional view; for this reason these games are sometimes referred to as 2½D worlds (Bartle, 2003). These games used more sophisticated graphics (than the pseudo-graphical solutions) to improve the sense of presence experienced by the players, while retaining the text based narrative.
By the mid 1990s, with nearly 10 million internet hosts (Figure 26) (Slater III, 2002; Zakon, 2006) and price wars between providers, the internet opened its doors to millions, bringing hordes of inexpert computer users wanting to play games (Bartle, 2003). Game design had improved along with the graphical elements of virtual worlds; graphics rendering capabilities on standard PCs and the emergence of common graphics file standards made the development of virtual worlds possible, practical and more economical.
[[image:InternetParticipatingHosts_Count_1990_to_1998_026.jpg]]
Figure 26. The Internet No. of Participating Hosts Oct. ‘90 - Apr. ‘98
====2.8.5.1 MMORPGs====
By the mid 1990s we saw the first 3D online virtual world, Meridian 59 (1996-2000 & 2002-Current), although technically it used a pseudo-3D graphics engine (Axon, 2008; Bartle, 2003), providing a first person view in which the player could view all angles of the environment (Figure 27). We saw the beginnings of a new era of virtual worlds, with a massive 25,000 people signing up for the beta release (Axon, 2008). Unfortunately the game met with limited commercial success (Bartle, 2003; Friedl, 2002) and was shut down in 2000, but it was resurrected in 2002, with the updated version online today at meridian59.neardeathstudios.com.
[[image:Meridian_59_First_3D_Online_Virtual_World_027.jpg ]]
Figure 27. Meridian 59 First 3D Online Virtual World (1996)
The turning point for online virtual worlds was Ultima Online (1997-Current). Ultima had already met with success with the Ultima computer game series. With its online launch it had 50,000 subscribers within 3 months, and it was the first online virtual world to crack the 100,000 threshold within 12 months of release (which it did in under 6 months) (Bartle, 2003; Woodcock, 2008). This added a new dimension to the term multiplayer, which has now come to be known as the Massively Multiplayer Online Role Playing Game, or MMORPG. Subscriptions peaked at 250,000 in 2003, with 75,000 reported in December 2007 (Woodcock, 2008).
Ultima Online, a 2½D graphical virtual world, has remained visually much the same (Figure 28), although the client that runs the game (the same concept as a web browser) had a makeover in 2007 with Kingdom Reborn (right). The game has received regular expansions to the world, providing new challenges and adventures for its players. Back in 2001 the client was upgraded to 3D (Wikipedia Ultima, 2008), but Electronic Arts recently announced that it will be de-supporting the 3D client, continuing only to support the 2D client going forward (Electronic Arts, 2007).
[[image:Ultima_Online_028.jpg]]
Figure 28. Ultima Online (1997-Current)
Other MMORPGs that started around the mid to late 1990s, and which can still be played online today, are Furcadia (1996, the longest running), The Realm (1996, the second longest running, 15 days behind Furcadia), Lineage (1998), EverQuest (1999) and Asheron's Call (1999).
In the more recent MMORPGs of today, not much has changed in game design from the original RPGs, but technically they have improved and provide much better graphics for the player (Figure 29). They have also increased substantially in popularity, with the largest subscription-based MMORPG, World of Warcraft, recently climbing to over 11 million players (Blizzard Entertainment Inc, 2008). These players do not, however, all play in one virtual world; they are separated into different realms -- the same game but with different people. This contrasts quite markedly with social virtual worlds like Second Life, where all the users share one virtual world. In the next section we discuss social online virtual worlds which, although an MMORPG can exist within the world itself (as mentioned earlier), follow a model of a virtual world very different from that of the dedicated MMORPGs.
[[image:MMOZRG_Eve_and_WOW_029.jpg]]
Figure 29. MMORPG's Eve & World of Warcraft
====2.8.5.2 Social Virtual Worlds====
The first attempt at a commercial large scale multi-user game was made by George Lucas's Lucasfilm Games. Habitat, developed by Chip Morningstar and Randall Farmer, began development in 1985 (McLellan, 2004; Ray, 2008; Slator et al., 2007). Habitat was built to support thousands of simultaneous users, ran on the Commodore 64 home computer, and was distributed via the network service provider Quantum Link (later known as AOL). Inspired by the science fiction novel 'True Names' (Vinge, 1981), the world contained a fully-fledged economy where citizens of the world could own a virtual business, build a house, fall in love, get married and even establish their own self-governing laws (Morningstar & Farmer, 1990). Habitat, a 2D graphical world, looked similar to a cartoon (Figure 30, left), with the avatar (digital self) taking a third person view of the world. The storyline was based upon life rather than the fictional storylines of the MUDs, which placed greater emphasis on the social aspect of the world. Lucasfilm's Habitat was first released as a pilot in 1986, then later in 1988 as Club Caribe in North America, which reportedly sustained a population of 15,000 participants by 1990 (Morningstar & Farmer, 1990). In 1990 it was released in Japan as Fujitsu Habitat, and after extensive modifications Habitat was released again in 1995 as WorldsAway (Figure 30, right) (Damer, 2007) and again as Dreamscape in 2008.
[[image:VW_Habitat_and_Worldsaway_030.jpg]]
Figure 30. Habitat (86) First Graphical Virtual World Precursor to Worldsaway (95)
Habitat introduced some key concepts in virtual worlds:
*The term ‘Avatar’ into the general virtual world community;
*The idea of focussing on social networking as a key form of game play;
*An economy where people could trade both in world currency and artefacts; and
*Most important, the concept of living in a virtual world and leading an alternate life that was not dictated by the rules of a game (as in the dedicated MMORPG environments).
More recent social networking virtual worlds include Active Worlds (1995, 1997-current)[17], Second Life (2003-current) and There (2003-current) (Figure 31) – all of which have achieved a significant volume of educational interest as platforms for delivery of learning. The generalised nature of the social networking sites means that they tend to be more diverse in the range of facilities provided and the purposes to which they can be applied than the role playing game systems. They have generally provided participants with some form of content creation tools including the importing and/or exporting of non-virtual world artefacts. In the next section we discuss further the aspect of education in virtual worlds.
[[image:VW_SecondLife_and_There_031.jpg]]
Figure 31. Social Virtual Worlds: Second Life & There
===2.8.6 Simulation and Learning Systems===
====2.8.6.1 PLATO====
PLATO (Programmed Logic for Automated Teaching Operations) was a system designed for computer-based education at the University of Illinois, started in the early 1960s. Originally developed as a classroom course system (Figure 32), improvements in mainframe technology by 1972 saw up to a thousand simultaneous online users, making it the first public online community, featuring electronic course delivery, online chat, bulletin boards, 512 x 512 resolution monitors and 1200 baud connection speeds (Unger, 1979; Woolley, 1994). With over 15,000 hours of instructional development, PLATO was possibly the largest ever investment in educational technology (Garson, 2000).
[[image:PLATO_Lab_Image032.jpg]]
Figure 32. University of Illinois PLATO Lab & Terminal (1961-2006)
By the mid 1970s games made their way onto the university mainframes with great success. Between 1978 and May 1985 about 20% of time spent on PLATO was game usage (Woolley, 1994). Games appeared such as Spacewar! (the 1969 game discussed earlier), Empire (1973, a multi-user space shooter based upon Star Trek), DND (1974, a MUD[18] based upon the game Dungeons and Dragons), Mines of Moria (1974, a MUD with 248 mazes based upon Tolkien's Lord of the Rings), SPASIM (1974, a 32-player multi-user FPS space ship game)[19], Airfight (1974-75, a 3D flight simulator precursor to Microsoft's Flight Simulator), Oubliette (1977, a first person 3D MUD) and Avatar (1977-79, a first person 3D MUD) (Bartle, 2003; Lowood, 2008; Pellett; Wikipedia, 2008b; Woolley, 1994). See below (Figure 33) for some examples of MUDs held on PLATO. Many of the games on PLATO were recreated commercially as arcade or personal computer games (Goldberg, 2002; Mulligan, 2002; Woolley, 1994).
[[image:PLATO_Popular_MUD_Games_Developed_For_PLATO_033.jpg]]
Figure 33. PLATO: Some Popular MUD Games Developed for Use on PLATO (1974-1979)
By 1985, after going commercial, PLATO had established a system of over 100 campuses worldwide (Garson, 2000). Known as the 'ultimate electronic information and communication utility', PLATO offered over 200,000 hours of courseware (Figure 34), local dial-up at 300 or 1200 baud, and access to both social and educational contacts, advances that made it an attractive system for the academic community at large (Small & Small, 1984). Over time, with improvements in technology and the cost of maintaining old technology, the final PLATO system was turned off in 2006 (Wikipedia, 2008b).
[[image:PLATO_Online_Course_Count_1984_034.jpg]]
Figure 34. PLATO Over 200,000 online courses by 1984
A web site has been established for the preservation of PLATO at cyber1.org (VCampus Corporation, 2008), which holds many of PLATO's games and courseware for public download.
====2.8.6.2 SIMNET====
Military virtual world simulators started with a project called SIMNET (SIMulator NETworking). SIMNET was a DARPA project that enabled the first large scale real-time networked battlefield simulator. Development and implementation occurred on several levels between 1983 and 1990 (Cosby, 1999; Miller & Thorpe, 1995).
Prior to SIMNET, military simulators consisted of immersive virtual reality training devices such as cockpit simulators. Cockpit simulators offered a replicated environment of the 'real thing': for example, an aeroplane cabin would be built in its entirety, with motion and sensory feedback, using pre-programmed software to produce repetitive simulations providing an individual with mastery skills such as low-to-ground dog-fighting or missile avoidance training (Miller & Thorpe, 1995). SIMNET provided a cheaper alternative for certain types of training than the cockpit simulators, and further offered 'collective skills', which Miller and Thorpe (1995) define as cohesive team operations skills, distinguished from the individual mastery skills taught in cockpit simulators.
SIMNET, a multiuser virtual world (Figure 35), consisted of real battlegrounds with manned vehicles (tanks and helicopters), command posts, semi-automated forces in which a single operator could control many vehicles in the simulation, and the ability to record simulations from any viewpoint (known as the flying carpet) so that they could be replayed, statistically analysed and reported upon. At the conclusion of the program there were 250 simulators operating in nine locations (4 of which were in Europe), providing real-time battle engagements directly under the control of the participants (Lenoir, 2003; Miller & Thorpe, 1995).
[[image:SIMNET_Battlefield_Simulator_035.jpg]]
Figure 35. SIMNET: Battlefield Simulator at Fort Knox USA (1983-1990)
SIMNET had a substantial impact on military training after being recognised as a key success factor in winning the 3-day 'Battle of 73 Easting' in the Gulf War (1991), which led to several projects based upon the SIMNET technology (Figure 36) (Foley & Gifford, 2002), with the USA government commissioning $2,549 million in 1997 for modelling and simulation projects (Lenoir, 2003).
[[image:US_Military_Networked_Simlator_Projects_1938_To_2001_036.jpg]]
Figure 36. Timeline of US Military Network Modelling and Simulator Projects (1983-2001)
In 1997 a project named Synthetic Theater of War (SToW) commenced: a program to construct an environment combining various simulators into one large-scale distributed battle simulator capable of involving thousands of participants (Budge, Strini, Dehncke, & Hunt, 1998; Tiernan, 1996). This project has since become Joint Semi-Automated Forces (JSAF) (Hardy et al., 2001), which now enables more than 100,000 simultaneous simulations at a time (US Joint Forces Command, 2008). The Australian military has also adopted the JSAF platform to build its own Course Of Action Simulation (COA-Sim) for joint military operations training, exercises and planning (Carless, 2006; Gabrisch & Burgess, 2005).
====2.8.6.3 Military Use of Commercial Games Engines & The America’s Army====
In 1996, General Krulak of the US Marines tasked the Marine Combat Development Command to explore and approve the use of commercial games engines for military training purposes. One outcome of this effort was the collaboratively developed Marine Doom, based on id Software's shareware Doom engine and Doom Level Editor. The simulation could be configured for special missions (such as hostage rescue) immediately prior to engagement and used to rehearse the planned mission (Lenoir, 2003).
In July of 2002 the US Military released a milestone in multi-user training game simulators in the form of America's Army: Operations (Lenoir, 2003; Zyda, 2005). Based on Epic Games' 'Unreal' games engine, the game created a virtual world that reproduced aspects of a career in the US Army, from 'boot-camp' commencement and weapons and tactical training through to various operations scenarios. Although originally developed and released as a recruitment tool, the game was also claimed to have been utilised to improve training outcomes by army instructors at Fort Benning (Zyda, 2005).
Now, with 26 subsequent releases (as of 2008) and available for the PC, cell phone and Xbox, the game has more than 9 million registered users exploring entry-level to advanced training and operations in small units (Figure 37). Beyond a focus on realism that extends to accurate tree placement in the courses at the simulated training camps, the game adds a further dimension of presence for participants through the active involvement of current and former real-world soldiers as players in the game (designated with a star icon in player profiles), interacting with non-military participants (Department of the Army, 2008).
[[image:Americas_Army_037.jpg]]
Figure 37. America's Army (2002)
From a training perspective, anecdotal evidence from army trainers is that sessions in training scenarios such as the firing range or obstacle courses improve subsequent results in the real-life versions of these activities (Zyda, 2005). The US Army, possibly one of the largest investors in virtual world game technology, recently announced plans to spend $50 million USD over the next 5 years to create 70 gaming systems in 53 locations around the world for combat training (Robson, 2008).
==2.9 Virtual Worlds for Education==
===2.9.1 Architecture Considerations===
====2.9.1.1 Introduction====
To appreciate properly the discussion of the literature examining educational directions in virtual worlds, the researcher provides a brief overview of the key architectural differences to assist the reader. This material is based on the researcher's examination of a variety of game environments and virtual worlds, and on discussions with experienced and knowledgeable users of these environments, rather than being sourced from the work of other authors. As such, the discussion is interpretive rather than authoritative.
Some of these environments have existed for only a few years, and have not yet enjoyed a comparative analysis undertaken by the academic community. As such, this discussion might not normally reside in the literature review, but it is felt that the placement of this discussion in this sub-section will assist the reader in better appreciating the issues explored in the literature discussion throughout the remainder of the section.
====2.9.1.2 Considerations of Operational Design====
While all of today’s major virtual worlds include capabilities for user interaction, sharing of the environment, persistence, avatars, business rules, streamed audio and text there are substantial differences in the technologies used to deliver the virtual experience. While some of these differences may create marginal differences in the world experience of the casual user, from the perspective of the educator and content creator the differences are substantial.
The major offerings can be viewed under the following groups (note: in each category the researcher has selected only a few example worlds, in most cases other options also exist):
#Proprietary closed engine (e.g. World of Warcraft, Everquest)
#Client resident closed content and world model with open engine (e.g. Shareware Doom)
#Streamed (or semi streamed) closed content and world model with closed engine (Entropia Universe)
#Open client resident content and world model with closed engine (Flight Simulator X, America’s Army, Unreal games, Quake, Doom)
#Open streamed content and world model (Hipi Hi, TruePlay, Active Worlds)
#Open streamed content and world model with out-of-world interfaces (Second Life V1, VastPark)
#Open streamed content and world model with out-of-world interfaces and open client (Second Life V1.2)
#Open streamed content and world model with out-of-world interfaces, open client and open server (DeepSim)
'''Architectural Components and Implications in Education'''
Below are some of the architectural components and their implications for the structure of a virtual education environment.
{| border="1"
|'''Architectural Components'''
|'''Implications in Education'''
|-
|Closed Proprietary System
|A closed proprietary system cannot generally be altered. These systems are generally not appropriate for education purposes unless the existing virtual world itself is built for the purpose of the training (such as a purpose built simulator). Closed systems can still be used in education for group interaction and discussion, though not for lectures or anything requiring more than text or audio (assuming the system supports group audio communications).
|-
|Closed or Open Environment
|Whether the content and world model are closed or open determines whether the textures, objects and artefacts of the world can be modified or created by users. This ability is essential if the world is to be utilised in education as anything more than a 3D discussion forum.
|-
|World Content
|Whether the content and world model is client resident or streamed goes to the complexity of distributing course content, and the dynamics available in delivery. If the content is streamed, it can be changed in real time, but will usually require a high speed internet connection. Systems supporting streamed content generally also include the tools for developing some, if not all, of the streamable content. If the content is client resident, client connection speeds can generally be slower, but the content must be centrally published, distributed to client systems and installed locally prior to use. It cannot be changed in real time, and content production will not generally be supported directly in the virtual world tool set, often requiring advanced 3D modelling skills in dedicated 3D modelling environments.
|-
|World Interfaces
|The existence of out-of-world interfaces goes to whether content from other sources, such as internet web pages, audio or video, etc, can be streamed into the world and integrated with the world content and model. Systems capable of providing this capability with streamable open content offer the greatest potential for inexpensive production of course material and distribution of that material to students.
|-
|Client / Server Engine
|Whether the client or server engine is open or closed goes to whether the hosting software itself can be modified. Generally this should not be necessary for education if the capabilities of the engines driving the world are otherwise sufficient. Where the content / world are otherwise closed, but the engines are open, the existing content and world could be replaced by interfacing the games engine to a new world with new content.
|}
====2.9.1.3 Options for Content Modification====
The ability to modify the content of a virtual world is essential if the educator is to deliver course content in-world beyond that of an interactive discussion or monologue.
There are essentially three ways content can be modified by end-users in current virtual world environments (as opposed to systems providers or publishers) depending on the operational design of the environment:
#'''Level Editor''' (e.g. Doom, Half Life, America's Army, Flight Simulator). Applicable to client resident worlds (i.e. systems where the world is stored on each client computer and distributed as a separately published download). A level editor is a content editing tool that allows an entire simulation to be created, including the world model, textures, characters, behaviours, etc. They usually support importation of textures, animations, etc into the 'level', and then distribution of the entire level to a central server for redistribution to clients.
#'''Client Content Editing Tool''' with import/export (eg: Second Life, Vast Park, etc). For environments where building and content creation is part of the ‘game play’ the client will have a content editor provided. These environments provide a simplified model for constructing shapes and objects (e.g. Second Life’s prims) and some means for importing complex objects such as organic shapes, textures, animations, sound, etc.
#'''Out-of-world interface''' (e.g. Second Life, Active Worlds). Potentially available in both client resident and server resident (streamed) worlds. An out-of-world interface allows some aspect of the user experience while in-world to be drawn directly and live from an off-world location, such as a web page, an internet-resident database or a streaming SoundCast server, etc.
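A minimal sketch may help clarify the out-of-world interface idea: an in-world surface holds a reference to a live external resource rather than a pre-packaged texture, so material can be changed outside the world without republishing. All names below are hypothetical, and the injected fetcher stands in for a real HTTP or streaming client.

```python
# Hedged sketch of an out-of-world interface: an in-world surface whose
# content is drawn live from an external source. The fetcher is injected so
# the sketch runs without a network; a real client would stream the media.

class MediaSurface:
    """An in-world surface whose content comes from an external source."""
    def __init__(self, url, fetch):
        self.url = url
        self.fetch = fetch          # injected fetcher (network in real use)
        self.content = None

    def refresh(self):
        """Re-pull the external resource; updated slides appear in-world."""
        self.content = self.fetch(self.url)
        return self.content

# Simulated external web server standing in for a live resource.
slides = {"http://example.edu/lecture.html": "<h1>Week 3 slides</h1>"}
board = MediaSurface("http://example.edu/lecture.html", slides.get)
board.refresh()
```

The educational appeal noted above follows directly: updating the external resource updates every student's view on the next refresh, with no in-world republication step.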
====2.9.1.4 Implications of differential content capabilities====
Virtual worlds are composed of components (objects) and functions that are managed by the virtual world (or game) engine and together comprise the capabilities of the world. Not all worlds have the same object management capabilities built into their engines. For the purposes of this discussion, the range of capabilities will be considered to be:
#'''Terrain''' – the land form or map of the virtual space. Essentially all virtual worlds offer some form of terrain map (although the terrain map may not be ground, but rather simply a 3D space).
#'''Avatars''' – Discussed extensively already, the avatar is the user’s projection into the virtual world and may or may not be customisable.
#'''Structural objects''' – Including buildings, furniture, ornaments, statues, models, etc. These are the virtual world equivalent of objects in the real world. They may or may not be animatable and scriptable. If they are scriptable they may be able to become autonomous agents, depending on the capabilities of the scripting engine.
#'''Textures''' – The visual covering of any object, terrain, or even avatars. The ability to display and upload/import textures is (generally) essential to the ability to display lecture materials like slides, etc (but note the existence of streams as a potential alternative).
#'''Animations''' – An avatar or non-player character appears to walk, sit, stand, change facial expressions, etc because of the animation it is playing at the time. Without animations an object might move from one point to another, but it will not change its apparent state. The ability to modify animations is advantageous for creating a sense of realism, but is not generally essential for delivering a lecture or every type of simulation. All virtual worlds examined offered some range of built-in animations within their worlds. Some allow animations to be imported or modified, or strung together to create more complex animations.
#'''Scripts''' – Scripting is the capability to programme the objects and behaviours in the world. In worlds modified by level editors, a programming language is generally provided as part of the level editing environment and 'compiled into' the level before it is published and distributed. In user-modifiable worlds where scripting is supported (like Second Life), the scripting editor and compiler are provided as part of the client application and scripts are dynamically modifiable. In some architectures the scripts are stored in the objects and distributed with them (and therefore if an object is moved between worlds/simulators the script and behaviours move with it), whereas in others the scripts are centrally stored and controlled for the world/level and not available outside of the world, level or simulator (as appropriate). Scripts govern the behaviour (movement, animations, actions, sounds, appearance, world responses, inter-object communication, etc) of objects. The capability and simplicity of the scripting engine's language design is critical to the options available to educators in building a simulation.
#'''Streams''' – Streams include any media that is streamable, such as audio, video, web-page content, etc. The availability of streams is an extension of (or possibly an alternative to) the ability to import textures. From an educational standpoint it represents the ability to deliver video or sound presentations, or to draw lecture materials directly from the internet. Depending on the world engine, stream content may be dynamically published (drawn down to the client as required, as in Second Life) or packaged into the client-resident world (as in America's Army).
#'''Non-player Characters''' (also called Bots, AIs or MOBs – mobile objects) – These are essentially characters that look like avatars but are completely controlled and managed by the engine. They interact with players/avatars in a semi-intelligent manner. Their availability and capability vary significantly across worlds. In HalfLife and America's Army, the AI capability is available within the engine and has considerable 'intelligence', in some cases including the ability to learn and modify behaviour. In other worlds (such as Second Life) they are not directly supported by the virtual world engine at all. The existence of non-player characters can directly affect the type of learning simulation an educator can build, as they can provide user feedback and a feeling of presence within the environment (if implemented to provide a realistic experience for the user).
#'''Text Communication''' - Text chat (including instant messages, group communication chat, etc) is the standard communication strategy in all worlds. It is always instant and dynamic (in that it does not have to be pre-packaged into the world). It is a functional capability rather than an object, and may or may not be logged or copied depending on the client capabilities.
#'''Multi-way Voice Communication''' - Most virtual worlds do not support voice directly, although this function has been offered increasingly over the last twelve months. Multi-way voice communication enables a group of players to converse as if they were in a conference call, without the necessity of typing all communication as text. It differs from streams in that every client can be a sound source to every other client, whereas streams are a one-way communication from a point source to many destination receivers. Clearly the availability of voice communication affects both the type of student and the form of discussion that can be undertaken in a learning situation.
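The 'scripts stored in the objects' model mentioned under Scripts above can be illustrated with a short sketch. This is not any real engine's API; the class, event and handler names are invented for illustration, but the pattern, event handlers that travel with the object, is the one described above.

```python
# An illustrative sketch of the "scripts stored in the objects" model: each
# object carries its own event handlers, so behaviour moves with the object
# if it is transferred between worlds or simulators.

class ScriptedObject:
    def __init__(self, name):
        self.name = name
        self.handlers = {}          # event name -> handler function

    def on(self, event, handler):
        """Attach a script handler to an event (e.g. 'touch')."""
        self.handlers[event] = handler

    def dispatch(self, event, *args):
        """The world engine calls this when an event reaches the object."""
        handler = self.handlers.get(event)
        return handler(self, *args) if handler else None

# A door whose behaviour is defined entirely by its attached script.
door = ScriptedObject("lecture-hall door")
door.open = False

def touch_handler(obj, avatar):
    obj.open = not obj.open
    return f"{avatar} toggled {obj.name}: open={obj.open}"

door.on("touch", touch_handler)
print(door.dispatch("touch", "student-avatar"))
```

The alternative architecture described above, centrally stored scripts, would instead keep the handler table in the world engine keyed by object, which is why object behaviours cannot follow objects out of such worlds.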
In selecting the platform for delivering an educational experience, the extent to which the educator requires any or all of these capabilities within a virtual world will probably influence the decision. Some of these capabilities have only recently become generally available, and others are still in only rudimentary forms. In the literature review that follows, the approaches and content adopted, and the outcomes achieved have necessarily been constrained by capabilities of the technology options available at the time and the architectural constraints of the virtual world used.
===2.9.2 Education Applications in Virtual Worlds===
====2.9.2.1 Introduction====
During the 1970s, 1980s and early 1990s, perhaps the most significant multi-user online environment for education was the PLATO system. From the mid 1990s onwards, the influence of this system waned as it was progressively superseded in user interface capabilities by the emerging 3D online games, social networking systems and custom built virtual worlds for specific subject matter applications.
Today the use of public online virtual worlds is gaining popularity with educators, with a recent special purpose committee of educators (The New Media Consortium & EDUCAUSE, 2007) identifying that virtual worlds will have a significant impact on the future of teaching, learning and creative expression within higher education. In the next section we will discuss some of the research findings on virtual worlds being used for educational purposes.
====2.9.2.2 Education Uses in Virtual Worlds====
Early work in education using text based MUDs showed that they offered support for constructive knowledge-building communities that offered affordances of coordinated presence with evidence for interactive learning and collaboration across time and space (Dickey, 2003).
The period from the late 1990s until today has been typified by educators experimenting with the potential for mass market games engines (and more recently virtual worlds) to be re-tasked as education environments (Annetta et al., 2006; Beedle & Wright, 2007; Gikas & Van Eck, 2004). In some cases, such as America's Army, the 'game' environment was built with the specific goal of recruitment and training in mind (Zyda, 2005), or, as with Microsoft's Flight Simulator, a game evolved over time with the assistance of subject matter experts into an accurate simulation tool for the game's audience (Lenoir, 2003). In other cases a games engine (the operating system of a game) has been adapted to create a purpose built learning tool, such as educators and students at MIT utilising the Neverwinter Nights tools to create a historical game based on a battle in the Revolutionary War. MIT's Games-to-Teach Project also produced playable prototypes of four games, including Biohazard, developed jointly by MIT and the Entertainment Technology Center at Carnegie Mellon University, which trained emergency workers to deal with a cataclysmic attack (King, 2003).
The early 3D virtual worlds, with their simplistic graphics bearing little resemblance to the real world, provided students with advantages over traditional learning methods whilst fostering collaboration in multiuser virtual worlds. An extensive study of virtual reality technology in education was performed by Youngblut (1998), who examined 35 different research studies in education from 1993-1998 that varied in technology use, subject discipline and age group. Below is an example of VARI House and Virtual Physics, both of which were custom built (Figure 38); VARI House was a single user virtual world and Virtual Physics a multiuser virtual world. Although the studies were mainly research based (as opposed to applications in course work), they showed for both single and multi user environments that virtual world technology in many cases surpassed traditional learning methods in areas such as subject matter understanding, memory retention, student collaboration and constructive learning methods. Some obvious disadvantages were technology constraints, cost, development effort and usability (Youngblut, 1998), which could for the most part be attributed to the infancy of the technology, the formative years of computer based learning and the fact that general use of computers by students had yet to permeate society as a whole.
[[image:Education_In_Virtual_Worlds_in_1950_to_60_038.jpg]]
Figure 38. Education in Virtual World Mid 1990s
====2.9.2.3 Online Education Uses in Virtual Worlds====
As identified in the architecture considerations section, virtual worlds that are to be used in educational settings must enable content modification if learning is to consist of anything more advanced than an interactive conversation. For the purposes of this research, the researcher is choosing to focus on virtual worlds that support the dynamic delivery or streaming of content (and the building tools are provided as part of the environment), rather than those worlds where a separate level editor is required and a client resident world model must be installed on the client computer prior to use. The literature surveyed in this sub-section will therefore focus on the work done in two such environments – Active Worlds and Second Life.
=====2.9.2.3.1 Active Worlds=====
Online virtual worlds gave educators access to environments without the cost and complexity of developing their own custom software. One of the first online virtual worlds that made research and development in education feasible (given its architectural qualities) was Active Worlds (1995, 1997). Officially known as Active Worlds Universe because it consists of many worlds, Active Worlds provided educators with the opportunity to rent or buy their own world, allowing restricted access to invited guests, building tools and content management capabilities. Below is a screenshot of Active Worlds (Figure 39). As can be seen, the current client consists of four sections: left – communication and navigation options; right – integrated web browser; bottom – chat window; and middle – 3D environment. This type of client is generally called a “browser” by the environment developers.
[[image:Active_Worlds_Universe_039.jpg]]
Figure 39. Early Online Social Virtual World: Active Worlds Universe
'''Active Worlds Research'''
During the late 1990s to the early 2000s several educational institutions set up a presence in Active Worlds for various projects, from research to actively using Active Worlds as an online learning environment (see Smith, 1999 for a list of Virtual Learning projects, most of which were in Active Worlds). The early research into online virtual world based education using Active Worlds showed promise.
Dickey (1999, 2003, 2005) undertook research into the viability of Active Worlds for geographically distant learners in both formal (a university business computing skills course) and informal courses (an Active Worlds building course). These research studies showed that the 3D virtual world offered advantages in fostering constructive learning, student and teacher collaboration, visual representation of course context and course content, and student engagement and participation. Some of the disadvantages identified were essentially environment specific and included a lack of support for collaborative activities (such as a whiteboard or collaborative interactive writing spaces), the chat tool's word limit on a single posting, a single shared chat channel providing no separation of teacher/student discussion and no ability for turn taking, and kinetics (animation) constraints such as the inability to raise a hand to attract the attention of the instructor.[20]
Dickey also identified a number of opportunities specifically enabled by a 3D environment. While some of the previously identified advantages (such as collaboration and student management and participation) might be duplicated in other forms of online education tools, the 3D modelling of the course itself (the visual representation of course context and course content) was an advantage specific to the 3D environment.
Course context modelling as provided in Dickey’s research (1999) was a 3D representation that illustrated the structure of the course by the use of individual buildings and plazas (Figure 40). Each building was a topic in the subject, which provided resources to aid learning and a meeting place where students could collaborate for group projects around this topic.
[[image:Visual_Course_Structure_in_Virtual_Buildings_040.jpg]]
Figure 40. Visual Representation of Course Structure by the use of Individual Buildings
Course content modelling as provided in Dickey’s research (1999) was a 3D representation that the student had to build in order to understand the concept of the subject material (Figure 41).
[[image:Visual_Represnetation_of_Course_Content_041.jpg]]
Figure 41. Visual Representation of Course Content
These alternative methods provide a good example of the power and adaptability of a 3D modelling environment applied to education. The course context provided the student with a method by which they could visualise the learning objectives and progression of the course. The student had to visit each building within a specific time frame and complete the contained content. The 3D modelling of course content gave the learner multiple viewpoints of the actual subject material, providing interactive learning that was believed to enhance the student's understanding of the subject topic.
Clark & Maher (2006) looked at the role of place and identity in a 3D virtual learning environment using Active Worlds by analysing chat logs and the physical locality of avatars within group discussions. They found that a sense of place can be achieved in a 3D virtual learning environment, where identity and presence play a role in establishing the context of the learning place. The students formed a strong bond with their avatars and indicated that they felt a sense of presence, as measured by a series of subjective scales, within the virtual learning environment. Similarly, Dickey (2003) found that the 3D virtual desktop world provided qualities of presence similar to those of an immersive virtual reality world.
=====2.9.2.3.2 Second Life=====
Second Life (started 2003) consists of two worlds: the Second Life Teen Grid and the Second Life Adult Grid. The Teen Grid provides access to 13-17 year olds and educational instructors. The functionality of the Teen Grid is the same as the Adult Grid, with the exception that all content has a PG rating. The Adult Grid is where you find all the universities and colleges for students over 17 years of age. Other educational content in Second Life includes an extensive list of museums, galleries, simulations, business product development, role-playing spaces, employee and public business training courses, etc. Similar to Active Worlds, educators are able to rent or purchase land, allow open or closed access to the public, and build and develop on their land.
One major difference between Second Life and Active Worlds is that the former has an in world economy with in-built functional support enabling the trading of virtual products and services using 'Linden dollars', backed by content copyright and duplication controls and augmented by a provider managed exchange where real dollars can be exchanged for Linden dollars (and vice versa). This fundamental difference provides an incentive for content developers and service providers to actively support and expand the world with content, and therefore enables access to a large body of pre-constructed content, or to an entire world-wide industry of content developers at extremely reasonable rates (compared to real world 3D developers providing similar content outside of Second Life) (Joseph, 2007). The building and scripting tools are easier to master than traditional 3D rendering tools, are delivered free as part of every user's world browser, and are sufficiently powerful that just about anything imaginable can be constructed (Schmidt et al., 2007).
Second Life's standard interface, as seen below (Figure 42), offers extensive functionality over that of Active Worlds. Some of the more common features seen in the figure are built-in world, content and people search facilities (left), a mini map (top right), an inventory library (bottom right), a local chat channel (with standard ranges of 15, 30 or 60 meters from the text source), group chat channels (world-wide range, for up to 25 groups per avatar), customisable streaming media players (for sound, video and web page content), an in world or external web (html) browser (linking both in world and outside world content), and private or public multi-player voice facilities.
[[image:Second_Life_042.jpg]]
Figure 42. Online Virtual Social World Second Life (Circa 2008)
Another difference from Active Worlds is avatar control: Second Life avatars can use a roaming camera (whereas Active Worlds only provides first and third person views). The roaming camera enables the user to use their mouse to control movement around the world without the need to move their avatar. This functionality, once mastered, offers users a powerful tool that provides an easy and fast way to navigate around objects (the camera can even pass through objects such as walls).
Due to these and other technological advances over Active Worlds, Second Life has developed a large education community over the last couple of years. For instance, SIMTeach (June, 2008), the Second Life Education Wiki, identifies over 200 educational institutions in Second Life, of which 138 listed are universities, colleges and schools. The Second Life Education (SLED) list server has over 5,000 world-wide members. The New Media Consortium (NMC, a group that hosts education islands) has over 100 universities on their land, and the Second Life Teen Grid has over 90 educational projects (Linden & Linden, 2008). Figure 44 provides some examples of the training and learning activities in Second Life, representing a mixture of educational institutions, corporations and government agencies.
The content of Second Life is entirely user created. The availability of content developers and potential students already experienced in using the environment depends on the take-up and expected future growth of the environment. Figure 43 shows the user base and economic statistics for the first quarter of 2008, as provided by Second Life's proprietor Linden Lab (2008a). As of November 2008 Second Life had 16,318,063 users (1,344,215 of whom had logged on in the previous 60 days). A break-down of Second Life's demographics as at November 2008 can be seen in Appendix I: Second Life Demographics.
[[image:Second_Life_User_and_Econ_Stats_Q12008_043.jpg]]
Figure 43. Second Life User & Economic Statistics for Q1 2008
[[image:Second_Life_Training_and_Learning_044.jpg]]
Figure 44. Second Life Training and Learning
'''Second Life Research'''
Educators are using Second Life for both formal and informal purposes. Some educational institutions have set up entire virtual campuses modelling their real world campus, while others are building purpose built virtual education structures. The relative youth of Second Life means that there is considerable variation in the maturity of educational efforts across the virtual world, and limited peer reviewed studies yet published. Many educators are still experimenting, while others, having the active support of their institutions, are actively using the environment for partial or entire subject delivery. Here we will look at some of the research undertaken in Second Life at the time of writing, most of which has been published since 2006; given the technological advances that have occurred in Second Life from 2007 onwards, we will concentrate on the later research.
Martinez, Martinez, & Warkentin (2007) researched the delivery of a lecture to geographically distributed third year university students in Second Life. The lecture was delivered in a conventional lecture room setting using a traditional chalk and talk style with lecture slides and the chat channel for instruction; no voice was used.[21] According to the lecturer's experience, with text only delivery the time to deliver the content was double that of a face to face lecture. This was also confirmed by the students in their survey. In the survey some students admitted they felt distracted by the novelty of the environment and were overly concerned with ancillary aspects such as their avatar's appearance. Others admitted to being distracted by activities external to the environment occurring simultaneously on their PCs, such as multi-tasking with other programs (e.g. MSN messaging) whilst at the lecture. Others experienced technical difficulties and could not get back into the lecture after they were accidentally logged out. In spite of these short-comings, when asked to rate the lecture experience on a scale of 1-10 the average student response was 8.5. In this study it was noted that some of these distractions and difficulties could be put down to first time user experience. The lecturer also felt that this lecture could easily have been pre-recorded and delivered online, and that active learning techniques could have improved its delivery in Second Life (Arreguin, 2007).
Joseph (2007) notes that a consequence of using Second Life (or virtual worlds in general) for teaching is that sessions generally take longer than traditional methods, but believes that this is not an issue per se, as time to complete the task should come second to the effectiveness of the experience. Joseph also believes (from experience) that the avatar projected on the screen and the sense of presence experienced by the participants are more effective for learning than a live video feed.
Kofi, Svihla, Gawel, and Bransford (2007) researched the potential of virtual worlds to provide efficiency and innovation for adaptive learning. In their study, students were presented with a maze to navigate that simulated the problem solving skills required in a real life learning scenario. Kofi et al. found that Second Life was able to provide enough functionality and support for learners to apply new concepts to solve the presented problems, as long as they were provided with key indicators of possible outcomes. They also found that the use of 3D learning environments required the same amount of instruction as equivalent real world learning, and that simply building a model did not, of itself, provide sufficient information for the learner to learn in this instance; learners also needed to be continuously prompted and guided in order to reach the end learning objective.
In another example, Second Life was used to support the learning objectives of 13 third year college students, aged between 19 and 26, on a Digital Entertainment and Society course where the students were geographically distributed around the world (Gonzalez, 2007). Both lectures and assignment work were conducted within Second Life. The lectures consisted of a video presentation and an in world field excursion. Assignment work required some in-world building and an exercise using Linden dollars, with a student presentation on completion. No students had used the environment before, but an acclimation exercise was sufficient to provide them with the skills required to undertake course work in Second Life. At the end of the course students were given a survey, with results presented below (Table 1).
{|
|Elements that Second Life Added:
|-
|
|Agree
|Disagree
|-
|Enjoyment
|100%
|0%
|-
|Technical difficulties
|100%
|0%
|-
|Interaction with tutor
|62%
|38%
|-
|Interaction with classmates
|62%
|38%
|}
Table 1. Survey Results for Digital Entertainment and Society Second Life Subject
The technical difficulties result was explained largely by network latency experienced by the students. Each student used their own computer, with an average connection speed of 512 kbps – not especially fast, nor ideal for use in the Second Life environment. No mention was made in the study as to whether the student computers met Linden Lab's system requirements (2008c). As Second Life is a streaming virtual world, where content is downloaded on demand from Linden Lab's servers in the USA to the local computer, connection speed can be an important factor in performance. Other major technical factors include the graphics card and the amount of onboard RAM. The Second Life browser does offer many settings for optimising performance on low-end machines, but if the minimum system requirements are not met the user's experience of the virtual world will be reduced significantly, with dropouts, lag and poor graphics.
==2.10 Learning & Instructional Design Theory==
===2.10.1 Introduction===
Learning in any world (real or virtual) requires well thought out instructional design. Learning is a process of the mind regardless of whether your body is present in the virtual world or real world. Instructional components for learning regardless of medium include (DONCIO et al., 2008):
*Clear, concise, and appropriately structured content
*Activities that draw relationships between concepts, challenge learners' thinking and understanding, and reinforce information
*Evaluative measures that determine if knowledge assimilation and retention have occurred
In this research the focus was on the use of new technology in education as opposed to education applied to new technology; therefore this section only provides an overview of applicable theory required to assist in the instructional design, delivery and assessment of the subject material presented to the research participants in this study. Gagne’s Nine Events of Instruction and Bloom’s Taxonomy of the Cognitive Domain were selected to assist in this task.
===2.10.2 Behaviourism and Cognitivism===
There are two main traditional schools of thought in learning theory. These are Behaviourism and Cognitivism (DONCIO et al., 2008; Lewis, 2001).
*Behaviourist (Objectivist): views the mind as a 'black box'; no account is taken of personal or past experience. The mind starts with a clean slate where a stimulus produces a response. Only when a change in behaviour is observed has learning occurred. Learning is discrete, measurable and quantifiable.
*Cognitivist (Constructivist): views the mind as a continuously evolving organism. Knowledge is constructed from past material and personal experience. Learning is unique to the individual, relating new information to previously learnt knowledge.
The University of Washington, Seattle (2008) compares the two approaches and provides a discussion of each in terms of philosophy (Table 2), learning outcomes, instructor role, student role, activities and assessment. The philosophies of these approaches are opposed and therefore produce different methods of instruction (Lewis, 2001; Nash, 2007).
Behaviourism was the first to be defined in learning theory while cognitivism developed later as a response to perceived limitations of behaviourism in understanding and adapting to new learning concepts (Lewis, 2001; Mergel, 1998).
While some constructivists argue the merits of constructivism as a distinct theory, viewing knowledge as something constructed by a learner through the process of learning, other writers view constructivist ideas as an evolution of the fundamental cognitivist school. This position is illustrated in Table 2, where the behaviourist and constructivist-enhanced-cognitivist philosophies are compared using a consistent comparative organisation of views (see Dabbagh, 2006; Mergel, 1998).
Constructivists draw a distinction between cognitive constructivism and social constructivism, in which the former emphasises exploration and discovery on the part of each learner while the latter emphasises the collaborative efforts of groups of learners as sources of learning; for our purposes, however, it is sufficient to distinguish the behaviourist and cognitivist approaches. Over the years many practical teaching methods have evolved with concepts that encompass both approaches.
[[image:TABLE_Instructional_Design_Behaviorism_Cognitivism_045.jpg]]
Table 2. Instructional Design: Comparative Summary Behaviorism and Cognitivism
(University of Washington, 2008)
===2.10.3 Gagne’s Nine Events of Instruction===
Gagne's theory of instruction can be divided into three areas (Corry, 1996): taxonomy of learning outcomes, conditions of learning and levels of instruction. There are considerable similarities between Gagne's 'taxonomy of learning outcomes' and Bloom's 'taxonomy of the cognitive domain', so a discussion of these will be provided in the next section of this thesis.
Gagne breaks down the 'conditions of learning' into internal and external learning conditions. Internal learning conditions concern the previously learned capabilities of the learner, and external learning conditions are the instruction or stimuli that will be presented to the learner. While Gagne's theory takes an essentially cognitivist approach, it recognises both behaviourist and cognitivist influences on instructional learning. For our purposes, it is the 'levels of instruction' outlined by Gagne that are of particular interest, and these we will explore in this section.
Gagne (1985) presents a systematic approach to instructional design termed the ‘nine levels of instruction’ as presented below in Figure 45 (Clarke, 2000)[22]. These nine levels have been specifically designed for the teaching of intellectual skills.
[[image:GAGNE_Nine_Steps_To_Instruction_046.gif]]
Figure 45. Robert Gagne's Nine Steps of Instruction (Clarke, 2000)
The nine instructional events with their corresponding cognitive processes can be described as follows (Clarke, 2000; Kearsley, 2008):
#'''Gaining Attention (Reception)''': Grab the attention of the participant by presenting a teaser in order to get the participant interested and motivate them to learn more about the topic that will be presented. This could be done using methods such as a movie, phrase, storytelling or a demonstration.
#'''Informing Learners of the Objective (Expectancy)''': Provide the participant with the objectives in order to assist them in organising their thoughts ready to receive the new information that will be presented.
#'''Stimulating Recall of Prior Learning (Retrieval)''': Provide the participant with any background that may assist them in building upon the new knowledge they are about to receive. This helps to place a framework in their mind based upon previous knowledge.
#'''Presenting the Stimulus (Selective Perception)''': This is where the new learning begins. Information should be chunked and organised meaningfully in order to avoid memory overload and assist in the learning of new knowledge. Chunk the information into a sequence of learning events, breaking it down into constituent parts with a structure and purpose that span different areas of comprehension. The revised Bloom's taxonomy (discussed in the next section) can be used to assist in forming the presented information.
#'''Providing Learning Guidance (Semantic Encoding)''': Assist the participant to obtain a deeper level of understanding of the new knowledge so that the information can be encoded into long term memory. During instruction, try to provide examples, non-examples, analogies, graphical representations, etc. to assist in the semantic encoding process.
#'''Eliciting Performance (Responding)''': Let the learner do something with the new knowledge, or test their new knowledge, to confirm they have a correct understanding of the information.
#'''Providing Feedback (Reinforcement)''': Analyse the learner's understanding of the subject matter presented and provide feedback to correct any misunderstood knowledge. Provide immediate feedback and reinforcement of the new knowledge (e.g. questions and answers).
#'''Assessing Performance (Retrieval)''': Test that the new knowledge is understood and the learning objectives have been met. This could be in the form of a test or a demonstration by the learner to assess if they have mastered the information.
#'''Enhancing Retention and Transfer (Generalisation)''': Generalise the information so that knowledge transfer can occur; inform the learner of similar problems or situations so that the acquired knowledge can be put into a new context.
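The nine events above form an ordered procedure, so a lesson plan can be audited against them as a checklist. The following is a minimal sketch (the data structures, function name and the in-world activities are illustrative assumptions, not part of Gagne's framework) showing how an instructional designer might check which events a draft lesson plan still leaves uncovered:

```python
# Gagne's nine events of instruction, in order, paired with the
# cognitive process each addresses (per Clarke, 2000; Kearsley, 2008).
GAGNE_EVENTS = [
    ("Gaining Attention", "Reception"),
    ("Informing Learners of the Objective", "Expectancy"),
    ("Stimulating Recall of Prior Learning", "Retrieval"),
    ("Presenting the Stimulus", "Selective Perception"),
    ("Providing Learning Guidance", "Semantic Encoding"),
    ("Eliciting Performance", "Responding"),
    ("Providing Feedback", "Reinforcement"),
    ("Assessing Performance", "Retrieval"),
    ("Enhancing Retention and Transfer", "Generalisation"),
]

def missing_events(lesson_plan):
    """Return, in order, the events not yet covered by a lesson plan.

    lesson_plan maps event names to a planned activity (or None).
    """
    return [name for name, _ in GAGNE_EVENTS if not lesson_plan.get(name)]

# Hypothetical draft plan for an in-world (virtual world) lecture:
plan = {name: None for name, _ in GAGNE_EVENTS}
plan["Gaining Attention"] = "Short machinima teaser shown in-world"
plan["Informing Learners of the Objective"] = "Objectives slide at lecture area"

print(missing_events(plan))  # lists the seven events still unplanned
```

Treating the events as ordered data rather than prose makes the sequencing requirement explicit: a plan that skips an earlier event is flagged before delivery.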
===2.10.4 Bloom’s Taxonomy===
The Taxonomy of Educational Objectives, also known as Bloom's Taxonomy, is widely used[23] to assist in the preparation of learning objectives and the assessment of learning outcomes. The learning outcomes of a student are the results of their learning experience in a course and should be a direct consequence of the course objectives (Monash University, 2008). Hence the application of Bloom's taxonomy of educational objectives in forming course objectives provides a measure by which to assess students' learning outcomes.
The original work of Bloom’s Taxonomy was developed by an American committee of educational psychologists chaired by Benjamin Bloom that presented over a period of time three domains: cognitive (knowledge) (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956), affective (attitudes) (Krathwohl, Bloom, & Masia, 1964), and psychomotor (motor skills) (Dave, 1967, 1970; Harrow, 1972; Simpson, 1972). In forming educational course objectives Bloom’s cognitive domain is applied to assess the knowledge and intellectual component of a curriculum.
After nearly 47 years, Bloom's cognitive domain was revised (Anderson et al., 2001; Krathwohl, 2002) by a committee of eight, two of whom (Krathwohl, on the committee, and Anderson, as editor) had worked on the original published work. The revision was made as a result of many years of application and research, and has since been accepted by many educators as a replacement for Bloom's original work. The changes that were made are as follows (Figure 46) (Anderson Research Group, n.d.; Krathwohl, 2002):
*The names of six major categories were changed from noun to verb forms.
*Comprehension and synthesis were retitled to understand and create respectively, in order to better reflect the nature of the thinking defined in each category.
*Create was moved to the highest, that is, most complex, category.
*The revised Taxonomy is not a cumulative hierarchy.
*A taxon of remember was devised to replace that of Knowledge, and
*A two dimensional Cognitive Taxonomy Table was formed by sub dividing the original Knowledge taxon.
[[image:BLOOM_Changes_in_Cognitive_Domain_047.jpg]]
Figure 46. Changes in Bloom’s Cognitive Domain
====2.10.4.1 Revised Bloom’s Taxonomy of the Cognitive Domain====
A substantive difference is in the handling of “Knowledge”. The revised Bloom's cognitive domain, as shown in Table 3, was extended to include the dimension of knowledge, so that it now consists of a two dimensional table with the Knowledge Dimension and the Cognitive Process Dimension. This table provides the instructor with a tool with which to classify learning objectives, where learning objectives are specific statements of the discrete learning outcomes or intended results to be achieved by the end of instruction. The instructor defines the learning objectives and classifies them into the appropriate cell of the 2D matrix of cognitive and knowledge dimensions, which then assists in instructional design and assessment, and provides a tool for balancing the learning objectives across methods of instructional design.
[[image:BLOOM_TABLE_Revised_Taxonomy_048.jpg]]
Table 3. Revised Bloom’s Taxonomy Table
(Anderson et al., 2001, p. 28)
'''The Cognitive Process Dimension'''
The Cognitive Process Dimension provides the column values of Table 3 above. This dimension describes the level of learning and comprehension required to complete a task, with each level differing in complexity on a scale from 1-6. The cognitive dimensions are defined as 1. Remembering, 2. Understanding, 3. Applying, 4. Analysing, 5. Evaluating and 6. Creating, each of which contains further sub-processes, with 19 specific cognitive processes in total. Table 4 provides an overview of each cognitive process with its defining verbs. Verbs are used to classify an objective. For example, an objective 'to recall the states of Australia' would be classified under remembering. Recall, in this instance, is the verb that classifies the learning objective into level '1. Remember' of the cognitive dimension.
[[image:Cognitive_Process_Dimension_Processes_049.jpg]]
Table 4. The Six Categories of The Cognitive Process Dimension And Related Cognitive Processes (Anderson et al., 2001, p. 31)
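The verb-based classification described above can be sketched in code. This is a minimal, hypothetical illustration (not from Anderson et al.): the verb lists below are a small illustrative subset of the 19 cognitive processes, not the full set.

```python
# Hypothetical sketch: classify a learning objective by its verb,
# following the six categories of the Cognitive Process Dimension.
# Verb lists are illustrative only, not the full taxonomy.

COGNITIVE_LEVELS = {
    "remember": {"recall", "recognise", "list"},
    "understand": {"summarise", "classify", "explain"},
    "apply": {"execute", "implement", "use"},
    "analyse": {"differentiate", "organise", "attribute"},
    "evaluate": {"check", "critique", "judge"},
    "create": {"generate", "plan", "produce"},
}

def classify_objective(objective):
    """Return the cognitive level whose defining verb appears in the objective."""
    words = objective.lower().split()
    for level, verbs in COGNITIVE_LEVELS.items():
        if any(word in verbs for word in words):
            return level
    return None

print(classify_objective("to recall the 7 states of Australia"))  # remember
```

In practice an instructor applies this mapping by eye; the point of the sketch is only that the verb, not the subject matter, determines the cognitive level.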
Bloom’s original cognitive taxonomy was based solely on the values contained in the cognitive dimension (with the exception of the differences previously discussed). Bloom believed that the cognitive process was a cumulative learning process: to achieve a learning outcome such as ‘analyse’, the student would first need to have mastered, in the terms of the old Bloom’s taxonomy of the cognitive domain, knowledge/remember, comprehension/understand and application/apply. The revised taxonomy of the cognitive domain does not assume this cumulative hierarchy. The early Bloom’s cognitive domain took a behaviourist approach to instruction, whereas the revised Bloom’s cognitive domain holds that learning can take place at any level without mastering previous levels. This is a fundamental shift in the philosophical grounding of Bloom’s taxonomy of the cognitive domain, away from the behaviourist approach to learning.
'''The Knowledge Dimension'''
The Knowledge Dimension is an additional dimension added to the taxonomy by subdividing (and modifying) Bloom’s original Knowledge category; it appears as the row values in Table 3 above. The knowledge dimension defines how knowledge is constructed, which can be Factual, Conceptual, Procedural or Metacognitive. Table 5 provides an overview of the knowledge dimension types and their meanings.
The knowledge dimension separates the noun (or subject matter) from the stated learning objective. Continuing the objective discussed above, ‘to recall the 7 states of Australia’ involves factual knowledge, where ‘the 7 states of Australia’ forms the noun construct. This noun is factual because the learner either knows the states or they don’t; knowing them is the basic element required to solve the problem.
[[image:Major_Types_and_Subtypes_Knowledge_Dimension_050.jpg]]
Table 5. The Major Types And Subtypes Of Knowledge Dimension (Anderson et al., 2001, p. 31)
The knowledge dimension has been added because it provides further insight into the type of knowledge a student is required to master. The original work made a similar assumption by placing knowledge as the first level of a cumulative hierarchy, but defining knowledge as a separate dimension gives the instructor a clearer understanding. For example, for the objective ‘to recall the 7 states of Australia’ the student needs to Remember Factual Knowledge.
Like the cognitive dimension, the knowledge dimension is not a cumulative hierarchy: learning can start anywhere within the knowledge dimension.
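The placement of objectives into cells of the 2D taxonomy table can also be sketched briefly. This is a hypothetical illustration only: the helper names and the second sample objective are invented for the example, and the cell labels follow the dimensions described above.

```python
# Hypothetical sketch: tally classified objectives into the 2D taxonomy
# table (knowledge type x cognitive level) so an instructor can see
# which cells are covered and balance objectives across them.
from collections import defaultdict

taxonomy_table = defaultdict(list)  # (knowledge, cognitive) -> objectives

def place(objective, cognitive_level, knowledge_type):
    """Record an already-classified objective in its taxonomy cell."""
    taxonomy_table[(knowledge_type, cognitive_level)].append(objective)

place("to recall the 7 states of Australia", "remember", "factual")
place("to use a checklist to assess a risk", "apply", "procedural")  # invented example

# Report populated cells and their objective counts.
for cell, objectives in sorted(taxonomy_table.items()):
    print(cell, len(objectives))
```

A real instructional-design tool would also enumerate the empty cells, since those reveal where objectives or assessments are missing.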
'''Using the Revised Bloom’s Cognitive Domain to Assist in Instructional Design'''
To assist in formulating instructional design, Anderson et al. (2001) provide in their book, for the cognitive dimension, sample objectives, corresponding assessments and assessment formats (chapter 5), and, for the knowledge dimension, specific details, elements, generalisations, structures, models and so on (chapter 4). This assists in formulating specific tasks and in defining the level of knowledge required of the student. It also helps ensure that objectives, and the testing of those objectives, span the required range of cognitive and/or knowledge categories, and that the student is assessed fairly in areas directly related to the objectives.
====2.10.4.2 Bloom’s Taxonomy of the Cognitive Domain Applied to a Digital Environment====
'''Bloom’s Digital Taxonomy of the Cognitive Domain'''
Churches (2008) has extended the (revised) Bloom’s cognitive domain for digital learning by taking the cognitive process dimension and adding verbs for emerging technology. As can be seen below (Figure 47), the words highlighted in blue are the digital emerging-technology verbs, categorised using the (revised) Bloom’s cognitive levels as the basis for interpreting their complexity. For example, bookmarking (a remembering process) is simpler than programming (a creating process).
[[image:BLOOM_Revised_As_Digital_Taxonomy_051.jpg]]
Figure 47. Bloom's Digital Taxonomy
Churches further added to his classification system a rubric (scoring criteria) for these technologies, similar to the sub-classification system used in Bloom’s cognitive domain. For example, Table 6 displays the rubric for bookmarking, broken down from simplest to most complex.
[[image:BLOOM_Bookmarking_Rubric_For_Digital_Taxonomy_052.jpg]]
Table 6. Bookmarking Rubric for Bloom’s Digital Taxonomy
'''Bloom’s Taxonomy of the Cognitive Domain applied to Games'''
Wang & Tzeng (2007) proposed using the (revised) Bloom’s taxonomy of the cognitive domain as a method for understanding the application of knowledge in digital games. They believed that players learn in various ways within computer games, and recognised how little work (if any) had been done in analysing such e-learning platforms in a structured taxonomic manner, or in structuring the implementation and understanding of the cognitive processes involved. They proposed Bloom’s taxonomy of the cognitive domain as a method for assessing cognitive processes in a computer game.
[[image:BLOOM_Taxonomy_For_Games_053.jpg]]
Figure 48. Bloom’s Taxonomy for Games
The research used a game called Food Force, a problem-solving, mission-oriented game. Figure 48 summarises the conclusions of their research. As can be seen in Figure 48, players exhibited both personal and social feedback across Bloom’s cognitive levels. Players experienced cognitive processes as individuals across all categories of the Bloom’s cognitive model, and displayed social interaction at the higher-level categories of Analyse, Evaluate and Create.
==2.11 Summary==
The acceptance of the latest crop of virtual worlds (World of Warcraft, Second Life, Entropia Universe, There, Eve, America’s Army and others) by the internet-using public as an integral part of their lifestyle is possibly the most significant paradigm shift to occur in the last 10 years. Statistics on user volumes and retention rates show consumption in the tens of millions of users, spread evenly across ages from youth to middle age, with an approximately even gender balance (at least in the social worlds) (KZERO Research, 2007; Woodcock, 2008; Yee, 2006). The growth rates of these worlds collectively have been, and are projected by industry analysts to continue, rising dramatically for the foreseeable future.
With the current convergence of disparate technologies represented by these systems, the general public now has affordable, single-platform, multimedia collaborative environments with sufficient realism to create virtual immersive spaces, where presence is achieved at a level sufficient for users to lead virtual existences and establish social networks that rival their real-world ones.
The linking of these spaces with affordable (often free) tools that let the public create new 3D spaces and content has, over the last eight years, produced a world-wide content developer base with substantial skills, and a highly competitive market in which purchasers can often buy those skills at very low rates.
Given the combined market pressures of minimising education delivery costs, improving education outcomes and reaching as wide a market as possible, it is understandable that educators have shown a sustained interest over many years in the possibilities of virtual environments for education delivery. So, with the advent over the last few years of the latest generation of creativity-focused social worlds such as Second Life, it is not surprising that uptake by universities and educators (numbering in the hundreds of institutions) has been as substantial as it is.
A brief retrospective of the work on simulators, virtual reality and 3D games shows that the potential of these environments extends beyond virtual ‘chalk-and-talk’ to education delivery strategies, even for campus-based students, that cannot economically be delivered by reality-bound means.
With traditional real world learning environments there is an extensive body of tested knowledge that can provide clear guidance as to workable frameworks for the design of course work. The extent to which and how these methods can or should be applied to the virtual world learning space remains an open question.
</div>
[[Category:Featured Article]]
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
d875646505c796db4cb09ab9a1b432278c0a2c28
BPC RiskManager V6.2 Network Architecture
0
4
7
6
2018-10-28T08:48:00Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
[[Image:BPCRM NetDiag.png]]
BPC RiskManager is an N-Tier application. The primary layers are:
* Database Server layer
* Application Server layer
* Client layer
The core application set does not require a web server but certain optional capabilities do.
You will require a web server if you will be:
* Using the browser plugin client component
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
While the browser plugin client component can be served by any brand of web server, you will require IIS 5+ if you plan to be:
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
Both of these capabilities use ISAPI libraries running on an IIS server. If you will be using the HTTPS communication protocol, you will also need an SSL certificate installed on the web server.
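The decision rules above can be summarised in a short sketch. This is purely an illustration of the stated requirements, not BPC installer logic; the function name and flags are invented for the example.

```python
# Hypothetical sketch (not BPC's installer): derive web-server needs
# from the optional capabilities described above.

def web_server_requirements(browser_plugin=False, http_transport=False,
                            survey_manager=False, use_https=False):
    # Any of the three optional capabilities requires a web server.
    needs_web_server = browser_plugin or http_transport or survey_manager
    # HTTP/HTTPS transport and SurveyManager use ISAPI, so require IIS 5+.
    needs_iis = http_transport or survey_manager
    # HTTPS additionally requires an SSL certificate on that server.
    needs_ssl_cert = use_https and needs_web_server
    return {"web_server": needs_web_server,
            "iis": needs_iis,
            "ssl_certificate": needs_ssl_cert}

# The plugin alone can be served by any brand of web server.
print(web_server_requirements(browser_plugin=True))
```

In particular, a deployment using only the browser plugin needs some web server but not IIS, while any deployment using the HTTP/HTTPS transport or SurveyManager needs IIS, plus a certificate if HTTPS is enabled.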
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
96accad095e3f378d468445d6bc5231ced78bf76
BPC RiskManager Frequently Asked Questions
0
5
9
8
2018-10-28T08:48:00Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
{|width="100%"
|width="60%" VALIGN="Top" |
# [[How do I get a copy of BPC RiskManager V6.2.5?]]
# [[Would it be possible to get a copy of the BPC RiskManager V6 installation guide?]]
# [[Is there a feature listing for the BPC RiskManager windows client and the browser client?]] We are looking at the possibility of using a mixed client environment based on user specific needs and where they are.
# [[When are multiple BPC RIskManager server licenses required?]] We are looking to have RM implemented across a group of companies. They will all be using the same instance with same fields and definitions as the subject matter is the same. Can we use a single server license or will we require multiple server licenses?
# [[Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?]]
# [[Does your license include the cost of MS SQL Server ?]]
# [[I just purchased BPC RiskManager. Will you be sending the install disks, and when?]]
# [[What will need to be arranged prior to the installing BPC RiskManager?]]
# [[Does the RiskManager client application work with FireFox browsers?]]
# [[In what programming language is BPC RiskManager written?]]
# [[Does the RiskManager plug-in itself have a certificate like a java applet does?]]
# [[For support, what type of support is available (i.e.: email, phone, onsite, etc...)?]]
# [[What is the best way to get support?]]
# [[How do I arrange installation support and what is the timeline?]]
# [[What support packages are available and at what cost?]]
# [[Is there a cost associated with telephone support (i.e.: cost per call or issue)?]]
# [[How do I get custom features added, or request new features for BPC RiskManager?]]
# [[Is there a User Group Forum?]]
# [[What type of documentation, technical and user is available for BPC RiskManager?]]
# [[How does one decide the optimum BPC RiskManager configuration?]]
# [[Is BPC RiskManager a Client-Server application?]]
# [[What is the difference between the browser plugin and the windows executable RiskManager client?]]
# [[Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?]]
# [[Database support: Which database choice will give us the best level of support?]]
# [[Security: What is the most secure architecture for BPC RiskManager?]]
# [[What is the best client version - the browser or non browser Risk Manager client?]]
# [[What admin account rights are required to setup a browser plug-in?]]
# [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7|How do I configure IE for the RiskManager browser plugin?]]
# [[BPC RiskManager Server - After installing in production or adding an application server|We just ported our enterprise system to a new server and I can't login. What do I do now?]]
# [[Steps For Migrating RiskManager V6.x from Test To Production|How do I port BPC RiskManager from test (or dev) to production?]]
# [[BPC RiskManager V6 on 64 bit Windows|How do I install BPC RiskManager onto a computer running a 64bit Windows OS?]]
| VALIGN="Top"|
<noinclude>
{|align="right" width="100%" cellpadding="10px"
|- style="background-color:#FFEBCD; " width="100%"
|'''A Frequently Asked Question is...'''
|-
|<div class="didyouknow2" STYLE="height: 600px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=3000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;"></div>
|}
</noinclude>
|}
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:Bishop Phillips Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinksCategoryHead|CT=RiskManager FAQ|CN=The frequently asked Questions Category}}
</noinclude>
25cfdeccbd4a292afa2715e0cff010008b205d54
File:MS SMTP CFG5.png
6
6
10
2018-10-28T11:49:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMLoginFail.jpg
6
7
11
2018-10-28T11:50:59Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Jsmwbutton Exampl1.gif
6
8
12
2018-10-28T11:51:54Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP1.png
6
9
13
2018-10-28T11:52:36Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:AdmlistEmpl1.gif
6
10
14
2018-10-28T11:53:10Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate9.png
6
11
15
2018-10-28T11:53:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr3.png
6
12
16
2018-10-28T11:54:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD7.png
6
13
17
2018-10-28T11:54:33Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP text Exampl1.gif
6
14
18
2018-10-28T11:55:02Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate7.png
6
15
19
2018-10-28T11:55:42Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMWC WSSetup4 XP.png
6
16
20
2018-10-28T11:56:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RM Config ActiveDirectory Step1.gif
6
17
21
2018-10-28T11:56:43Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP7.png
6
18
22
2018-10-28T11:57:19Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP textarea Exampl1.gif
6
19
23
2018-10-28T11:57:39Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Wagner Gesamtkunstwerk 012.jpg
6
20
24
2018-10-28T11:58:01Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BLOOM Taxonomy For Games 053.jpg
6
21
25
2018-10-28T11:58:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RM App Server SysTrayIcon.png
6
22
26
2018-10-28T11:58:50Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate14.png
6
23
27
2018-10-28T11:59:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD1.png
6
24
28
2018-10-28T11:59:37Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SIMNET Battlefield Simulator 035.jpg
6
25
29
2018-10-28T12:00:45Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BusinessComponentObjectives.png
6
26
30
2018-10-28T12:01:13Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS Install2 XP.png
6
27
31
2018-10-28T12:01:45Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate4.png
6
28
32
2018-10-28T12:02:06Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan NewDB2.png
6
29
33
2018-10-28T12:02:26Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Filermdr Exampl1.gif
6
30
34
2018-10-28T12:11:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP button Exampl1.gif
6
31
35
2018-10-28T12:12:58Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPCTitle75PERC.jpg
6
32
36
2018-10-28T12:13:43Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IA Inherent Control Risk Matrix.png
6
33
37
2018-10-28T12:14:16Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan RestoreDV2.png
6
34
38
2018-10-28T12:14:39Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS On W2003 1.png
6
35
39
2018-10-28T12:15:28Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS Install1 XP.png
6
36
40
2018-10-28T12:15:57Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS SM2.png
6
37
41
2018-10-28T12:16:17Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SMWS SetUp3.png
6
38
42
2018-10-28T12:16:51Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Cavazza Virtual Universes Landscape 004.jpg
6
39
43
2018-10-28T13:40:03Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC Login6.png
6
40
44
2018-10-28T13:41:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD3.png
6
41
45
2018-10-28T13:41:40Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BLOOM Revised As Digital Taxonomy 051.jpg
6
42
46
2018-10-28T13:42:09Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP11.png
6
43
47
2018-10-28T13:43:13Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPRAnalyticStructure.png
6
44
48
2018-10-28T13:43:41Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPC RiskManager V6261 Main Screen.jpg
6
45
49
2018-10-28T13:44:12Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RecursiveShapes.png
6
46
50
2018-10-28T13:44:42Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate3.png
6
47
51
2018-10-28T13:45:06Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Visual Represnetation of Course Content 041.jpg
6
48
52
2018-10-28T13:45:31Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Immersion Interaction Imagination 007.jpg
6
49
53
2018-10-28T13:46:25Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP3.png
6
50
54
2018-10-28T13:47:17Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Alice via Caroll and Hattori 013.jpg
6
51
55
2018-10-28T13:47:44Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS SMTP Install.png
6
52
56
2018-10-28T13:48:53Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP dtextareaWYSIWYG Exampl1.gif
6
53
57
2018-10-28T13:49:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Userlogin Exampl1.gif
6
54
58
2018-10-28T13:49:58Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Basic MUD Tree Structure 023.jpg
6
55
59
2018-10-28T13:50:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMWC WSSetup2.png
6
56
60
2018-10-28T13:50:59Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:InternetParticipatingHosts Count 1990 to 1998 026.jpg
6
57
61
2018-10-28T13:51:48Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MMOZRG Eve and WOW 029.jpg
6
58
62
2018-10-28T13:52:21Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC IESetup2.png
6
59
63
2018-10-28T13:52:54Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP list Exampl1.gif
6
60
64
2018-10-28T13:53:17Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MMOVW Growth Rate 017.jpg
6
61
65
2018-10-28T13:53:45Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:PLATO Popular MUD Games Developed For PLATO 033.jpg
6
62
66
2018-10-28T13:54:10Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan RestoreDV6.png
6
63
67
2018-10-28T13:54:30Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt NewLogin2.png
6
64
68
2018-10-28T13:55:04Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP admlistDEF Exampl1.gif
6
65
69
2018-10-28T13:55:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MS SMTP CFG2.png
6
66
70
2018-10-28T13:55:59Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:GAGNE Nine Steps To Instruction 046.gif
6
67
71
2018-10-28T13:56:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:VW Films Tron LawnmowerMan Matrix 016.jpg
6
68
72
2018-10-28T13:57:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RPG Islands Of Kesmai 024.jpg
6
69
73
2018-10-28T13:57:30Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:DATEOP text Exampl1.gif
6
70
74
2018-10-28T13:58:04Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:ALSBASteps.png
6
71
75
2018-10-28T13:58:42Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP4.png
6
72
76
2018-10-28T13:59:24Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr9.png
6
73
77
2018-10-28T13:59:56Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr14.png
6
74
78
2018-10-28T14:00:26Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan AssignUserRights2.png
6
75
79
2018-10-28T14:00:59Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP6.png
6
76
80
2018-10-28T14:01:41Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr13.png
6
77
81
2018-10-28T14:02:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Map.png
6
78
82
2018-10-28T14:03:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BishopsStakeholderCommunityModel.png
6
79
83
2018-10-28T14:04:37Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC IESetup1.png
6
80
84
2018-10-28T14:05:11Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:2 BPCSurveyManagerWCLoginPage.jpg
6
81
85
2018-10-28T14:05:42Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt NewDB2.png
6
82
86
2018-10-28T14:06:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IA9DocMethods.png
6
83
87
2018-10-28T14:06:57Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMWC WSSetup1.png
6
84
88
2018-10-28T14:07:49Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TABLE Instructional Design Behaviorism Cognitivism 045.jpg
6
85
89
2018-10-28T14:08:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IAAnotatedDataFlow.png
6
86
90
2018-10-28T14:09:11Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:DATEOP datepick Exampl1.gif
6
87
91
2018-10-28T14:09:35Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate13.png
6
88
92
2018-10-28T14:09:56Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserSettings UserPref1.png
6
89
93
2018-10-28T14:10:23Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Americas Army 037.jpg
6
90
94
2018-10-28T14:10:56Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Bartle The First MUD 022.jpg
6
91
95
2018-10-28T14:11:35Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr11.png
6
92
96
2018-10-28T14:12:02Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Major Types and Subtypes Knowledge Dimension 050.jpg
6
93
97
2018-10-28T14:12:28Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate8.png
6
94
98
2018-10-28T14:12:53Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr1.png
6
95
99
2018-10-28T14:13:18Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Active Worlds Universe 039.jpg
6
96
100
2018-10-28T14:13:48Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Steuer Variables Influencing Telepresence 008.jpg
6
97
101
2018-10-28T14:14:14Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:DATEOP date Exampl1.gif
6
98
102
2018-10-28T14:15:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP5.png
6
99
103
2018-10-28T14:15:24Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Zweig95 M&A ImpactOnShareValue.jpg
6
100
104
2018-10-28T14:16:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SMWS SetUp2.png
6
101
105
2018-10-28T14:17:19Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP property Exampl1.gif
6
102
106
2018-10-28T14:18:21Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:VW SecondLife and There 031.jpg
6
103
107
2018-10-28T14:18:44Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Cave Art BC 011.jpg
6
104
108
2018-10-28T14:22:40Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt RestoreDB5.png
6
105
109
2018-10-28T14:23:18Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IASegOfDutiesChart.png
6
106
110
2018-10-28T14:23:51Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP12.png
6
107
111
2018-10-28T14:24:20Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt RestoreDB1.png
6
108
112
2018-10-28T14:24:57Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt RestoreDB3.png
6
109
113
2018-10-28T14:25:30Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr5.png
6
110
114
2018-10-28T14:26:10Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPCRiskManagerExpressV5.jpg
6
111
115
2018-10-28T14:26:39Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Graphical 2D Virtual Worlds 025.jpg
6
112
116
2018-10-28T14:27:08Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Kish Virtual Geography 003.jpg
6
113
117
2018-10-28T14:27:40Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP admlistDROPLIST Exampl1.gif
6
114
118
2018-10-28T14:28:21Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Jlabel Exampl1.gif
6
115
119
2018-10-28T14:29:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:ALSBA.png
6
116
120
2018-10-28T14:29:51Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS On W2003 2.png
6
117
121
2018-10-28T14:30:18Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt AssignRMURights1.png
6
118
122
2018-10-28T14:30:44Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS SM5.png
6
119
123
2018-10-28T14:31:23Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Education In Virtual Worlds in 1950 to 60 038.jpg
6
120
124
2018-10-28T14:32:44Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IAControlVAssertions.png
6
121
125
2018-10-28T14:34:01Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Vishnu Hindu Avatar 001.jpg
6
122
126
2018-10-28T14:34:23Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPCRM NetDiag.png
6
123
127
2018-10-28T14:35:03Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPRChartKeyV4.gif
6
124
128
2018-10-28T14:39:51Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IAInherent Control Detection Risk Filter.png
6
125
129
2018-10-28T14:42:39Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMWC WSSetup5 XP.png
6
126
130
2018-10-28T14:43:10Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP8.png
6
127
131
2018-10-28T14:43:57Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt NewDB1.png
6
128
132
2018-10-28T14:44:24Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Usermaker Exampl1.gif
6
129
133
2018-10-28T14:50:22Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Reality Virtuality Continuum 005.jpg
6
130
134
2018-10-28T14:51:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SMWS SetUp4.png
6
131
135
2018-10-28T14:51:24Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate1.png
6
132
136
2018-10-28T15:39:19Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP radioHB Exampl1.gif
6
133
137
2018-10-28T15:39:45Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SMWS SetUp6.png
6
134
138
2018-10-28T15:40:12Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPCF8aBlackLR.jpg
6
135
139
2018-10-28T15:40:38Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:PLATO Lab Image032.jpg
6
136
140
2018-10-28T15:41:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserSettings ChngPWD1.png
6
137
141
2018-10-28T15:41:22Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BLOOM Bookmarking Rubric For Digital Taxonomy 052.jpg
6
138
142
2018-10-28T15:41:53Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPC4KeyChartObj.png
6
139
143
2018-10-28T15:42:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS OnVista1.png
6
140
144
2018-10-28T15:42:59Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD5.png
6
141
145
2018-10-28T15:43:31Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS SM4.png
6
142
146
2018-10-28T15:44:11Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr15.png
6
143
147
2018-10-28T15:44:33Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMWC WSSetup6 XP.png
6
144
148
2018-10-28T15:45:07Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Artificiality Transportation as SS Metrics 009.jpg
6
145
149
2018-10-28T15:45:45Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS HSS01.png
6
146
150
2018-10-28T15:48:09Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS HSS01a.png
6
147
151
2018-10-28T15:50:51Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Second Life User and Econ Stats Q12008 043.jpg
6
148
152
2018-10-28T15:51:42Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:DATEOP datepick Exampl2.gif
6
149
153
2018-10-28T16:01:05Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC ResourceCreate2.png
6
150
154
2018-10-28T16:01:39Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:1 ACFESureveyManagerLaunchPage.jpg
6
151
155
2018-10-28T16:02:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP radioHNB Exampl1.gif
6
152
156
2018-10-28T16:03:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPR Components.png
6
153
157
2018-10-28T16:03:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC Login4.png
6
154
158
2018-10-28T16:04:12Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SMWS SetUp7.png
6
155
159
2018-10-28T16:04:38Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP2.png
6
156
160
2018-10-28T16:05:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP2a.png
6
157
161
2018-10-28T16:07:35Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:HttpSrvr Client LoginErrorMsg1.png
6
158
162
2018-10-28T16:09:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MnA WhyMerge.jpg
6
159
163
2018-10-28T16:10:06Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS Service1.png
6
160
164
2018-10-28T16:10:28Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Ultima Online 028.jpg
6
161
165
2018-10-28T16:11:16Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMv6 NetworkDiag.png
6
162
166
2018-10-28T16:11:50Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS On W2003 SMTP.png
6
163
167
2018-10-28T16:12:33Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS GP9.png
6
164
168
2018-10-28T16:14:14Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr7.png
6
165
169
2018-10-28T16:15:17Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:The Magic Project 006.jpg
6
166
170
2018-10-28T16:16:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate6.png
6
167
171
2018-10-28T16:17:05Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt RestoreDB2.png
6
168
172
2018-10-28T16:17:37Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate2.png
6
169
173
2018-10-28T16:18:13Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserSettings1.png
6
170
174
2018-10-28T16:18:38Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC Login2.png
6
171
175
2018-10-28T16:19:22Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:JRR Tolkein Book Covers 014.jpg
6
172
176
2018-10-29T02:32:38Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SMWS SetUp8.png
6
173
177
2018-10-29T02:33:02Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:HUD See Through and Immersive 020.jpg
6
174
178
2018-10-29T02:33:39Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt AssignRMURights0.png
6
175
179
2018-10-29T02:34:13Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:3 BPCSurveyManagerWCSurveyListScreenPNA.jpg
6
177
181
2018-10-29T04:21:44Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BLOOM Changes in Cognitive Domain 047.jpg
6
178
182
2018-10-29T04:22:46Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr6.png
6
179
183
2018-10-29T04:23:20Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Morton Heilig Sensorama Simulator 018.jpg
6
180
184
2018-10-29T04:23:57Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC Login5.png
6
181
185
2018-10-29T04:24:26Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP admlistMULTI Exampl1.gif
6
182
186
2018-10-29T04:24:51Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Kish Virtual Geography 004.jpg
6
183
187
2018-10-29T04:27:14Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC IESetup3.png
6
184
188
2018-10-29T04:28:05Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate10.png
6
185
189
2018-10-29T04:28:28Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt RestoreDB4.png
6
186
190
2018-10-29T04:28:52Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan NewDB1.png
6
187
191
2018-10-29T04:29:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TutSM2 SurveyExampl1.gif
6
188
192
2018-10-29T04:30:13Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:DataFlow.png
6
189
193
2018-10-29T04:30:48Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RM Config ActiveDirectory Step2.gif
6
190
194
2018-10-29T04:32:23Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Filelist Exampl1.gif
6
191
195
2018-10-29T04:33:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:PLATO Online Course Count 1984 034.jpg
6
192
196
2018-10-29T04:34:09Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan NewLogin1.png
6
193
197
2018-10-29T04:35:19Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Instdlist Exampl1.gif
6
194
198
2018-10-29T04:35:55Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Early Computer Games 1958 To 1974 021.jpg
6
195
199
2018-10-29T04:36:45Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP label Exampl1.gif
6
196
200
2018-10-29T04:37:12Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLEnt NewLogin.png
6
197
201
2018-10-29T04:37:44Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Entity.png
6
198
202
2018-10-29T04:38:45Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MS SMTP CFG1.png
6
199
203
2018-10-29T04:39:26Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan RestoreDV4.png
6
200
204
2018-10-29T04:41:07Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Meridian 59 First 3D Online Virtual World 027.jpg
6
201
205
2018-10-29T04:49:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate11.png
6
202
206
2018-10-29T04:50:01Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:KPMG99 M&A ImpactOnKPI.jpg
6
203
207
2018-10-29T04:50:34Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Recent VR Literature Covers 015.jpg
6
204
208
2018-10-29T04:50:56Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC Login3.png
6
205
209
2018-10-29T04:51:16Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS Service2.png
6
206
210
2018-10-29T04:51:39Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TutSM1 SurveyExampl1.gif
6
207
211
2018-10-29T04:52:35Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserSettings2.png
6
208
212
2018-10-29T04:53:02Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Bishopj.png
6
209
213
2018-10-29T04:53:51Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Presence Copresence Connected-Presence 010.jpg
6
210
214
2018-10-29T04:54:26Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD6.png
6
211
215
2018-10-29T04:54:46Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan NewLogin2.png
6
212
216
2018-10-29T04:55:24Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate5.png
6
213
217
2018-10-29T04:56:03Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Second Life Training and Learning 044.jpg
6
214
218
2018-10-29T04:56:21Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan RestoreDV3.png
6
215
219
2018-10-29T04:56:49Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPCTitle.jpg
6
216
220
2018-10-29T04:57:12Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SecondLife Digital Avatars 002.jpg
6
217
221
2018-10-29T04:57:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPCSurveyManager DTCV7 SurveyEdit Screen.jpg
6
218
222
2018-10-29T04:58:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD2.png
6
219
223
2018-10-29T04:59:08Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BPC RiskManager V6261 Main Screen MidSize.jpg
6
220
224
2018-10-29T04:59:36Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP numeric Exampl1.gif
6
221
225
2018-10-29T05:08:06Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:BLOOM TABLE Revised Taxonomy 048.jpg
6
222
226
2018-10-29T05:11:24Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Checkbox Exampl1.gif
6
223
227
2018-10-29T05:11:50Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:HUD The Sword of Damocles 019.jpg
6
224
228
2018-10-29T05:12:09Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MS SMTP CFG3.png
6
225
229
2018-10-29T05:12:40Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr12.png
6
226
230
2018-10-29T05:13:07Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC Login1.png
6
227
231
2018-10-29T05:13:30Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan RestoreDV5.png
6
228
232
2018-10-29T05:13:52Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Cognitive Process Dimension Processes 049.jpg
6
229
233
2018-10-29T05:14:16Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD4.png
6
230
234
2018-10-29T05:14:37Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:VW Habitat and Worldsaway 030.jpg
6
231
235
2018-10-29T05:14:56Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan AssignUserRights1.png
6
232
236
2018-10-29T05:15:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:DataStore.png
6
233
237
2018-10-29T05:16:03Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MS SMTP CFG4.png
6
234
238
2018-10-29T05:16:22Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RATOP text Exampl1.gif
6
235
239
2018-10-29T05:16:42Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr4.png
6
236
240
2018-10-29T05:17:03Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RATOP numeric Exampl1.gif
6
237
241
2018-10-29T05:17:30Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr2.png
6
238
242
2018-10-29T05:17:49Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS SM6.png
6
239
243
2018-10-29T05:18:12Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Visual Course Structure in Virtual Buildings 040.jpg
6
240
244
2018-10-29T06:27:54Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Jbutton Exampl1.gif
6
241
245
2018-10-29T06:28:37Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Jsmlabel Exampl1.gif
6
242
246
2018-10-29T06:28:58Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IARepWriteCavemen.png
6
243
247
2018-10-29T06:29:23Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr8.png
6
244
248
2018-10-29T06:30:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Jsmbutton Exampl1.gif
6
245
249
2018-10-29T06:31:10Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Cbxuser example1.gif
6
246
250
2018-10-29T06:31:55Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IARepWriteSectionStructure.png
6
247
251
2018-10-29T06:32:14Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC ResourceCreate3.png
6
248
252
2018-10-29T06:32:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP radioV Exampl1.gif
6
249
253
2018-10-29T06:33:00Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP password Exampl1.gif
6
250
254
2018-10-29T06:33:21Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IAGoalAchievement.png
6
251
255
2018-10-29T06:33:47Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC ResourceCreate1.png
6
252
256
2018-10-29T06:34:10Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SMWS SetUp5.png
6
253
257
2018-10-29T06:34:29Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP dtextarea Exampl1.gif
6
254
258
2018-10-29T06:36:02Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:AddRemWinComp.png
6
255
259
2018-10-29T06:36:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IIS OnVista2.png
6
256
260
2018-10-29T06:36:46Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMC UserCreate12.png
6
257
261
2018-10-29T06:37:16Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS SM1.png
6
258
262
2018-10-29T06:37:37Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP checkbox Exampl.gif
6
259
263
2018-10-29T06:38:06Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:IAAssertionMatrix.png
6
260
264
2018-10-29T06:38:43Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS SM3.png
6
261
265
2018-10-29T06:39:03Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:MS SMTP CFG3a.png
6
262
266
2018-10-29T06:39:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SELOP droplist Exampl1.gif
6
263
267
2018-10-29T06:39:46Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:SQLStudMan AssignUserRights0.png
6
264
268
2018-10-29T06:40:06Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD9.png
6
265
269
2018-10-29T06:40:32Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Second Life 042.jpg
6
266
270
2018-10-29T06:40:52Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Template:BackLinks
10
267
271
2018-10-29T06:49:24Z
Bishopj
1
Created page with "<section begin=BackLinks /> =BackLinks= {{#dpl: linksto={{FULLPAGENAME}} }} <section end=BackLinks />"
wikitext
text/x-wiki
<section begin=BackLinks />
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
<section end=BackLinks />
8eff6ab712292a9972eb320dbdad3d855bfe979b
Template:BackLinksCategoryHead
10
268
272
2018-10-29T08:19:09Z
Bishopj
1
Created page with "<section begin=BackLinks /> =BackLinks= * [[:category:{{{CT}}}|{{{CN}}}]] {{#dpl:linksto={{FULLPAGENAME}}|notcategory={{{CT}}}}} <section end=BackLinks />"
wikitext
text/x-wiki
<section begin=BackLinks />
=BackLinks=
* [[:category:{{{CT}}}|{{{CN}}}]]
{{#dpl:linksto={{FULLPAGENAME}}|notcategory={{{CT}}}}}
<section end=BackLinks />
65d7e72b29bb7f2660235d85acd979bfa88e1f06
Template:BackLinksNoAnsw
10
269
273
2018-10-29T08:29:14Z
Bishopj
1
Created page with "<section begin=BackLinks /> =BackLinks= * [[:category:RiskManager FAQ|The frequently asked Questions Category]] {{#dpl:linksto={{FULLPAGENAME}}|notcategory=RiskManager FA..."
wikitext
text/x-wiki
<section begin=BackLinks />
=BackLinks=
* [[:category:RiskManager FAQ|The frequently asked Questions Category]]
{{#dpl:linksto={{FULLPAGENAME}}|notcategory=RiskManager FAQ}}
<section end=BackLinks />
a89b24751a504aa46e032ae50503dc546b977603
Category:BPC RiskManager
14
270
274
2018-10-29T08:48:37Z
Bishopj
1
Created page with "Articles relating to BPC RiskManager and its associated support systems. These articles include position papers, user manuals, technical manuals, and various other resources."
wikitext
text/x-wiki
Articles relating to BPC RiskManager and its associated support systems. These articles include position papers, user manuals, technical manuals, and various other resources.
29220e37af2f5d21bd2bbcf2062ff8a0bf7db45f
BPC RiskManager Software Suite
0
3
276
5
2018-10-29T11:30:49Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=BPC RiskManager Software Suite - Risk, Compliance and Certification=
The BPC RiskManager Software suite is an enterprise-grade risk management and governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release, with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) followed in 2008. The latest version was released in 2011. The system is updated continuously throughout the year, and clients are encouraged to actively participate in setting the development direction.
The Enrima Edition of BPC RiskManager combines single-user and multi-user risk management, compliance management, financial statements certification, insurance, survey, and incidents & hazards systems in one application. You can manage multiple organisations and view governance issues simultaneously as risks, compliance obligations (legislation, processes and procedures) and compliance topics. It also manages email-based reminders for a wide variety of user expectations internally.
BPC RiskManager is available in 2 product streams (both of which can be configured as single-user desktop or massively multi-user networked solutions). The two product streams are:
{|width=100%
|-
|
* BPC RiskManager V5 (Express)
|[[image:BPCRiskManagerExpressV5.jpg]]
|-
|
* BPC RiskManager V6 (Enrima Edition)
|[[image:BPC_RiskManager_V6261_Main_Screen.jpg|600px]]
|}
=Client Base=
BPC RiskManager clients are headquartered in Australia, Canada, the United Kingdom and the United States of America. Global clients, of course, have offices in many other countries. [http://www.bishopphillips.com Bishop Phillips Consulting] has local offices in both Australia and North America.
The system is used extensively in the education sector, with a very substantial presence in universities in both Australia and Canada and in commercial education providers and colleges in the USA. Other significant client groups include insurance providers (both primary insurers and reinsurers), central government agencies (such as federal and state/province departments and local government), and utilities such as postal, electrical and water services.
BPC RiskManager implements and substantially extends the risk management standards AS/NZS 4360:2004 "Risk Management" and ISO 31000, and complies with ISO/IEC Guide 73 "Risk Management – Vocabulary".
RiskManager is not restricted to a single interpretation of the risk standards. As a consequence of its long market history, BPC RiskManager implements a large number of divergent risk management methodologies. Any combination of one to three assessment groups, each containing ratings for likelihood, consequence and control, is possible. For example, some clients use a risk management methodology built on risk budgets, with three rating groups "Inherent, Residual and Target", where inherent ratings shift with external factors, target shifts with the corporate risk appetite (i.e. a risk budget), and residual floats according to assessment ratings.
Any number of self-assessments can be maintained in each group, together with a separate family of assessments and remediations created by audit/expert reviewers that coexists with management's risk assessments.
Whether your preferred risk methodology uses quantification (quantitative risk analysis) or qualification (qualitative risk analysis), BPC RiskManager directly supports the approach on a per-assessment basis. Terminology (including field names, purposes and screen captions) is fully customisable, so the system can directly implement the corporate risk methodology.
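The multi-group rating model described above can be sketched in a few lines. This is an illustrative model only; the field names, 1-5 ordinal scales and the scoring formula are assumptions for the sketch, not BPC RiskManager's actual schema or calculation.

```python
from dataclasses import dataclass

@dataclass
class RatingGroup:
    # 1-5 ordinal scales, a common qualitative convention (assumed, not BPC's schema)
    likelihood: int
    consequence: int
    control: int  # 1 = weak controls, 5 = strong controls

    def score(self) -> float:
        # Illustrative formula: raw exposure discounted by control strength.
        return self.likelihood * self.consequence / self.control

# Three assessment groups, as in the "risk budget" methodology described above.
inherent = RatingGroup(likelihood=4, consequence=5, control=1)   # shifts with external factors
residual = RatingGroup(likelihood=3, consequence=4, control=3)   # floats with assessment ratings
target   = RatingGroup(likelihood=2, consequence=3, control=4)   # corporate risk appetite

# A "risk budget" comparison: is the residual position within appetite?
within_budget = residual.score() <= target.score()
```

A real deployment would customise both the terminology and the scoring on a per-assessment basis, which is what the configurable field names and captions above allow.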
=Get a Fully Functional Evaluation Copy of BPC RiskManager for FREE=
You can get a free no-obligation fully functional copy of BPC RiskManager (Enrima Edition) simply by completing the request form here:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php I want to evaluate BPC RiskManager without obligation for free, please.]
It will work for 60 days, and if you need more time you can contact us and request a longer evaluation. There are no limitations in the evaluation version, and we will even give you support for free while you get it running. It is fully self-installing and will open your first risk database when the installer finishes.
If it isn't right for you, you can just uninstall after the 60 days with no further obligation to us.
=Knowledge Base=
*[[BPC RiskManager V6 Enterprise (Enrima Edition)]]
** [[BPC RiskManager V6 Enterprise (Enrima Edition)| BPC RiskManager Features]]
** [[BPC RiskManager V6.2 Network Architecture]]
** [[RM625ENT Installation Instructions|BPC RiskManager V6.2.5 Installation Instructions]]
** [[BPC RiskManager Frequently Asked Questions|BPC RiskManager - Frequently Asked Questions]]
** [[BPC RiskManager Quick Help With Common Tasks]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
** [[BPC RiskManager V6 on 64 bit Windows]]
*[[BPC SurveyManager - Overview]]
** [[BPC Surveymanager - Key Features]]
** [[BPC SurveyManager - Introduction]]
** [[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
** [[BPC SurveyManager - Creating Surveys - The Page Script]]
** [[BPC SurveyManager - Questions and Input Controls]]
** [[BPC SurveyManager - Creating Surveys - Properties]]
** [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
** [[BPC SurveyManager - The Built In Reports]]
** [[BPC SurveyManager - Advanced Database Configuration Settings]]
** [[BPC SurveyManager - Client Overview]]
** [[BPC SurveyManager - Tutorials - Survey Layouts]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
<noinclude>
[[Category:Featured Article]]
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
dc7ccfc5f7d790cb2dd0c17b50cdde25c14ee35b
BPC RiskManager V6.2 Network Architecture
0
4
278
7
2018-10-29T11:30:49Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
[[Image:BPCRM NetDiag.png]]
BPC RiskManager is an N-Tier application. The primary layers are:
* Database Server layer
* Application Server layer
* Client layer
The core application set does not require a web server but certain optional capabilities do.
You will require a web server if you will be:
* Using the browser plugin client component.
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
While the browser plugin client component can be served by any brand of web server, you will require IIS 5+ if you plan to be:
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
Both of these capabilities use ISAPI libraries running on an IIS server. If you will be using the HTTPS communication protocol, you will also need an SSL certificate installed on the web server.
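Once the SSL certificate is installed, the HTTPS path between the client layer and the web server can be verified with a short script like the sketch below. The host name is a placeholder assumption; substitute your own web server address. This only checks that a verifiable certificate is presented, not that the ISAPI libraries themselves are configured.

```python
import socket
import ssl

def https_ready(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if the server presents an SSL certificate that passes
    chain and host-name verification; False on any connection/TLS failure."""
    ctx = ssl.create_default_context()  # enables certificate and hostname checks
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

# Example (placeholder host): https_ready("riskmanager.example.com")
```

A self-signed certificate will fail this check even though IIS will still serve HTTPS with it; browsers and the plugin client may behave similarly, which is why a certificate from a trusted authority is the safer choice.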
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
96accad095e3f378d468445d6bc5231ced78bf76
BPC RiskManager Frequently Asked Questions
0
5
280
9
2018-10-29T11:30:49Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
{|width="100%"
|width="60%" VALIGN="Top" |
# [[How do I get a copy of BPC RiskManager V6.2.5?]]
# [[Would it be possible to get a copy of the BPC RiskManager V6 installation guide?]]
# [[Is there a feature listing for the BPC RiskManager windows client and the browser client?]] We are looking at the possibility of using a mixed client environment based on user-specific needs and locations.
# [[When are multiple BPC RIskManager server licenses required?|When are multiple BPC RiskManager server licenses required?]] We are looking to have RM implemented across a group of companies. They will all be using the same instance with the same fields and definitions, as the subject matter is the same. Can we use a single server license or will we require multiple server licenses?
# [[Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?]]
# [[Does your license include the cost of MS SQL Server ?]]
# [[I just purchased BPC RiskManager. Will you be sending the install disks, and when?]]
# [[What will need to be arranged prior to the installing BPC RiskManager?]]
# [[Does the RiskManager client application work with FireFox browsers?]]
# [[In what programming language is BPC RiskManager written?]]
# [[Does the RiskManager plug-in itself have a certificate like a java applet does?]]
# [[For support, what type of support is available (i.e.: email, phone, onsite, etc...)?]]
# [[What is the best way to get support?]]
# [[How do I arrange installation support and what is the timeline?]]
# [[What support packages are available and at what cost?]]
# [[Is there a cost associated with telephone support (i.e.: cost per call or issue)?]]
# [[How do I get custom features added, or request new features for BPC RiskManager?]]
# [[Is there a User Group Forum?]]
# [[What type of documentation, technical and user is available for BPC RiskManager?]]
# [[How does one decide the optimum BPC RiskManager configuration?]]
# [[Is BPC RiskManager a Client-Server application?]]
# [[What is the difference between the browser plugin and the windows executable RiskManager client?]]
# [[Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?]]
# [[Database support: Which database choice will give us the best level of support?]]
# [[Security: What is the most secure architecture for BPC RiskManager?]]
# [[What is the best client version - the browser or non browser Risk Manager client?]]
# [[What admin account rights are required to setup a browser plug-in?]]
# [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7|How do I configure IE for the RiskManager browser plugin?]]
# [[BPC RiskManager Server - After installing in production or adding an application server|We just ported our enterprise system to a new server and I can't login. What do I do now?]]
# [[Steps For Migrating RiskManager V6.x from Test To Production|How do I port BPC RiskManager from test (or dev) to production?]]
# [[BPC RiskManager V6 on 64 bit Windows|How do I install BPC RiskManager onto a computer running a 64bit Windows OS?]]
| VALIGN="Top"|
<noinclude>
{|align="right" width="100%" cellpadding="10px"
|- style="background-color:#FFEBCD; " width="100%"
|'''A Frequently Asked Question is...'''
|-
|<div class="didyouknow2" STYLE="height: 600px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=3000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;"></div>
|}
</noinclude>
|}
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:Bishop Phillips Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinksCategoryHead|CT=RiskManager FAQ|CN=The frequently asked Questions Category}}
</noinclude>
25cfdeccbd4a292afa2715e0cff010008b205d54
Steps For Migrating RiskManager V6.x from Test To Production
0
271
282
281
2018-10-29T11:34:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Introduction==
There is a very detailed installation process in [[RM625ENT Installation Instructions]].
However, that guide assumes an essentially manual installation process, starting from a bare-metal server, and includes installation of the required OS components. If you use the automatic installer (recommended) the process is much simpler. Production generally differs from test or dev environments, however, as the components may be more widely distributed and you are generally starting with an at least partially configured server (unless you are dedicating a production application server instance to RiskManager).
Different sites handle production differently: some reinstall completely, others duplicate test into production; some do everything manually for production while using the automated system for dev, and so on.
We recommend a reinstallation, partly because it is the least error-prone approach, and possibly the faster one.
==If You Have An Existing BPC RiskManager Production Installation==
If you have an existing RM installation in production, you can simply copy the changed files onto the server (replacing the existing files of the same name), start the RiskManagerData server once, then close it down, and you are done; the auto-installer is not actually necessary in this case. Alternatively, you can run the uninstaller in production to remove the previous installation, and then use the new installer to reinstall. You will NOT lose any of your configuration settings, so it is completely safe to do this. That will essentially make your existing system a raw machine EXCEPT that the connection settings will already be in place.
If this is your situation, the steps below are still correct, BUT you should NOT let the installer create the database(s) for you, as you already have the connections present. Just say no to this question when it comes up during installation.
==Performing the Migration To Production==
Read the preceding section if your production server has a pre-existing RiskManager V6 installation. If you are migrating from BPC RiskManager Express or RiskMan, you DO NOT NEED TO UNINSTALL; BPC RiskManager V6.x will ignore the Express settings and installation.
Assuming we are starting with a W2003+ server that does not have a pre-existing RM installation, and that your SQL Server is on a separate computer:
===MAKE DECISIONS BEFORE INSTALLING:===
<ol>
<li> If using BPC support during installation, email us to arrange a time for our call to assist you with the install.
<li> Decide whether you are going to enable SurveyManager as part of the installation, or later. (Ask the business)
<li> Decide how many databases will be set up in production (Can be increased later if desired, but easiest if known prior to installation as the installer does all the work for you).
<li> If you want to make available in production an existing database that was set up in dev/test, and you will NOT be using the same physical database as dev/test, decide whether you will be using the RM installer to restore a backup of the established database into production, or whether you will restore the backup separately (after installation completes). You should consider:
* If you have already restored the database into production, you probably do not want the installer to attempt to create it.
* If the target database is either the "DEFAULT" connection (so named) of the application server or a uniquely named database connection, the database does not already exist in production, and the database server is essentially a single-server solution with data and log files on the same machine, then either decision is appropriate. It is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database server is a complex configuration with log files and data files separated across multiple NAS devices / servers etc., the installer will probably not be able to determine the configuration correctly, as the information is not always available to it in the remote registry (although it will attempt to do so). Restoring from backup on a remote machine may therefore not succeed. You are probably best to do this manually prior to installing the server; or, if the database does not yet exist on the target server, the simplest approach is to let the installer create an empty database of the same name and then restore your backup over the top of the newly created database after installation completes. If you choose the former approach, you will need to do some extra steps (instructions are provided during installation below) so the client test will validate your install. If you do not create any databases during installation (or have a pre-existing database to which to connect) you will not be able to validate the connection during installation. We strongly recommend that you at least let the installer create its default database and test the connection to that. You can always discard it later.
<li> Decide whether you will be using network compression comms or the default raw comms (see the instructions below for the implications of this decision – raw is simplest) and if both, which will be the default. (Can be enabled later if desired)
<li> Decide whether you will be enabling HTTP/HTTPS comms access as well. (Can be enabled later if desired)
<li> Decide whether you will be using the desktop client and/or the browser plugin. (We recommend the desktop client – both have the same functionality, but the plugin behaviour varies a little across different Win OS’s and IE versions due to MS security changes, so if you have mixed desktop OS’s not every desktop will behave exactly the same. If you want to know the implications or need this explained further, ask us or look on the riskwiki.)
<li> Verify the installation site (eg the remote desktop on which the installer will be working) has phone access (preferably hands free), and that you know the telephone number for the phone, and, ideally, outbound internet (IE/Firefox/etc) access so you can look at the riskwiki if needed.
<li> You should do steps 1 – 9 below prior to the BPC support call.
</ol>
===PREPARE THE SITE BEFORE INSTALLING:===
<ol>
<li> Verify the server has the following infrastructure on it:
* Functioning network connection to the rest of the network with port 211 (and ideally port 212 as well) and SQL Server TCP ports available – eg 1433.
* Functioning installation of IIS 6+
<li> Verify the server either has on it or available to it:
* Functioning SQL Server (any version) configured in Mixed mode authentication or SQL Authentication mode
* Functioning SMTP server that will accept relays from this machine (this can always be configured later)
<li> Verify that the person performing the installation knows:
* Server local system administrator user ID / PWD
* SQL Server user ID SA / PWD (If SA is not available you will need to speak to us again)
* The name of the SQL Server and the instance (if not using the default instance)
* The Administrator account user ID (usually Administrator) and PWD for the RiskManagement system. This is database specific, and more important when restoring than installing. Not knowing does not stop you installing, but may prevent you from connecting via a client when the test is run at the end of the installation. Otherwise, any RM Administrator account is fine to use. It is auto-created on first connection, so it can often be the user name of the person who does the installation. Ideally you settle on a common user name, and always use that across all databases and remember the password. Access by the root administrator account can be blocked by the RM system administrator after installation of a fresh database, so for restored databases, it may be that this account’s access is blocked anyway.
* The http addressable name of the application server as it would be typed into browser address bar by a remote LAN client (eg: a human operating from her office)
* The fully qualified domain name of the application server as it would be entered in the windows network browser of a remote user if they were able to browse to a folder on the application server (eg. the human again)
(NOTE: Part of the installation process is to create special-purpose limited-rights SQL accounts; the installer either creates these for you, or expects you to know the passwords. I am assuming they do not exist yet on the target SQL server. You will need to provide a password during the installation for the “riskmanuser” SQL server account. The installer will create this account if it does not already exist, so you need to have decided what the password will be. I recommend using the same password as that used for dev. This is a limited-rights ID. The other accounts will be set to use the same password. They can be changed manually later if desired.)
<li> If transferring the dev database into production:
* Prepare a backup of the dev database.
* Ensure the version of SQL Server in production is the same as, or higher than, that in dev from where the backup comes (eg. you can NOT restore an SQL 2008 backup into an SQL 2005 server, but you can do the reverse)
<li> Confirm with the RM administrator how many databases they want in production. We recommend a minimum of two databases: the default auto-named database and another spare / empty database for future use. The auto-named database will be called RiskManDB625 and its connection will be named “DEFAULT”; the other database can have whatever connection name you choose. The connection name (and in fact the database name) can be changed later. The connection name is the name the user sees as the database name. The connection DEFAULT does not need to be entered at all by the user – so this is ideally the main database in use.
<li> Copy the RM Installer to a directory of the application server that will be accessible to the person performing the installation.
<li> Copy the backup file to a directory on the SQL server that the SQL server will be able to access (read from) during a restore. We recommend that this directory be the default backup directory for the targeted instance of the SQL server, as that is where it will read from naturally (and if you use the installer to do the restore, the SQL server must be able to read the file – so it needs to be readable by the SQL server under the SA account).
<li> Verify that the place from which you will be connecting to the application server (ie the remote client) has a telephone, preferably able to work in hands-free mode (so we can talk you through the process by phone).
<li> Locate your BPC RiskManager registration code so you can enter it when asked. You will not need this until the client connects at the end of the installation process. If this is a new server and new database you will have up to 60 days to enter it.
<li> If you opted during the decision stage above to back up an existing RM database from Test and restore it into Production, you should do that now. (Or schedule it now to be done immediately before the installation commences.) Make sure you know the database name on the server.
<li> Send BPC an email or phone BPC to arrange a time for support to contact you – preferably as long BEFORE you commence installing as possible. We will confirm the booking and contact you at that time. If you just wish us to be available should you need it during the installation, we will make sure we are able to take your call at that time, and email you a direct number to use should you need it.
</ol>
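Two of the site-preparation checks above lend themselves to a small pre-flight script: the SQL Server backup/version rule (a backup restores only into the same or a newer version), and reachability of the relevant TCP ports (eg. SQL Server on 1433, and ports 211/212 free for RM comms). This is a generic sketch under those assumptions, not a BPC utility; the function names are ours.

```python
import socket

def can_restore_backup(source_version, target_version):
    """A backup restores only into the same or a newer SQL Server version,
    eg. an SQL 2005 (9.0) backup into SQL 2008 (10.0), never the reverse."""
    return tuple(target_version) >= tuple(source_version)

def port_reachable(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds (eg. SQL Server on 1433)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def port_free(port, host="0.0.0.0"):
    """True if the application server could bind this port itself (eg. 211/212)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

Run something like this from the application server before the installation call, so any missing firewall rules or version mismatches surface early.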
===INSTALLING:===
<ol>
<li> If using a remote client to connect to the application server and run the installation process (eg mstsc), verify that the remote client is set to operate at 96 DPI, not 120 DPI (there is a bug in the installer display routine that hides some buttons at the 120 DPI resolution). If connecting via mstsc, enter mstsc /console as the connection command in Start/Run from the remote computer so that you are operating in console mode. This is important so that you can see the system tray icons.
<li> (If using BPC support, await the call first). Run the installer in “Complete Mode”, read the onscreen instructions and answer all the questions.
* Always create the default database during the initial installation.
* If restoring a backed-up dev database, the installer can do this AFTER it creates the databases, or you can do it manually after the entire process. For some complex SQL setups the manual route may be required: while the installer attempts to locate the correct places for database restoration from the SQL Server registry, this is not 100% reliable due to the various ways this information is stored in the registry across different versions and instances of SQL Server. Let the installer create the blank database for you, so that all the connections are made, and then simply restore your backed-up database over the default database after the installation. If the SQL server is on the application server itself, there is a much higher probability of complete success in installer-based restoration.
<li> The installer will auto-register the components and start the BPC RiskManager DataServer console. If you are NOT connecting to an existing database (ie you let the installer create new databases), you can go on to the next step - just select "End Process" on the console window. Otherwise check the dot points below:
* If you want to connect to an existing database that was NOT created or restored during installation (ie. a database that exists but that is not yet known to the application server on THIS computer) AND you already have the database(s) set up on the production database server, you will need to configure the connections when the application server console window appears (ie. NOW): [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<li> Next, the installer will start the client locally for a test connection at the end to verify access to the default (or other) database. If you can connect and see the main screen after login you have successfully installed.
NOTE: The installer will set the server up in single-user edition and auto-administration access mode. This does not prevent remote access but will (usually) need to be changed to your correct access settings for production enterprise deployment. See the section "After Installation" below.
<li> Switch the server into web edition - click [[BPC RiskManager - General Configuration|on this link for instructions]].
<li> Set up client access. Either (or all):
* Copy the desktop client installer (there are two to choose from depending on whether you prefer single exe or MSI installers) from the /program files/bishopphillips/RiskManagerVxxx to a network share that will be accessible to users
* Copy the already installed client from the /program files/bishopphillips/RiskManagerVxxx/win32client directory to a separate computer/folder and make the folder shareable, if you want people to simply run the client across the network from a remote folder. The client does not actually need to be installed on a desktop to work, but installing it provides shortcuts / menus and enables the use of the network compression/encryption library in V6.2.5.x.
* Install the client into a citrix (or other remote desktop) image.
* Distribute the browser plugin ActiveX client to the Risk Manager web site.
<li> Go to a typical remote LAN computer, attempt to install/use the client set up in the previous step to access the server using the same account used previously, and verify remote connectivity to the application server.
<li> If intending to use streaming network compression/encryption, follow the instructions in the riskwiki for enabling this. Remember you will need to advise all users that the access settings differ from the client defaults (a box has to be ticked and possibly a port changed in the login window). If using streaming network comms, we recommend 2 ports be enabled – one for raw comms and one for compressed comms. (Hence the suggestion at the start that you clear 211 and 212 for RM comms.) In reality RM does not care what port is used. By default it is set to expect communications on port 211, but you can set it to use any combination of ports you like. We advise sticking with the recommended ports (obviously). If using streaming compression, you should probably, for simplicity, enable that on port 211 – so clients only need to tick a box to enable it – and set the raw channel to be 212, as the raw channel is only for troubleshooting and backup connections.
Note: enabling compression/encryption will EXCLUDE the option of copying clients as a means of installation, as the compression library is currently a separate lib in V625.x - that will change in a future release.
</ol>
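The port convention recommended in the last step can be written down as a tiny helper, purely for illustration. The function and field names here are ours; only the layout itself (compressed comms on the default port 211, raw comms kept on 212 as a troubleshooting fallback) comes from the recommendation above.

```python
def client_connection_settings(use_compression=True,
                               compressed_port=211, raw_port=212):
    """Return the login-window settings implied by the recommended layout:
    compressed/encrypted comms on the default port 211, with raw comms
    on 212 for troubleshooting and backup connections."""
    if use_compression:
        return {"port": compressed_port, "compression": True}
    return {"port": raw_port, "compression": False}
```

With this layout, most users never change anything except ticking the compression box, while support staff can always fall back to the raw channel on 212 when diagnosing connection problems.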
=After Installation=
Most of these actions require you to use the RiskManager application server configuration console. So firstly, on the application server computer locate the "BPC RiskManager DataServer" in the start menu and start it. When started, the application server appears as an icon in the Windows system tray, typically located in the lower right hand corner of your screen. Double click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program. The configuration console will open. Then:
<ol>
<li> Now proceed to the instructions for completing the security/access set up:
<br>
<br>
* [[Security Configuration - Update Installation and Reset]]
<br>
<br>
<li> If you have additional databases to connect to RiskManager that you did not connect during installation, you should do that now: [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<br>
<br>
<li> Depending on which other components you are using (network streaming compression/encryption, email messaging, SurveyManager, browser plugin client, etc.) there may be a few manual steps to complete the installation using the RM Configuration wizard after the installation finishes and tests have been completed. You should generally, in any case, access the IIS server after installation and enable “Unknown ISAPI extensions” for SurveyManager operation, even if the SurveyManager is not being used yet, as it will save you time later when users decide to create a survey. The explanation of how to do this is in the riskwiki instructions below. Now do each of these steps in order (note all are optional - the system will work without any of these configurations, but some things like email will not be available without them):
<br>
<br>
# [[BPC RiskManager - Send Mail Options Configuration]]
# [[BPC RiskManager - Mail Server Connection Properties]]
# [[BPC RiskManager - Logging Configuration (OPTIONAL)]]
# [[BPC RiskManager - Create the Root Administrator]]
# [[BPC RiskManager - Distribution of Client Components]] (Browser plugin ActiveX)
# [[BPC RiskManager - Configure Risk Mail Manager]]
<br>
<br>
<li> If you are using the survey engine, the installer will have set that up on the application server, but there are a couple of things you will need to do. In particular you will have to manually tell IIS to allow unknown "ISAPI extensions", and if you have connected to a pre-existing database (rather than one created during the installation process) you will need to configure it. Also, if your SurveyManager web server will be different from your application server computer (eg a web farm), you will need to do the config step for each database in the RiskManager environment. (There is a special tab to help with the multi-database situation efficiently.)
<br>
<br>
* [[BPC RiskManager - Install The SurveyManager]]
</ol>
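Once IIS has been configured and the SurveyManager steps are done, a plain HTTP probe is a quick way to confirm the web pieces actually respond. This is a generic sketch, not a BPC tool: the URL you probe is a placeholder for whatever address your SurveyManager ISAPI extension is served from.

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def http_status(url, timeout=5.0):
    """Return the HTTP status code for url, or None if unreachable.

    Even an error status (eg. 500) proves IIS routed the request to the
    extension, which is what matters when checking that unknown ISAPI
    extensions have been enabled; None means the request never arrived."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        return err.code
    except URLError:
        return None
```

A `None` result points at networking or IIS bindings; a 404 or 500 points at the site configuration itself.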
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
ba321d524a897f9f6bc8c831a9035e1da24cabf6
286
282
2018-10-29T11:36:01Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Introduction==
There is a very detailed installation process on [[RM625ENT Installation Instructions]]
However, this assumes an essentially manual installation process, essentially starting from a raw iron server, and includes installation of the OS components required. If you use the automatic installer (recommended) the process is much simpler. Production, generally differs from Test or Dev environments, however as the components maybe more widely distributed and you are generally starting with an at least partially configured server (unless you are dedicating a production application server instance to RiskManager).
Different sites do differing things for production, some reinstall completely others duplicate test into production, some do everything manually for production, while using the automated system for Dev, etc.
We recommend a reinstallation - partly because it is the least error prone, and possibly faster.
==If You Have An Existing BPC RiskManager Production Installation==
If you have an existing RM installation in production, you can actually just copy the changed files onto the server (replacing the existing files of the same name) and start the RiskManagerData server once, then close it down, and you are done, so the auto-installer is not actually necessary in this case. Alternativley, you can run the uninstaller in production to remove the previous installation, and then use the new installer to reinstall. You will NOT loose any of your configuration settings - so it is completely safe to do this. That will essentially make you existing system a raw machine EXCEPT that the connection settings will be in place already.
If this is your situation, the steps below are still correct BUT you should NOT let the installer create the database(s) for you - as you already have the connections present. Just say no to this question when it comes up during installation.
==Performing the Migration To Production==
Read the preceding section if your production server has a pre-existing RiskManager V6 installation. If you are migrating from BPC RiskManager Express or RiskMan, you DO NOT NEED TO UNINSTALL, BPC RiskManager V6.x will ignore the Express settings and installation.
Assuming we are starting with a W2003+ server that does not have a pre-existing RM installation, and that your SQL Server is on a separate computer:
===MAKE DECISIONS BEFORE INSTALLING:===
<ol>
<li> If using BPC support during installation email us to arrange a time for our call to assist you install.
<li> Decide whether you are going to enable SurveyManager as part of the installation, or later. (Ask the business)
<li> Decide how many databases will be set up in production (Can be increased later if desired, but easiest if known prior to installation as the installer does all the work for you).
<li> If you want to make an existing database available in production that has been set up in dev/test and you will NOT be using the same physical database as that set up in dev/test, decide whether you will be will be using the RM installer to restore a backup of the established database into production, or whether you will restore the backup separately (after installation completes). You should consider:
* If you have already restored the database into production you probably do not the installer to attempt to create it
* If the target database is the "DEFAULT" connection (so named) of the application server and the database does not already exist in production and the database server is essentially a single server solution with data and log files on the same server then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database is a uniquely named database connection of the application server and the database does not already exist in production and the database server is essentially a single server solution with data and log files on the same server then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database server is a complex configuration with log files and data files separated across multiple NAS/Servers etc, the installer will probably not be able to determine the configuration correctly as the information is not always available to it in the remote registry (although it will attempt to do it correctly). So restoring from backup on a remote machine may not succeed. You are probably best to do this manually prior to installing the server, or if the database does not yet exist on the target server, the simplest approach is to let the installer create an empty database of the same name and then restore your backup over the top of the newly created database after installation completes. If you choose the former approach, you will need to do some extra steps (instructions provided during installation below) so the client test will validate your install. If you do not create any databases during installation (or have a pre-existing database to which to connect) you will not be able to validate connection during installation. We strongly recommend that you at least let let the installer create its default database and test connection to that. You can always discard it later.
<li> Decide whether you will be using network compression comms or the default raw comms (see the instructions below for the implications of this decision – raw is simplest) and if both, which will be the default. (Can be enabled later if desired)
<li> Decide whether you will be enabling HTTP/HTTPS comms access as well. (Can be enabled later if desired)
<li> Decide whether you will be using the desktop client and or the browser plugin (we recommend the desktop client – both have the same functionality, but the plugin behaviour varies a little across different Win OS’s and IE versions due to MS security changes, so if you have mixed desktop OS’s not every desktop will behave exactly the same. If you want to know the implications or need this explained further, ask us or look on the riskwiki.
<li> Verify the installation site (eg the remote desktop on which the installer will be working) has phone access (preferably hands free), and that you know the telephone number for the phone, and, ideally, outbound internet (IE/Firefox/etc) access so you can look at the riskwiki if needed.
<li> You should do steps 1 – 9 below prior to the BPC support call.
</ol>
===PREPARE THE SITE BEFORE INSTALLING:===
<ol>
<li> Verify server has the following infrastructure on it:
* Functioning network connection to the rest of the network with port 211 (and ideally port 212 as well) and SQL Server TCP ports available – eg 1433.
* Functioning installation of IIS 6+
<li> Verify the server either has on it or available to it:
* Functioning SQL Server (any version) configured in Mixed mode authentication or SQL Authentication mode
* Functioning SMTP server that will accept relays from this machine (this can always be configured later)
<li> Verify that you the person installing knows:
* Server local system administrator user ID / PWD
* SQL Server user id SA / PWD (If SA is not available you will need to speak to me again)
* The name of the SQL Server and the instance (if not using the default instance)
* The Administrator account user ID (usually Administrator) and PWD for the RiskManagement system. This is database specific, and more important when restoring than installing. Not knowing does not stop you installing, but may prevent you from connecting via a client when the test is run at the end of the installation. Otherwise, any RM Administrator account is fine to use. It is auto-created on first connection, so it can often be the user name of the person who does the installation. Ideally you settle on a common user name, and always use that across all databases and remember the password. Access by the root administrator account can be blocked by the RM system administrator after installation of a fresh database, so for restored databases, it may be that this account’s access is blocked anyway.
* The http addressable name of the application server as it would be typed into browser address bar by a remote LAN client (eg: a human operating from her office)
* The fully qualified domain name of the application server as it would be entered in the windows network browser of a remote user if they were able to browse to a folder on the application server (eg. the human again)
(NOTE: Part of the installation process is to create special purpose limited rights SQL accounts, the installer either creates these for you, or expets you to know the passwords. I am assuming they do not exist yet on the target SQL server. You will need to provide a password during the installation for the “riskmanuser” sql server account. The installer will make this account if it is not available already, so you need to have decided what the password will be. I recommend using the same password as that used for dev. This is a limited rights ID. The other accounts will be set to use the same password. They can be changed manually later if desired.).
<li> If transfering the dev database into production:
* Prepare a backup of the dev database.
* Ensure the verison of SQL Server in production is the same as, or higher than, that in dev from where the backup comes (eg. You can NOT restore an SQL 2008 backup into an SQL 2005 server, but you can do the reverse)
<li> Confirm with the RM administrator how many databases they want in production. We recommend a minimum of two databases, the default auto names database, and another spare / empty database for future use. The auto named database will have the connection name “DEFAULT”, the other database can have whatever connection name you choose. The autonamed database will be called RiskManDB625 and the connection will be called “DEFAULT”. The connection name (and in fact the database name) can be changed later. The connection name is the name the user sees as the database name. The caonnection DEFAULT does not need to be entered at all by the user – so this is ideally the main database in use.
<li> Copy the RM Installer to a directory of the application server that will be accessable to the person performing the installation.
<li> Copy the backup file to a directory on the SQL server that the SQL server will be able to access (read from) during a restore. We recommend that that directory is the default backup directory for the targeted instance of the SQL server as that is where it will read from naturally (and if you use the installer to do it, the SQL server must be able to read the file – so it needs to be readable by the SQL server under the SA account).
<li> Verify that the place from which you will be connecting to the application server (ie the remote client) has a telephone preferrably able to run work in hands free mode (so we can talk you through the process by phone).
<li> Locate your BPC RiskManager registration code so you can enter it when asked. You will not need this until the client connects at the end of the installation process. If this is a new server and new database you will have up to 60 days to enter it.
<li> If you opted during the decision stage above to backup an existing RM database from Test and restore it into Production, you should do that now. (Or schedule it now to be done immediatley before the installation commences). Make sure you know the database name on the server.
<li> Send BPC an email or phone BPC to arrange a time for support to contact you – preferrably as long BEFORE you commence installing as possible. We will confirm the booking and contact you at that time. If you just wish us to be available should you need it during support, we will make sure we are able to take your call at that time, and email you a direct number to use should you need it.
</ol>
===INSTALLING:===
<ol>
<li> If using a remote client to connect to the application server and run the installation process (eg mstsc), verify that the remote client is set to operate at 96 DPI not 120 DPI (there is a bug in the installer display routine that hides some buttons at the 120 DPI resolution. If connecting via mstsc, enter mstc /console as the connection command in start/run from the remote computer so that you are operating in console mode. This is important so that you can see the system tray icons.
<li> (If using BPC support, await the call first). Run the installer in “Complete Mode”, read the onscreen instructions and answer all the questions.
* Always create default database, during initial installation
* If restoring a backed up dev database, the installer can do this AFTER the installer creates the databases, or you can do this after the entire process manually. For some complex SQL setups this may be required, as while the installer attempts to locate the correct places for database restoration from the SQL Server registry, this is not 100% reliable due to the various ways this information is stored in the registry across different versions and instances of SQL Server. Let the installer create the blamk database for you, so that all the connections are made, and then you can simply restore over the default database with your backed up database after the installation. If the SQL server is on the application server itself, there is a much higher probability of complete success in installer based restoration.
<li> The installer will auto-register the components and start the BPC RiskManager DataServer console. If you are NOT connecting to an existing database (ie you let the installer create new databases), you can go on to the next step - just select "End Process" on the console window...otherwise check the dot points below:
* If you want to connect to an existing database that was NOT created or restored during installation (ie. a database that exists but is not yet known to the application server on THIS computer) AND you already have the database(s) set up on the production database server, you will need to configure the connections when the application server console window appears (ie. NOW): [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<li> Next, the installer will start the client locally for a test connection to verify access to the default (or other) database. If you can connect and see the main screen after login, you have successfully installed.
NOTE: The installer will set the server up in single-user edition and auto-administration access mode. This does not prevent remote access but will (usually) need to be changed to your correct access settings for production enterprise deployment. See the section "After Installation" below.
<li> Switch the server into web edition - click [[BPC RiskManager - General Configuration|on this link for instructions]].
<li> Set up client access. Either (or all):
* Copy the desktop client installer (there are two to choose from, depending on whether you prefer a single exe or an MSI installer) from /program files/bishopphillips/RiskManagerVxxx to a network share that will be accessible to users.
* Copy the already installed client from the /program files/bishopphillips/RiskManagerVxxx/win32client directory to a separate computer/folder and make the folder shareable, if you want people to simply run the client across the network from a remote folder. The client does not actually need to be installed on a desktop to work, but installing it provides shortcuts/menus and enables the use of the network compression/encryption library in V6.2.5.x.
* Install the client into a citrix (or other remote desktop) image.
* Deploy the browser plugin ActiveX client to the RiskManager web site.
<li> Go to a typical remote LAN computer and attempt to install/use the client set up in the previous step to access the server, using the same account used previously, and verify remote connectivity to the application server.
<li> If intending to use streaming network compression/encryption, follow the instructions in the riskwiki for enabling this. Remember you will need to advise all users that the access settings differ from the client defaults (a box has to be ticked, and possibly a port changed, in the login window). If using streaming network comms, we recommend that 2 ports be enabled – one for raw comms and one for compressed comms (hence the suggestion at the start that you clear 211 and 212 for RM comms). In reality RM does not care what port is used. By default it expects communications on port 211, but you can set it to use any combination of ports you like. We advise sticking with the recommended ports (obviously). If using streaming compression, you should probably, for simplicity, enable that on port 211 – so clients only need to tick a box to enable it – and set the raw channel to be 212, as the raw channel is only for troubleshooting and backup connection.
Note: enabling compression/encryption will EXCLUDE the option of copying clients as a means of installation, as the compression library is currently a separate lib in V625.x - that will change in a future release.
</ol>
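Before moving on, it can help to confirm that the application server is actually reachable on the ports discussed above. The following is a minimal cross-platform sketch, not part of the product; the host name "appserver" is a placeholder, and the 211/212 port pair is taken from the recommendation above - substitute your own values.

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the recommended RiskManager ports: 211 (default, compressed)
# and 212 (raw channel, for troubleshooting). "appserver" is a placeholder.
for port in (211, 212):
    print(port, "open" if port_open("appserver", port) else "closed")
```

If a port reports closed from a remote LAN computer but open on the server itself, check the firewall rules before suspecting the RiskManager configuration.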
=After Installation=
Most of these actions require you to use the RiskManager application server configuration console. So first, on the application server computer, locate the "BPC RiskManager DataServer" in the start menu and start it. When started, the application server appears as an icon in the Windows system tray, typically located in the lower right hand corner of your screen. Double click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program. The configuration console will open, and then:
<ol>
<li> Now proceed to the instructions for completing the security/access set up:
<br>
<br>
* [[Security Configuration - Update Installation and Reset]]
<br>
<br>
<li> If you have additional databases to connect to RiskManager that you did not connect during installation, you should do that now: [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<br>
<br>
<li> Depending on which other components you are using (network streaming compression/encryption, email messaging, SurveyManager, browser plugin client, etc.) there may be a few manual steps to complete the installation using the RM Configuration wizard after the installation finishes and tests have been completed. In any case, you should generally access the IIS server after installation and enable “Unknown ISAPI extensions” for SurveyManager operation, even if the SurveyManager is not being used yet, as it will save you time later when someone decides to create a survey. The explanation of how to do this is in the riskwiki instructions below. Now do each of these steps in order (note all are optional - the system will work without any of these configurations, but some things like email will not be available without them):
<br>
<br>
# [[BPC RiskManager - Send Mail Options Configuration]]
# [[BPC RiskManager - Mail Server Connection Properties]]
# [[BPC RiskManager - Logging Configuration (OPTIONAL)]]
# [[BPC RiskManager - Create the Root Administrator]]
# [[BPC RiskManager - Distribution of Client Components]] (Browser plugin ActiveX)
# [[BPC RiskManager - Configure Risk Mail Manager]]
<br>
<br>
<li> If you are using the survey engine, the installer will have set that up on the application server, but there are a couple of things you will need to do. In particular you will have to manually tell IIS to allow unknown "ISAPI extensions", and if you have connected to a pre-existing database (rather than one created during the installation process) you will need to configure it. Also, if your SurveyManager web server will be different from your application server computer (eg a web farm), you will need to do the config step for each database in the RiskManager environment. (There is a special tab to help with the multi-database situation efficiently.)
<br>
<br>
* [[BPC RiskManager - Install The SurveyManager]]
</ol>
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
ba321d524a897f9f6bc8c831a9035e1da24cabf6
BPC RiskManager V6 on 64 bit Windows
0
272
284
283
2018-10-29T11:34:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction=
BPC RiskManager is a 32 bit application, but it will work just fine on 64 bit Windows. In most scenarios (particularly W2008 and above, and Windows 7), the supplied BPC RiskManager auto installer will correctly install the RiskManager system on a 64bit computer with no manual intervention. The optional SurveyManager library will require some manual steps in IIS, and you should consider the notes lower down this page concerning that. If you are installing on W2003 64bit you may have to do some manual steps.
If you wish to pursue this solution on Windows 2003 for 64 bit or Windows 2008 for 64 bit you will need to do the following things:
*Install on the application server machine the 32bit ADO drivers for the target database (eg the MDAC 2.8 driver set). The RiskManager Installer will automatically check for these and install them for you, so you can just run the installer for this step if you wish. For standard MS databases these should already be present, but you may need to download the appropriate 32 bit MDAC driver set from Microsoft. (A 64bit DB server will still require a 32bit driver for BPC RM to connect to it, but these should already be present.)
*Install BPC RiskManager as you would on a 32 bit operating system, accepting the defaults. The installer will automatically put the 32bit components in the x86 directory as required.
*Run the 32bit SocketServer, BPC RiskManager, BPC RiskManager DataServer and BPC RiskMailManager in 32 bit compatible mode i.e. using WOW (Windows-32 bit on Windows-64 bit) on your server. The auto installer will automatically do this for you, so you should not need to do anything unless you are doing a manual install (ie. copying and pasting the components).
*Move the 32 bit Midas.dll into the 32 bit system directory and register it manually. Again the installer will do this automatically and you should not have to do anything unless you are doing a manual install.
*Enable IIS to run 32 bit ISAPI dll's (if using the web components like surveymanager). This, you will have to do even if using the installer.
*Move the 32 bit ISAPI libraries into the 32 bit ISAPI directory. This you may have to do even if using the installer.
If you are installing on Windows 2008 or above, or Windows 7 or above, the 32 bit and 64 bit MDAC drivers should already be present; if you are using the installer, any missing drivers will be installed automatically.
So, the simple solution to setting up RiskManager on 64bit Windows? Just run the RiskManager Installer and let it do all the work.
=Setting Up the Database drivers on WOW64=
If you are using the installer to install RiskManager, the installer will check for the MDAC (ADO) drivers and install the correct ones if missing.
There are multiple scenarios that you could be facing - all have essentially the same solution:
# Locally installed 64 bit database server: You will need the appropriate 32 bit drivers. These have probably been installed with your database installation, but you may have to download the appropriate 32bit MDAC from Microsoft and install it. MDAC 2.8+ will be ok.
# Externally installed database server on 64 bit OS: You will need the appropriate 32 bit drivers. You may have to download the appropriate 32bit MDAC from Microsoft and install it. MDAC 2.8+ will be ok.
# Externally installed database server on 32 bit OS: You will need the appropriate 32 bit drivers. You may have to download the appropriate 32bit MDAC from Microsoft and install it. MDAC 2.8+ will be ok.
In other words, the key "gotcha" in setting up on a 64 bit OS is making sure you have the 32 bit drivers loaded and registered appropriately. Most of the time you will already have the ADO drivers available, or the RiskManager installer will have installed them for you, and you need do nothing in this step. If, however, you install and cannot connect from the app server to the database, or if the installer fails to make databases when instructed, you probably have something wrong with your ADO drivers. In the early releases of 64bit OSes the existence of the 32bit MDAC drivers was a particular issue; from Windows 2008 onward this does not seem to have been a problem.
The second most common issue we have noted arises with SQLExpress: depending on the options you chose when you installed SQLServer, your SQL instance may be the default instance (ie. no instance name) OR SQLEXPRESS. If you can't connect, check this first, then look to see if the 32 bit drivers are present.
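The default-instance versus SQLEXPRESS distinction above only changes the Data Source part of the ADO connection string. The helper below is an illustrative sketch (not BPC code) showing the two forms, assuming the classic SQLOLEDB provider that ships with MDAC 2.8; the server and database names are placeholders.

```python
def ado_conn_str(server, database, instance=None, trusted=True):
    """Build a simple SQLOLEDB (MDAC/ADO) connection string.
    instance=None targets the default instance; pass "SQLEXPRESS"
    for a default SQL Express install."""
    data_source = server if instance is None else f"{server}\\{instance}"
    parts = [
        "Provider=SQLOLEDB",
        f"Data Source={data_source}",
        f"Initial Catalog={database}",
        "Integrated Security=SSPI" if trusted else "",
    ]
    return ";".join(p for p in parts if p)

# Default instance vs named SQLEXPRESS instance:
print(ado_conn_str("dbhost", "RiskManager"))
print(ado_conn_str("dbhost", "RiskManager", instance="SQLEXPRESS"))
```

If one form fails to connect, try the other before investigating the drivers themselves.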
=Enable the application components to use WOW64=
Windows-32 on Windows-64 (WoW64) is already part of your Windows 64 bit OS. All you have to do to use it is enable the 32 bit applications to run in that mode. If you are running the RiskManager installer, it will do all these steps automatically for you.
*Install on the application server machine the 32bit ADO drivers for the target database (eg the MDAC 2.8 driver set). For standard MS databases these should already be present, but you may need to download the appropriate 32 bit MDAC driver set from Microsoft. (A 64bit DB server will still require a 32bit driver for BPC RM to connect to it.)
*Install the RiskManager application normally ([[RM625ENT Installation Instructions|see the instructions for installing BPC RiskManager]])
*Run the application server components and socketserver component in W2003/W2008 32 bit compatible mode:
**Right-click on the icons after installation and select properties.
**From the properties screen, set the executable compatibility mode to “Windows 2003 sp1”.
**Open a command prompt, navigate to the "Program Files\common files\borland\socketserver" directory and type "socketserver.exe -install" to install the socket server as a service after enabling it to run in 32 bit compatible mode.
=Register the 32 bit Midas.dll on the application server=
If you are running the RiskManager installer you will not have to do anything here.
If you are installing manually (ie. copying and pasting the files), you must register the Midas.dll manually by performing the following steps to enable 32 bit MIDAS.DLL to run on 64-bit Windows:
1. Copy the midas.dll from the system32 directory (if present) or the system files directory of the BPC RiskManager install directory to: %systemdrive%\windows\SysWOW64\
2. Open a command prompt and navigate to the %systemdrive%\windows\SysWOW64 directory.
3. Type the following command:
Regsvr32 midas.dll
4. Press ENTER.
=Enable the IIS server to run 32 bit ISAPI dlls=
Depending on your version of IIS you will need to do different things. The primary issue is to make sure that IIS sees the components as 32bit apps.
Enable the IIS server to run 32 bit ISAPI dlls by performing the following steps:
*To enable IIS 6.0+ to run 32-bit applications on 64-bit Windows
1. Open a command prompt and navigate to the %systemdrive%\Inetpub\AdminScripts directory.
2. Type the following command:
cscript.exe adsutil.vbs set W3SVC/AppPools/Enable32BitAppOnWin64 "true"
3. Press ENTER.
*Copy the SurveyManager dlls generated during configuration to the special 32 bit ISAPI directory on the IIS server:
%windir%\system32\inetsrv
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
23221c9ff91592b379804045b1dfd398f2399395
BPC SurveyManager - Web Client Manual
0
273
290
289
2018-10-29T11:37:51Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=BPC SurveyManager Web Client Manual=
==Introduction==
BPC SurveyManager is comprised of five logical parts:
# BPC Survey Manager Engine - this delivers the surveys and reports and performs a wide range of management functions in a stateless mode. It has no direct user interface, but is best thought of as a library of survey-capable routines and an interpreter of BPC SurveyManager "script" that dynamically constructs web pages on demand and according to the author's design.
# BPC Survey Manager Management WebClient - this application is a stateful, purely browser-based management solution for the BPC SurveyManager system. The web client surfaces the most commonly required capabilities of the SurveyManager system and presents them in a way intended for novice users to create, distribute, publish, manage and report surveys across multiple organisation units and regions.
# BPC SurveyManager Portal - the portal is really a function of the BPC SurveyManager Engine, allowing an organisation to selectively publish surveys to an indefinite number of portals. A portal is a page that responders can use as a fixed entry point to collect and do surveys available to them. It is one of several channels through which a responder can respond to a survey opportunity.
# BPC SurveyManager DeskTop client - the most powerful SurveyManager management client, which enables all the capabilities of the SurveyManager system to be used (including distributed survey databases, and remote and partially connected users). It is only available as an installable client-server application. This component is not distributed with any BPC application, but is supplied on request and acceptance of the conditions for its use.
# BPC SurveyManager N-Tier library - The library supports the N-Tier application-server structure of other BPC applications like BPC RiskManager which use it to provide an advanced survey manager management client directly in the body of an MS Windows installed application.
This manual covers ONLY the BPC SurveyManager Management Web Client application.
==Introduction for ACFE and ACE users==
In order to assist ACFE and ACE users we have included additional notes or alternative instructions where appropriate in a section marked clearly for these groups' attention.
==Contents==
* [[BPC SurveyManager Web Client Manual: Accessing]]
* [[BPC SurveyManager Web Client Manual: Home (ACFE/ACE) - Working with The LSS]]
* [[BPC SurveyManager Web Client Manual: Home - The Survey List Page]]
* [[BPC SurveyManager Web Client Manual: Creating the list of respondents]]
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
33f6591c669e7be31839c9ab7b720df94763a741
BPC SurveyManager Web Client Manual: Accessing
0
274
292
291
2018-10-29T11:37:51Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Section 1. Accessing the BPC Survey Manager=
==1.1 Connect to the ACFE and BPC SurveyManager web site==
===1.1.1 ACFE clients: Accessing the ACFE Learners Satisfaction Survey Website===
ACFE users or ACE providers wishing to access the Learners Satisfaction Survey Management Website should access the ACFE BPC SurveyManager site and log into your ACE organisation using your issued organisation administration credentials. There you will find, added to your survey list, the LSS for the current year. This link will be in the email provided by your regional coordinator.
Connect to the Survey Manager site:
[http://acfe.bishopphillips.com/ ACFE BPC Survey Manager Website]
===1.1.2 Other clients: Accessing Survey Manager Management Website===
All other users, including BPC RiskManager users, should use the link provided by your BPC SurveyManager hosting provider. This link and any site specific instructions will be in the email you received on activation.
BPC RiskManager users have the further option of using your BPC RiskManager client to build and manage surveys. This manual, however, is about the use of the BPC SurveyManager Management website.
==1.2 Starting The BPC Survey Manager Web Client==
===1.2.1 The BPC SurveyManager Launch Page===
There are now two ways to access the BPC Survey Manager system using a web browser:
# The Survey Manager (maintenance and management system)
# The Survey Portal
'''''The first – Survey Manager -''''' (the maintenance and management system) provides facilities for creating, editing, publishing, and maintaining surveys, as well as maintaining users and responders and viewing reports and results, etc.
'''''The second - the Survey Portal -''''' must first be enabled using the Survey Manager system, but once enabled can be used for anonymous survey entry, class/course based survey response recording, etc. without publishing surveys to users first. Where you do not wish to track responses by student ID, are not emailing survey invitations to responders (such as students or staff), are entering surveys from hard-copy responses, or are using class/course based survey collection in (for example) your own computer labs, this new facility may be of interest.
The survey portal requires a password to access, but once accessed it will launch any survey assigned to it. Surveys must have their security flags set appropriately for portal use. This will generally be either "Login Required" or "Allow Anonymous". In the former case a responder who has not yet logged in will be presented with a login screen before proceeding to the survey. In the latter case the survey engine generates random identifiers for responders with each survey access. Surveys entered through the portal using anonymous response will need to be completed in a single sitting, rather than in multiple sittings as can be done with emailed invitations or "Login Required" surveys. The survey portal with anonymous access is not appropriate where a known list of responders is entering responses remotely, as the random IDs mean you will not be able to work out who has responded and who has not.
This manual covers the Survey Manager (maintenance and management system) access method first, as you will need to use it first in any case, if only to enable anonymous portal use.
You can use the traditional emailed (invited) responders and the login OR anonymous portal based methods simultaneously on the one survey. You can mix all the methods across multiple surveys.
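To make the anonymous-access behaviour described above concrete, the engine's per-access random identifiers can be thought of as something like the following sketch. This is illustrative only; the real engine's ID format, and the "ANON" prefix used here, are assumptions.

```python
import secrets

def anonymous_responder_id(prefix="ANON"):
    """Mint a random, non-identifying responder ID for one portal access.
    Illustrative only; the prefix and format are assumptions."""
    return f"{prefix}-{secrets.token_hex(8)}"

# Each anonymous portal access gets a fresh ID, which is why anonymous
# responses cannot be matched back to a known responder list.
print(anonymous_responder_id())
```

This also explains why anonymous surveys must be completed in a single sitting: a returning responder receives a new ID, so there is nothing to resume against.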
==1.3 All Users - Start and Log into BPC Survey Manager Web Client==
===1.3.1 Starting BPC SurveyManager WC===
The BPC Survey Manager Web Client and the Survey Portal are usually launched from the same launch page. The launch page is a static page which will always be visible if the hosting web server is running. For example the ACFE launch page:
[[IMAGE:1_ACFESureveyManagerLaunchPage.jpg]]
Select the button that launches the Survey Manager application (not the Survey Portal). In the example page above that is the "Enter the ACE Survey System" button.
===1.3.2 Logging into BPC SurveyManager WC===
On the login screen you have the opportunity both to log in and to request that your login details be sent to the email address recorded for the ID you are using. The process for requesting your login details is covered in the next section.
*Step 1: Select your organisation from the drop down list. ACFE and ACE users should select your ‘Training Organisation Identifier’ (TOID) from the drop down list. Other users should select the organisation unit advised to you with your login credentials, or any organisation to which you have subsequently been granted access.
*Step 2: Your username and password will have been provided to you by ACFE for ACFE and ACE users and by Bishop Phillips Consulting or your enterprise survey manager for other users.
# Enter your ‘User name’ (Case sensitive).
# Enter your ‘Password’ (Case sensitive).
# Click ‘Log In’.
[[IMAGE:2_BPCSurveyManagerWCLoginPage.jpg]]
After login you will be presented with the Survey List screen for the current organisation. From this screen you can access all the capabilities of the survey manager web client.
[[IMAGE:3_BPCSurveyManagerWCSurveyListScreenPNA.jpg]]
===1.3.3 Request your BPC SurveyManager WC login details===
On the login screen you have the opportunity both to log in and to request that your login details be sent to the email address recorded for the ID you are using. You must already have a valid login account for this process to work.
*Step 1: Select your organisation from the drop down list. ACFE and ACE users should select your ‘Training Organisation Identifier’ (TOID) from the drop down list. Other users should select the organisation unit advised to you with your login credentials, or any organisation to which you have subsequently been granted access.
*Step 2: Your username will have been provided to you by ACFE for ACFE and ACE users and by Bishop Phillips Consulting or your enterprise survey manager for other users.
# Enter your ‘User name’ (Case sensitive).
# Leave the ‘Password’ blank.
# Click ‘I Forgot My Password’.
The system will look up your user ID and send login details to the email address recorded as belonging to that User name (User ID).
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
f53f0fb081d310f9d844ea9526493c00101bbd60
BPC SurveyManager Web Client Manual: Home - The Survey List Page
0
275
294
293
2018-10-29T11:37:52Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=SECTION 2B. The Survey List Page=
==Introduction - After Login==
After login you will be presented with the Survey List page. Think of this as your Survey Manager "home" page. From here you can reach everything you need. Excluding ACFE/ACE users, if this is the first time you have accessed your organisation you will generally find no surveys in your survey list. Surveys may be present because they have been deployed to your organisation from a parent organisation (like a Region), or because they remain from previous activities - these will be surveys you or previous users have created for your organisation.
[[IMAGE:3_BPCSurveyManagerWCSurveyListScreenPNA.jpg]]
==The Survey List Actions==
If surveys are present, look at the surveys in your list. You will note that they have actions listed, including:
# "Edit" - This allows you to edit certain presentational aspects of the survey such as the enquiry email address, the logo graphic, the invitation text, help, etc. For deployed surveys (like the LSS) you cannot change the questions in the survey, but for other surveys you can add, remove or change questions, etc. Unless you wish to change the default appearance of a deployed survey you do not need to use this action.
# "Delete" - While a survey is in draft mode, it can be deleted from your organisation. Once the survey is activated or receives its first responses, the delete action is no longer visible.
# "Manage" - This is the main action you will use. It enables publication of the survey to responders, sending of invitations, viewing of reports, and general management of the survey. For Providers using email invitations exclusively, this will be the only action in which you are interested.
# "Data Entry" - The data entry action enables the entry of survey responses received on hardcopy or by telephone/interview. A survey administrator or data entry account holder can enter the survey responses by selecting from the list of published responders.
# "Make Template" - This action saves your current survey as a template that can then be used to build new fully editable versions of the survey. Both the original survey used for the template and surveys produced from the template remain independent.
==Creating a Survey==
Below the survey list, you will find a "Create a New Survey" button. This button allows you to create new surveys for publication to groups of responders. To learn about creating surveys go to [[yy]]
==Activating your Portal==
If you wish to use the portal (or even think you might use it), you can activate it by ticking the "Activate portal" checkbox. The system will invent a password for use with the portal, but you are free to change it.
==The Menu Options==
# Change Login - displays the login screen. Primarily for the use of admin users and others with membership of multiple organisations.
# Survey List - displays this screen, your current survey list.
# Manage Users - display, edit or create the users of this organisation. You would use this to create data entry users. While you can create users for assignment to a survey in this area, the V6 survey manager web client favours creation of responders specifically for a survey. V7 simplifies the assignment of existing users to a survey.
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
aded5c2a439c08ea26e87216cd1721d5e03267f9
BPC SurveyManager Web Client Manual: Creating the list of respondents
0
276
296
295
2018-10-29T11:37:52Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
3f1a5aef4adfaf5cd49fb337d211068a3381e226
BPC SurveyManager Web Client Manual: Home (ACFE/ACE) - Working with The LSS
0
277
298
297
2018-10-29T11:37:52Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=SECTION 2A. Locating the current LSS=
After login you will be presented with the Survey List page. Think of this as your Survey Manager "home" page. From here you can reach everything you need. If this is the first time you have accessed your organisation, you will generally find only one survey in your survey list - the current LSS survey (similar to the image below). It is possible that you will see other surveys as well. These will be surveys you or previous users have created for your organisation. Each year BPC removes the previous LSS from your organisation, but preserves all non-LSS surveys and data until requested to remove them.
[[IMAGE:3_BPCSurveyManagerWCSurveyListScreenPNA.jpg]]
Locate the current LSS in your list. You will note that it has actions listed including:
# "Edit" - This allows you to edit certain presentational aspects of the survey such as the enquiry email address, the logo graphic, the invitation text, help, etc. You cannot change the questions in the LSS, so that option will not be available in the edit screen. Unless you wish to change the default appearance of the LSS you do not need to use this action.
# "Manage" - This is the main action you will use. It enables publication of the survey to responders, sending of invitations, viewing of reports, and general management of the survey. For Providers using email invitations exclusively, this will be the only action in which you are interested.
# "Data Entry" - The data entry action enables the entry of survey responses received on hardcopy or by telephone/interview. A survey administrator or data entry account holder can enter the survey responses by selecting from the list of published responders.
# "Make Template" - This action creates a template from the associated survey that can be transferred between organisations and used to create new modifiable duplicate surveys. This is an alternative to starting a new survey from scratch. LSS coordinators will NOT need to use this action to meet the survey submission requirements.
A note on terminology: a survey is what all respondents complete. A "survey response" is what we get back when a responder enters data into the survey. You do not need to create a survey for each responder - you need just one survey and many invitations and/or many "survey responses".
Below the survey list, you will find a "Create a New Survey" button. This button allows you to create new surveys for publication to groups of responders. As the LSS is already deployed to your organisation and therefore visible in your organisation's survey list, you do NOT need to use this button to meet the LSS survey submission requirements. If you want to know about this facility go to [[BPC SurveyManager Web Client Manual: Home - The Survey List Page]].
==The Next Step==
If you wish to use the portal (which enables you to have class based survey collection in computer labs, for example), or you are working on a survey OTHER than the LSS, you should proceed to the next section: [[BPC SurveyManager Web Client Manual: Home - The Survey List Page]].
If you are exclusively interested in the LSS, you should proceed to section: [[BPC SurveyManager Web Client Manual: Creating the list of respondents]].
[[Category:BPC SurveyManager Web Client Manual]]
<noinclude>{{BackLinks}}
</noinclude>
1d60f7b7af8b7e9bba74b2912822cd1201634305
BPC RiskManager V6 Enterprise (Enrima Edition)
0
2
300
3
2018-10-29T11:39:09Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software suite is an Enterprise Grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
{|width="100%"
|-width="100%"
|
* BPC RiskManager V5 (Express)
|
|-
|
* BPC RiskManager V6 (Enrima Edition)
|
|}
While there are a lot of similarities between the systems, they are not identical and not data compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager (Enrima Edition) is maintained on a quarterly (every 3 months) update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences are in the depth and complexity of issues supported and in the expandability of the system, and here the two differ significantly. Express is designed to be extremely simple and consequently excludes both depth and breadth beyond the functions of a risk and compliance register. It is therefore able to present almost all its risk or compliance record data on a single screen.
In the Enrima V6 series this single-screen display is not possible, as both multiple views and considerable ancillary management objects are brought into the system (such as documents, assets, assertions, insurance, claims, etc.).
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits in audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational functions of these disciplines. For example, the claims module actually manages insurance claims (not merely registering them), the document management system is capable of actually managing documents (not merely cataloguing them), the compliance and strategy systems actually manage the remediation of the issue, etc.
It functions best as an integrated solution with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
# Single users or small work groups running off a single user install switched to server mode.
# Medium scale enterprises with risk and executive seats on an IT-group-managed server / in-cloud and database.
# Large scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
# Hosting consolidators providing cloud services to many clients in different organisations with many databases.
Every version of BPC RiskManager (from the single user install up) comes capable of operating in all these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima), is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable solutions. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supports multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self ratings for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
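The tree roll-up of ratings can be sketched as below. This is a hedged illustration: the manual states that ratings roll up through trees of risks and compliance issues, but does not publish the aggregation rule, so the common worst-case (maximum) convention is assumed here and all names are hypothetical.

```python
# Hedged sketch of rating roll-up through a risk/compliance tree.
# Assumption: the roll-up rule is "worst case wins" (highest rating);
# RiskManager's actual aggregation rule is configurable/not published.
def rolled_up_rating(node):
    """node: dict with an own numeric 'rating' and a list of 'children'."""
    ratings = [node["rating"]]
    ratings += [rolled_up_rating(child) for child in node["children"]]
    return max(ratings)  # worst-case roll-up (assumption)

tree = {
    "rating": 2,
    "children": [
        {"rating": 4, "children": []},
        {"rating": 3, "children": [{"rating": 5, "children": []}]},
    ],
}
print(rolled_up_rating(tree))  # → 5: the worst rating anywhere in the tree
```

Under this convention a parent folder or master risk always reflects the most severe rating among its descendants, which is what makes folder-level heat maps meaningful.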
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link Objectives, assertions, questions, processes, legislative/regulator obligation, causes, risks, consequences, compliance obligations, controls / strategies, actions, risk history, incidents / hazards, people, supporting documentation, and information web-sites, and more.
** Full live search-able audit trail of all changes
** Storable searches used through-out the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end user report writer and manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios, and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL
** Spell checking using your MS Word dictionary
** Simple point and select search system but with an option for savable advanced query writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts and 96dpi and 120dpi and other screen resolutions
** Works on all versions of Windows from Windows 2000 up, including Vista and Windows 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed in both general risk and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
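The automatic conversion of compliance responses into risk-equivalent ratings (so both appear on the one heat map) can be sketched with a lookup grid. The grid below is entirely invented for illustration; in RiskManager the translation rules are end-user configurable, and the response and rating names here are assumptions.

```python
# Illustrative only: a grid-style translation from compliance responses to
# risk-equivalent ratings. The mapping and the rating names are hypothetical;
# RiskManager lets administrators define their own grids and rules.
COMPLIANCE_TO_RISK = {
    "Compliant": "Low",
    "Partially compliant": "Medium",
    "Non-compliant": "High",
    "Material breach": "Extreme",
}

def risk_equivalent(compliance_response):
    # Unknown responses fall back to the most conservative rating
    # (a design assumption for this sketch, not documented behaviour).
    return COMPLIANCE_TO_RISK.get(compliance_response, "Extreme")

print(risk_equivalent("Partially compliant"))  # → Medium
```

Once every compliance response resolves to a rating on the same scale as risks, compliance issues and risks can share heat maps and comparative tables, as the feature list describes.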
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for users flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to include additional tables including additional tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub-linked off Assertions/Questions (the default for compliance screens), allowing low-rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance-risk and compliance-topic philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
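The "likelihood of exceeding x events in a year" concept mentioned above can be made concrete with a small worked example. Modelling annual event counts as Poisson is a modelling assumption for this sketch (the manual does not prescribe a distribution); the function name is hypothetical.

```python
import math

# Sketch of "likelihood of exceeding x events in a year". Assumption: event
# counts follow a Poisson distribution with annual rate lam; the manual only
# says causes can carry numeric risk event triggers, not how to model them.
def prob_exceeds(lam, x):
    """P(N > x) = 1 - sum_{k=0..x} P(N = k) for N ~ Poisson(lam)."""
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(x + 1))
    return 1.0 - cdf

# With 2 incidents expected per year, the chance of more than 3 incidents:
print(round(prob_exceeds(2.0, 3), 4))  # → 0.1429
```

A probability computed this way (or by an external modelling system fed through the scripting engine) could then drive the likelihood rating via a trigger value, as the last bullet above describes.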
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where read only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
** Store documents internally or interface to your document management system; web site links available for most objects.
*Work flow engine
** The work flow system supports two purposes (a) documenting processes with flow charts, and (b) automating RM related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flows steps can have attachments.
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. If you need to collect data on something, BPC SurveyManager will handle it. The SurveyManager can be used to write entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP,AD, NTGroups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list of resources mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface, does not maintain structured causes lists (but does have unlimited "contributing factors" descriptions), allows one level of responsibility for assignment of issues and actions, and does not have an end-user report writer (although it does support both mail-merge and Word / XL template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like its more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input overrides the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
81bdffb458d1875bbf1156a08c95aa2571f1e615
338
300
2018-10-29T11:57:33Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software Suite is an enterprise-grade risk management and governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version, with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in two product streams (both of which can be configured as single-user desktop or massively multi-user networked solutions). The two product streams are:
{|width="100%"
|-width="100%"
|
* BPC RiskManager V5 (Express)
|
|-
|
* BPC RiskManager V6 (Enrima Edition)
|
|}
While there are many similarities between the systems, they are not identical and are not data compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager (Enrima Edition) is maintained on a quarterly (every 3 months) update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences lie in the depth and complexity of issues supported and in the expandability of the system; here the two diverge significantly. Express is designed to be extremely simple and consequently excludes both depth and breadth beyond the functions of a risk and compliance register. It is therefore able to present almost all of its risk or compliance record data on a single screen.
In the Enrima V6 series this single-screen display is not possible, as both multiple views and considerable ancillary management objects (such as documents, assets, assertions, insurance, claims, etc.) are brought into the system.
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits in audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational sides of these disciplines. For example, the claims module actually manages insurance claims (not merely registering them), the document management system is capable of actually managing documents (not merely cataloguing them), the compliance and strategy systems actually manage the remediation of the issue, and so on.
It functions best as an integrated solution with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
# Single users or small work groups running off a single user install switched to server mode.
# Medium scale enterprises with risk and executive seats on an IT-group-managed server (or in-cloud) and database.
# Large scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
# Hosting consolidators providing cloud services to many clients in different organisations with many databases.
Every version of BPC RiskManager (from the single user install up) is capable of operating in all these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima), is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable solutions. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supports multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self ratings for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
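The tree roll-up mentioned above can be sketched in a few lines. This is purely illustrative: the class names, the 1-5 ratings scale and the "worst child wins" rule are our assumptions, not RiskManager's actual API or configurable roll-up rules.

```python
# Illustrative sketch of rolling a rating up through a tree of risk items.
from dataclasses import dataclass, field

@dataclass
class RiskNode:
    name: str
    rating: int                              # e.g. 1 = low ... 5 = extreme
    children: list = field(default_factory=list)

def rolled_up_rating(node: RiskNode) -> int:
    """Return the node's own rating or the worst rating found beneath it."""
    return max([node.rating] + [rolled_up_rating(c) for c in node.children])

tree = RiskNode("Enterprise", 2, [
    RiskNode("Project A", 3, [RiskNode("Supplier failure", 5)]),
    RiskNode("Compliance", 1),
])
print(rolled_up_rating(tree))  # 5 - the worst rating anywhere in the tree
```

Because a risk can sit in several folder trees at once, the same roll-up can be computed per tree, giving different aggregate views of the same underlying items.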
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link Objectives, assertions, questions, processes, legislative/regulator obligation, causes, risks, consequences, compliance obligations, controls / strategies, actions, risk history, incidents / hazards, people, supporting documentation, and information web-sites, and more.
** Full live search-able audit trail of all changes
** Storable searches used through-out the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end user report writer and manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios, and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL
** Spell checking using your MS Word dictionary
** Simple point and select search system but with an option for savable advanced query writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts and 96dpi and 120dpi and other screen resolutions
** Works on all versions of windows from W2000 up, including Vista and Version 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed as general risks and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
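The compliance-to-risk conversion described above (so that compliance issues and risks appear on the one heat map) can be illustrated with a tiny sketch. The score bands and rating names below are invented for illustration; in the product the translation rules are user-configurable, not fixed like this.

```python
# Hedged sketch: translating an average compliance assertion score
# (1 = non-compliant .. 4 = fully compliant) onto a qualitative risk scale.
def compliance_to_risk_rating(avg_score: float) -> str:
    """Map an average assertion score to an illustrative risk rating band."""
    if avg_score >= 3.5:
        return "Low"
    if avg_score >= 2.5:
        return "Moderate"
    if avg_score >= 1.5:
        return "High"
    return "Extreme"

scores = [3, 3, 2, 3]                        # responses to one question set
print(compliance_to_risk_rating(sum(scores) / len(scores)))  # Moderate
```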
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for users flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to include additional tables including additional tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub linked off Assertions/Questions (the default for compliance screens), allowing low rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance risk and compliance topic philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
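The numeric risk event triggers above support concepts such as the "likelihood of exceeding x events in a year". One common way to model such a figure (our assumption for illustration; the product leaves the choice of model to external systems and scripts) is to treat the annual event count as Poisson distributed:

```python
# Sketch: probability of exceeding x events in a year for a cause,
# assuming the annual event count N follows a Poisson distribution.
from math import exp, factorial

def prob_exceeds(x: int, rate: float) -> float:
    """P(N > x) when N ~ Poisson(rate): one minus the cumulative sum."""
    return 1.0 - sum(exp(-rate) * rate**k / factorial(k) for k in range(x + 1))

# A cause averaging 2 incidents/year: chance of more than 4 in a year.
print(round(prob_exceeds(4, 2.0), 4))  # 0.0527
```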
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where read only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
** Store documents internally or interface to your document management system; web site links available for most objects.
*Work flow engine
** The work flow system supports two purposes (a) documenting processes with flow charts, and (b) automating RM related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flows steps can have attachments.
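An executed work flow of the kind described above, with steps assigned to different individuals, can be sketched minimally. The structure, step names and "first incomplete step" rule are invented for illustration and are not the product's work flow engine.

```python
# Toy sketch of an executable work flow with per-step assignees.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    assignee: str
    done: bool = False

workflow = [Step("Record incident", "clerk"),
            Step("Review", "risk manager"),
            Step("Close", "auditor")]

def next_step(steps):
    """Return the first incomplete step, or None when the flow is finished."""
    return next((s for s in steps if not s.done), None)

workflow[0].done = True
print(next_step(workflow).name)  # Review
```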
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. If you need to collect data on something, BPC SurveyManager will handle it. SurveyManager can be used to write entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP,AD, NTGroups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
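The automatic escalation rule above (a user with assigned responsibility gains access to that record even where their role would not otherwise grant it) can be sketched as a simple effective-access function. The access levels and the "at least UPDATE" rule are illustrative assumptions, not the product's actual security model.

```python
# Minimal sketch of responsibility-based access escalation.
NONE, READ, UPDATE, ADMIN = 0, 1, 2, 3

def effective_access(base_level: int, is_responsible: bool) -> int:
    """A responsible officer gets at least UPDATE on their own records."""
    return max(base_level, UPDATE) if is_responsible else base_level

print(effective_access(READ, True))   # 2 - escalated to UPDATE
print(effective_access(READ, False))  # 1 - stays READ
```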
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list of resources mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface. It does not maintain structured causes lists (although it does have unlimited "contributing factors" descriptions), allows one level of responsibility for assignment of issues and actions, and does not have an end-user report writer (although it does support both mail-merge and Word / XL template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like its more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input outweighs the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
81bdffb458d1875bbf1156a08c95aa2571f1e615
BPC RiskManager Frequently Asked Questions
0
5
302
280
2018-10-29T11:39:10Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
{|width="100%"
|width="60%" VALIGN="Top" |
# [[How do I get a copy of BPC RiskManager V6.2.5?]]
# [[Would it be possible to get a copy of the BPC RiskManager V6 installation guide?]]
# [[Is there a feature listing for the BPC RiskManager windows client and the browser client?]] We are looking at the possibility of using a mixed client environment based on user specific needs and where they are.
# [[When are multiple BPC RIskManager server licenses required?]] We are looking to have RM implemented across a group of companies. They will all be using the same instance with same fields and definitions as the subject matter is the same. Can we use a single server license or will we require multiple server licenses?
# [[Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?]]
# [[Does your license include the cost of MS SQL Server ?]]
# [[I just purchased BPC RiskManager. Will you be sending the install disks, and when?]]
# [[What will need to be arranged prior to the installing BPC RiskManager?]]
# [[Does the RiskManager client application work with FireFox browsers?]]
# [[In what programming language is BPC RiskManager written?]]
# [[Does the RiskManager plug-in itself have a certificate like a java applet does?]]
# [[For support, what type of support is available (i.e.: email, phone, onsite, etc...)?]]
# [[What is the best way to get support?]]
# [[How do I arrange installation support and what is the timeline?]]
# [[What support packages are available and at what cost?]]
# [[Is there a cost associated with telephone support (i.e.: cost per call or issue)?]]
# [[How do I get custom features added, or request new features for BPC RiskManager?]]
# [[Is there a User Group Forum?]]
# [[What type of documentation, technical and user is available for BPC RiskManager?]]
# [[How does one decide the optimum BPC RiskManager configuration?]]
# [[Is BPC RiskManager a Client-Server application?]]
# [[What is the difference between the browser plugin and the windows executable RiskManager client?]]
# [[Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?]]
# [[Database support: Which database choice will give us the best level of support?]]
# [[Security: What is the most secure architecture for BPC RiskManager?]]
# [[What is the best client version - the browser or non browser Risk Manager client?]]
# [[What admin account rights are required to setup a browser plug-in?]]
# [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7|How do I configure IE for the RiskManager browser plugin?]]
# [[BPC RiskManager Server - After installing in production or adding an application server|We just ported our enterprise system to a new server and I can't login. What do I do now?]]
# [[Steps For Migrating RiskManager V6.x from Test To Production|How do I port BPC RiskManager from test (or dev) to production?]]
# [[BPC RiskManager V6 on 64 bit Windows|How do I install BPC RiskManager onto a computer running a 64bit Windows OS?]]
| VALIGN="Top"|
<noinclude>
{|align="right" width="100%" cellpadding="10px"
|- style="background-color:#FFEBCD; " width="100%"
|'''A Frequently asked Question is...'''
|-
|<div class="didyouknow2" STYLE="height: 600px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=3000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;"></div>
|}
</noinclude>
|}
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:Bishop Phillips Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinksCategoryHead|CT=RiskManager FAQ|CN=The frequently asked Questions Category}}
</noinclude>
25cfdeccbd4a292afa2715e0cff010008b205d54
Real Learning in Virtual Worlds - CHAPTER 2: Literature Review
0
278
304
303
2018-10-29T11:40:32Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<div class="nonumtoc">
=CHAPTER 2: Virtual Worlds - Concepts, History, and Use in Education (Literature Review)=
==2.1 Introduction==
Gartner (2007) predicts that as many as 80% of active internet users will have a ‘Second Life’ in a virtual world by the end of 2011. Depending on your definition of ‘virtual world’, this may seem a little ambitious. Certainly, to the extent that virtual worlds are taken to include massively multi-user online environments supporting collaborative exchange of information in shared virtual space, the prediction might prove reasonably safe. To the extent that the definition is constrained to massively multi-player online games, the prediction may prove a little “braver”.
Today’s virtual worlds represent the convergence of multiple technology streams, with the latest examples of the genre representing the merger of internet, telecommunications, instant messaging, virtual reality, 2D & 3D graphics, a variety of 3D modelling technologies, spatial sound, distributed databases, spatial indexing, mapping, streaming data transmission, physics, scripting languages, object-oriented software, agent theory, artificial intelligence, networking, economic modelling, online trading systems, game theory and many, many more technologies.
While the developers of many virtual worlds are content within the game space, some virtual world developers, such as Linden Research (developers of Second Life) have ambitions to be the web platform of the future (Bulkley, 2007). To this end a number of the commercial developers of virtual worlds have joined forces with a number of major corporate consumers, systems integrators and US government bodies to explore common standards for inter-operability of virtual world platforms which is a necessary first step in moving the technologies from the isolated proprietary place they now inhabit to a world-wide shared web platform (Terdiman, 2007).
This chapter explores virtual worlds, reviews the literature considering alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education. The chapter concludes with an examination of traditional education taxonomy and relates that to the virtual world context as a basis for structuring an approach to exploring education affordances offered by two approaches to education in virtual worlds.
==2.2 Virtual Worlds==
===2.2.1 What is a Virtual World?===
====2.2.1.1 In Search of a Definition====
“Virtual worlds are places where the imaginary meets the real”. (Bartle, 2003, p. 1)
Virtual, as defined in the Oxford Dictionary (1989) with respect to the computing context is: “… not physically existing as such but made by software to appear to do so from the point of view of the program or the user….” and defined in the virtual reality context to be “… a notional image or environment generated by computer software, with which a user can interact realistically as by using a helmet containing a screen, gloves fitted with sensors, etc.” (1997).
The term world is defined in the Oxford Dictionary (1989) as “the ‘realm’ within which one moves or lives”.
In simple terms, therefore, a ‘virtual world’ can be defined as a generated computer software realm in which a user moves, exists or lives in a manner that appears to be real to the user.
A common definition for the term ‘virtual world’ is passionately debated in the literature (see Combs, 2004; Jennings, 2007; Reynolds, 2008; Wilson, 2007). It is a term used to describe many types of software environments, from a simple MUD (Multi User Dungeon, also referred to as Multi User Dimension or Domain) (Bartle, 2003; Keegan, 1997; Slator et al., 2007) to a sophisticated fully immersive 3D virtual reality environment used in gaming, physical training simulators or social interaction spaces (MetaMersion; Patel, Bailenson, Jung, Diankov, & Bajcsy, 2006; Van Dam, Forsberg, Laidlaw, LaViola, & Simpson, 2000). The term virtual world can be used to describe a single user walk-through simulated environment (Dalgarno, 2004; Youngblut, 1998) or an environment such as a massive multiplayer online role playing game (MMORPG) like World of Warcraft (Bainbridge, 2007). The term virtual world is also interchanged with other terms such as virtual environment, synthetic world, mirror world, metaverse, virtual universe, artificial world etc[2] (Grøstad, 2007).
Bartle (2003, p. 1) provides the following definition:
<blockquote>
“Virtual worlds are implemented by a computer (or network of computers) that simulate an environment. Some -but not all- of the entities in this environment act under the direct control of individual people. Because several such people can affect the same environment simultaneously, the world is said to be shared or multi-user. The environment continues to exist and develop internally (at least to some degree) even when there are no people interacting with it; this means it is persistent.”
</blockquote>
Therefore, using Bartle’s definition in conjunction with the Oxford Dictionary definition provided above a virtual world can be defined as:
<blockquote>A shared software environment (or realm) in which a person, represented as a projected entity (such as a digitally projected image, text identity or other computational representation), moves, exists or lives in a manner that appears real to the person; who is capable of affecting, and being affected by, that environment in a manner that simultaneously affects the experiences of other entities within the environment; and which generally remains persistent once the user has left the world.
</blockquote>
The key components of this definition are:
#A shared environment in which a real-world participant shares a computationally generated artificial space with other real world participants and/or other computationally generated entities.
#The nature of the real-world participant’s projection into the computationally generated virtual space.
#The characteristics of the space, which establish a sense of realism to the participant.
#The manner and extent to which the real world participant is able to affect the shared space.
#The nature and form of persistence that the artificial space retains.
Throughout this section we will examine the current state of these components, and the ideas and literature contributing to the current expression of these concepts in the form of currently available virtual worlds. The realisation of virtual worlds in software has been (and continues to be) a rapidly evolving field, continually consolidating influences from fiction, mechanical and electrical engineering, computer science, gaming theory, telecommunications, social science, commerce, religion and sociology. It is a field where advances are made as much in the act of amateur invention as in formal science, and a field in which the academic literature frequently lags the leading edge of the advances by a significant degree.
===2.2.2 Recognising a Virtual World by its Features===
While there is not as yet a single common set of universally accepted attributes, the literature offers a variety of feature based definitions that attempt to provide a basis for classifying whether a given application or environment is, or is not, a virtual world. Across these competing views there are some features that are most frequently repeated.
Coming from the perspective of virtual worlds as gaming platforms, Bartle (2003, pp. 3-4) proposes that a virtual world should adhere to the following conventions:
*'''Physics''': The world contains automated rules for the players that effect change in the world.
*'''Character''': The player is part of the in-world experience, represented by a character with which they strongly identify.
*'''Interactions''': All interactions with the world are channelled through the character.
*'''Real-time''': Interactions in the world take place in real-time.
*'''Shared''': The world is shared by other characters in common.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Bartle tends to use the term character for what this thesis refers to as an avatar, and considers that the player (identified as ‘the intelligence’ in this thesis) must strongly identify with that character. In the context of role playing games, where the player assumes an identity not their own, this aspect of the feature list serves to recognise the effectiveness of the immersion and sense of presence the player experiences (concepts we will be exploring later); but outside of this space, where the player and the ‘character’ may be one and the same, this feature is less of a distinguishing criterion.
His use of the term Physics in the context of an application genre that may include 3D environments is perhaps a little confusing. In these spaces Physics most commonly refers to the physics engine that manages the simulation of an avatar and object dynamics in the space (such as gravity, acceleration, force, momentum and limb movement, etc). As used by Bartle, the term includes the ‘business rules’ and behaviours of the system – the rules governing all interaction, not just those simulating physical movement.
The nature of the shared space and interactive channel imply that the actions of one player affect the experience of another.
Edward Castronova (2001, pp. 5-6) proposes that a virtual world should have the following features:
*'''Interactivity''': The world exists on one computer and can be accessed via a network (or the internet) by many simultaneous users. The actions of each user influence other users in the world.
*'''Physicality''': Users access the world by a computer, which provides a first person view of the world; the world is generally ruled by natural laws much like the real world, with scarcity of resources.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Castronova’s feature requirements are essentially a subset of Bartle’s, although with the possible omission of the expectation that interaction is necessarily real time.
Sun Microsystems Inc (2008, p. 3) proposed the following common features of open virtual worlds (ie multi-user virtual worlds open to public access over the internet):
*Shared space, allowing multiple users to participate simultaneously.
*Users interact with one another and the environment.
*Persistence.
*Immediacy of the interactions.
*Similarities to the real world rules.
We might, perhaps, reject Sun’s expectation of any need to assimilate ‘real world rules’, as this would exclude many fantasy role playing games from being classed as virtual worlds; but aside from this aspect Sun’s list is essentially consistent with the views of Bartle and Castronova.
These three sources are essentially consistent with the body of the literature; making allowance for additional attributes and some latitude in interpretation, we can establish a minimum feature list that would be generally accepted:
*The environment is shared;
*Interactions are in real-time;
*A person participates in the world through some form of representation with which they identify and are identified and that facilitates interaction and recognition (such as a character or avatar);
*Interactivity in the world is channelled through the avatar;
*Changes induced by a participant influence the experience of the space for other participants;
*Rules governing the world and interactions are shared and commonly applied; and
*The world is persistent.
==2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World==
While Bartle (2003) refers to a participant’s projection into a virtual world as a “Character”, the more widely accepted name today for a real world participant’s projection into a virtual world is an Avatar. This is the term this thesis will be adopting in this research.
The word avatar derives from avatara, a Sanskrit word meaning “descent of a deity” or incarnation, utilised by the Vaishnavism religious tradition of Hinduism. The Hindu concept of an avatar is thought to originate as early as the second century B.C.E (Sheth 2002). One of the most recognised Hindu deities is Vishnu (Figure 1). In Hinduism, Vishnu is said to have a standard list of ten avataras (collectively known as Dasavatara), one of them said to be Buddha (Siddhārtha Gautama), the founder of Buddhism (Sheth 2002).
[[image:Vishnu_Hindu_Avatar_001.jpg]]
Figure 1. Hindu Avatara
Left: Visnu (or Vishnu) Hindu deity the protector and preserver of the universe
Right: Ten avatars of Visnu (Dasavatara)
(Vivekananda Centre, 2008)
In computing terms, little has changed from the original Hindu meaning of avatar. As with the Hindu avatara, the virtual world participant can be thought of as “descending”, or being “projected”, from reality to become a computational representation in a virtual world. In virtual worlds, an avatar is generally (although not exclusively) a graphical representation of the user’s persona (Deuchar & Nodder, 2003), although it can also be a representation of a system or a function in some applications (Sheth, 2003), or a simple name in the form of a text string (in some text based MUDs), and is evolving to include virtualisations of other senses (such as aural and tactile) (S.-Y. Lee, Kim, Ahn, Lim, & Kim, 2005). The graphical representation of an avatar is thought to originate from a networked multi-user virtual world game called Habitat in 1984 (Bye, 2008; Morningstar & Farmer, 1990). Early research suggests that the use of digital avatars in virtual worlds reduces users’ inhibitions and dissolves, or reconstructs, social status among users (Dede, 1995; Dickey, 2003; Rheingold, 1993).
The projected form is not necessarily a recognisable representation of the real world human form. In his or her projected form, for example, the avatar might be represented as an image of a human, an animal, an animated mechanical object, a simple name, or any form appropriate to the virtual world, and within the technical capabilities of that world’s object management systems. For example, in Eve (a space based virtual world) all avatars are space ships whereas in Second Life (a social based virtual world) an avatar can take any form (Figure 2) but regardless of appearance your avatar’s name remains the same.
[[image:SecondLife_Digital_Avatars_002.jpg]]
Figure 2. Digital Avatars of Second Life (Levine, 2007)
In terms of today’s virtual worlds, and for the purposes of this research, an avatar should be thought of as a combination of a representation, an agent and an intelligence:
#The ''representation'' may be visual, aural, tactile or any other sense conveying the presence of the avatar to other avatars or agents in a virtual world.
#The ''agent'' is the library of capabilities of the avatar in a virtual world.
#The ''intelligence'' (or actor) provides the tactical and strategic control of the avatar, which could be artificial or natural (eg human).
In a virtual world the decisions of the intelligence are communicated to, and realised by, the agent. The consequence of the agent realising (enacting/implementing) the intelligence’s commands may result in a change in the state of both the agent and the representation, eg, in a 3D Graphical virtual world, a command to walk issued by the intelligence might result in the agent changing position and entering a movement or walking state and triggering the representation to display a walking animation (enter a walking animation state).
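The three-part decomposition above (representation, agent, intelligence) can be sketched as a minimal state model. This is an illustrative sketch only: the class and method names are hypothetical, not drawn from the thesis or any particular virtual world platform.

```python
from enum import Enum, auto


class AgentState(Enum):
    """Movement states an agent can occupy in the world."""
    IDLE = auto()
    WALKING = auto()


class Representation:
    """Conveys the avatar's presence to others (here, a visual animation state)."""
    def __init__(self) -> None:
        self.animation = "idle"

    def play(self, animation: str) -> None:
        self.animation = animation


class Agent:
    """The avatar's library of in-world capabilities."""
    def __init__(self, representation: Representation) -> None:
        self.state = AgentState.IDLE
        self.position = (0.0, 0.0)
        self.representation = representation

    def walk(self, dx: float, dy: float) -> None:
        # Realising the intelligence's command changes both the agent's
        # own state (position, movement mode) and the representation's state.
        x, y = self.position
        self.position = (x + dx, y + dy)
        self.state = AgentState.WALKING
        self.representation.play("walking")


class Intelligence:
    """Tactical/strategic control of the avatar: a human player or an AI."""
    def __init__(self, agent: Agent) -> None:
        self.agent = agent

    def command_walk(self, dx: float, dy: float) -> None:
        # Decisions of the intelligence are communicated to, and realised
        # by, the agent.
        self.agent.walk(dx, dy)


# A 'walk' decision flows from intelligence to agent to representation.
rep = Representation()
avatar = Intelligence(Agent(rep))
avatar.command_walk(1.0, 0.0)
```

The one-way flow mirrors the description in the text: the intelligence issues commands, the agent enacts them, and the representation's state change is a side effect observed by other participants.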
==2.4 A Taxonomy of Virtual Worlds==
===2.4.1 Introduction===
As might be expected, the literature contains extensive discussion of the appropriate taxa to be applied in classifying virtual worlds, and an equal measure of disagreement among authors as to the appropriate criteria to be applied. In spite of the range of discussions, most attempts are incomplete and therefore capable of classifying in a useable form only a portion of the genre. To be fair, this space is evolving rapidly: possibly as fast as it is classified, a new entrant appears that changes the paradigm, and old entrants are updated to include new capabilities.
===2.4.2 A Taxon for Virtual Worlds===
Outside of the education and virtual reality streams, possibly the largest single family of virtual worlds are those developed for games. While not actually claiming to propose a taxon, Bartle (2003, pp. 38-61), whose pedigree is essentially from the gaming stream, proposes a set of attributes that can be used to classify Virtual (game) Worlds. Not surprisingly, the attributes are most relevant to multi-user game focussed virtual worlds, but provide a workable superset of the current thought on the matter and with some adjustment can be extended to the more general examples of virtual worlds. He suggests that a virtual world can be categorised according to the following taxa:
#'''Appearance''': To a ‘newbie’ (Bartle’s term for a new user of a virtual world application) the distinction is whether the virtual world is a ‘text based’ MUD, ASCII, graphical 2D or graphical 3D etc. To an ‘oldbie’ (as described by Bartle) this is only an interface issue and therefore not as important as the other listed categories.
#'''Genre''': Is the world fantasy, cyberpunk, horror, social etc. The plot or the settings of the virtual world. This taxon is most helpful with purpose focussed virtual worlds. In the non-gaming or semi-gaming space occupied by some generalised social worlds, the virtual world is as much a platform on which other ‘sub-worlds’ can be based, and thus the genre of the virtual world can be all other genres. Examples of this might include PLATO and Second Life.
#'''Codebase''': Although hidden from the user, and therefore less apparent to them, the codebase is an important aspect for the designer of a virtual world. The codebase defines the technical makeup of the world - reusable content and controls, scripting language, database structure etc. This researcher suggests that the codebase is not a single taxon, but should perhaps be separated into multiple taxa. In its place one might propose the content management, asset management, game engine, environment application programming interface, AI, and scripting function library within the system as more relevant technical matters.
#'''Age''': How long the virtual world lasts is an important measure of its success. Generally, the longer you can keep a player (or user) interested, the longer the virtual world survives, which in turn attracts new users and adds to the player base of the virtual world.
#'''Player base''': How large is the player (or user) base of the virtual world? This measure varies depending upon what you are counting: for example, the number of registered users, the number of avatars (a user can have more than one character in a virtual world, but in general not for simultaneous use), simultaneous users logged in, hours played per user, access over a period of time, number of active subscriptions, etc. In some worlds the meaningful measure of player base is in fact the number of owner occupied ‘acres’ of virtual land (as opposed to general users of the virtual world). The player base measures the current success of the virtual world, its popularity so to speak, which in turn lengthens the age of the virtual world. Given the number of ways a player base can be structured and measured, a single measure is open to both misinterpretation and reporting manipulation, and for some measures (like subscribed users – where some subscriptions are paid and others free) may be completely erroneous when comparing one virtual world to the next.
#'''Degree to which they can be changed''': Virtual worlds vary in the degree to which a user can change the content or add to the content of the virtual world. Virtual worlds such as World of Warcraft (and most game based virtual environments) allow no change by the player with all content created by the developers of the virtual world. Other virtual worlds such as Second Life, Active Worlds, TruePlay and PLATO rely on content created by the community. In the case of Second Life (for example) the entire virtual world is made from user created content by providing them with building tools, import and export capabilities, out-of-world interfaces and communications capabilities, an extensive library of API functions and a scripting language. The degree to which a virtual world’s content can be changed by the user adds to the technical codebase complexity and the user’s (and other user’s for multi-user virtual worlds) experience of and within the virtual world.
#'''Degree of persistence''': Bartle defines persistence to be the degree to which a world’s state remains intact if you shut down and restart the virtual world. He classifies persistence into ‘discrete’ or ‘continuous’ groups. At the extreme, a discrete virtual world would regenerate - described as a ‘Ground Hog’ world (named after the movie). Here all content and the location of the player would be reset to the start of play. In a continuous virtual world the content and locations are retained through a restart.<BR />Persistence also relates to what happens to the world when a user logs off: does the virtual world continue to evolve without the individual player – and if so, can the player’s state be affected while off line? A virtual world generally displays some level of persistence, and the term is generally used to distinguish whether a ‘virtual world’ is really a ‘world’ or in fact just a simple ‘Ground Hog’ environment (see Gehorsam, 2003). The ultimate level of persistence is that akin to the real world, which is constantly evolving and changing regardless of our existence within it.
With some modification and generalisation most of the taxa can be applied in the general case of gaming and non-gaming virtual worlds. To be applied outside of the narrow RPG (Role Playing Game) grouping, the classification system would benefit from some subdivision of elements.
We have already noted codebase as one such category. Codebase is such a wide group that it could be applied to every functional capability of the virtual world not covered by another taxon, and thus is of limited help in establishing a consistent framework for classification. For example, Castronova’s (2001) taxonomy recognises a grouping under marketplaces (implying commercial functionality), while both Kish (2007) and Cavazza (2007) recognise groupings covering Paraverses (although they use different terms). In Bartle’s taxa these might both be covered as distinguishing characteristics under codebase, yet one relates to the ability to conduct real-world commercial transactions in the space, while the other addresses the merging of real-world content with virtual world content.
Persistence as framed by Bartle mixes up multiple discrete concepts – host state persistence, user state persistence, environmental evolution, and scenario persistence. This last item is generally typical of games (such as quest driven environments where, on restarting a ‘quest’, the user can rely on the sequence of events being a repetition of the sequence that occurred previously – effectively a ground-hog space within a larger persistent environment), and absolutely essential for simulators and learning systems, where a user taking a course should be able to rely on the lesson replaying in a consistent and predictable way each time (unless variation is an intended part of the training, as in a military battlefield virtual world). In order to classify virtual worlds, recognising these attributes independently of each other would be more helpful than identifying the world as persistent or not persistent; nor are the sub-features linearly related – i.e. one form of persistence does not imply the inclusion of another form of persistence (Purbrick & Greenhalgh, 2002).
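The point that these forms of persistence are independent, rather than a single persistent/non-persistent label, can be sketched as a simple record with one flag per attribute. The attribute names here are illustrative, not a standard vocabulary, and the example classifications are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class PersistenceProfile:
    """Each form of persistence recorded independently, since one form
    does not imply another (cf. Purbrick & Greenhalgh, 2002)."""
    host_state: bool               # world state survives a server restart
    user_state: bool               # a player's state survives logging off
    environmental_evolution: bool  # world evolves while the player is away
    scenario_persistence: bool     # quests/lessons replay predictably


# A hypothetical quest-driven MMORPG: a continuous, evolving world that
# nevertheless offers repeatable ('ground-hog') quest scenarios.
mmorpg = PersistenceProfile(
    host_state=True,
    user_state=True,
    environmental_evolution=True,
    scenario_persistence=True,
)

# A pure 'Ground Hog' environment: everything resets on restart.
groundhog = PersistenceProfile(False, False, False, False)
```

Treating the flags independently allows classifications that a single persistence axis cannot express, such as a world that retains host state across restarts but still replays scenarios identically.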
===2.4.3 Applied Taxonomies===
While Bartle proposes a reasonably extensive set of attributes (taxa) for classification, some authors have proposed simpler classification regimes, although all seem as yet to avoid claiming an actual taxonomy.
Kish (2007) recognised that with the appearance of the weakly defined ‘Web 2’ technologies, virtual worlds could be seen to encompass a wider range of social networking and world-imagining spaces. Kish’s classification groups virtual environments into the broad categories (Figure 3):
#'''MMORPGs''': Massively Multiplayer Online Role Playing Games. A category which includes text and graphical gaming environments with the common theme of role playing and containing internally a hierarchical, level based player grading system to determine expertise and implied seniority, and generally plot or quest driven and goal oriented as their linking characteristic. Typical examples might include World of Warcraft, Entropia Universe, Everquest, MUDs, etc.
#'''Metaverses''': Imagined public fantasy spaces, emphasising social interaction, creativity and lacking a single plot or purpose for participation. Generally exhibiting a devolved structure without a single levelling system or clear environment imposed hierarchic seniority system[3]. Typical examples might include Habitat, Second Life, Active Worlds, Furcadia, etc
#'''Paraverses''': Spaces that intersect with the real world, incorporating content from the real world and thus could be described as virtual extensions of the real world. This group potentially includes many of the Web 2 spaces that contain sufficient functionality to create in the minds of their users a ‘real’ virtual community as strongly present to the participant as their real world existence.
#'''Intraverses''': Spaces that are otherwise Metaverses or MMOLE’s but private or closed to the broader public. Virtual reality environments could be seen generally to fall into this category as well as private/corporate implementations of public virtual world spaces. Typical examples might include Qwaq, Sun System’s Wonderland, IBM’s Metaverse, etc.
#'''MMOLEs''': Massively Multi-user Online Learning Environments. Possibly the oldest class of virtual worlds, as it includes systems such as PLATO, and typified by educational environments supporting user social interaction. Primarily purpose (although not necessarily goal) driven – such as learning, training, idea exchange, simulation, etc. This space includes the dedicated training / teaching environments of PLATO and the planning / simulation management systems of SIMNET, Blackboard, Boston College’s Media Grid, etc.
[[image:Kish_Virtual_Geography_003.jpg]]
Figure 3. Virtual Geography (Kish, 2007)
Cavazza (2007) proposes that a virtual world should be open (public) and contain taxa supporting strong and generalised capabilities in each of the dimensions (Figure 4):
#Social networking
#Gaming
#Entertainment
#Business
[[image:Cavazza_Virtual_Universes_Landscape_004.jpg]]
Figure 4. Virtual Universes Landscape (Cavazza, 2007)
Consequently most of the virtual worlds identified by other authors are excluded from Cavazza’s definition of virtual worlds, but included under the broad category of ‘Virtual Universe’. To illustrate this idea Cavazza has classified a huge range of the existing virtual environments:
#Social
#*2.5 & 3D Chats
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Virtual Worlds
#Game
#*MOG
#*Sports
#*MMORPG
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Adult Games
#*Virtual Worlds
#Entertainment
#*Virtual Sex
#*Virtual City Guides
#*2.5 & 3D Chats
#*Avatar Centric
#*Branded Universe
#*Virtual World Generators
#*Virtual Worlds
#Business
#*Serious Games
#*Virtual Marketplaces
#*Adult Games
#*Virtual World Generators
#*Virtual Worlds
Cavazza’s definition and classification system is extensive, and possibly the most comprehensive to date. While Kish’s classification tends to focus on functionality, Cavazza’s emphasises purpose. Nevertheless, there is significant cross-over in their ideas. For example, both recognise the difference between games and social networking, and both accommodate the paraverses in a special category (Cavazza includes them in ‘Virtual City Guides’ among other groups). Cavazza’s analysis, however, lacks the accommodation of the education, training and simulation virtual spaces present in Kish’s categorisation, although it might be argued that these are covered in multiple categories including ‘Virtual World Generators’ (eg PLATO, VastPark) and Serious Games (training simulators).
==2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality==
Virtual Reality environments are generally a combination of user interface hardware (such as headsets and data gloves) and software. The availability of the (often costly or purpose built) user interface hardware has meant that the majority of these environments are either single user or very small scale multi-user environments (Jones & Hicks, 2004; Miller & Thorpe, 1995). A direct consequence of this is that Virtual Reality environments have tended to ignore the dimensions of user interaction, game play and collaboration in favour of the technology of immersion. This fact, possibly more than any other, has predisposed some authors to exclude virtual reality spaces from the domain of virtual worlds (Bartle, 2003; Yee, 2006).
While Bartle’s virtual world definition contributes part of the definition we have adopted for virtual worlds in this research, the researcher departs from the entirety of Bartle’s embodiment of virtual worlds as expanded in that work. Bartle holds that a virtual world has a meaning divergent from that of virtual reality, arguing that “Virtual reality is primarily concerned with the mechanism by which human beings interact with computer simulations… [rather than] the nature of the simulations themselves” (2003, p. 3). To this extent Bartle’s definition specifically excludes virtual reality spaces from the definition of virtual worlds.
This researcher adopts a view consistent with some other writers in the field: excluding the body of work in virtual reality from the concept of a virtual world, by writing virtual reality spaces out of the definition, places the emphasis narrowly on the social and gaming dimensions of these worlds and away from the immersive experience. It thus excludes the vast body of research that predates, or has been done in parallel to, the development of gaming virtual worlds (Cosby, 1999; Heilig, 1955; Pimentel & Teixeira, 1994; Rheingold, 1992; Schroeder, 1997; Steuer, 1992; Sutherland, 1965; Walker, 1990; Woolley, 1994), and constrains the consideration of these environments in the education context to their collaborative and scripting capabilities.
Other authors have adopted definitions of the virtual world concept wider than that posited by Bartle, although in most cases still excluding some portion of the body of work that has contributed to the space. Dickey (2005, p. 439) implies an exclusion of 2D and non-visual environments in providing: “Three-dimensional virtual worlds are a networked desktop virtual reality in which users move and interact in simulated 3D spaces.” Similarly, McLellan (2004) presents 10 classifications of virtual reality, a single-user virtual world being classified as ‘through the window’ whereas a multi-user virtual world would be classified as ‘cyberspace’. Mazuryk and Gervautz (1996) make no distinction as to the number of users in the virtual world, but define a virtual world to be a ‘desktop VR (virtual reality)’ or a ‘Window on World (WoW)’ system. Biocca and Delaney (1995) define a virtual world to be a ‘window system’: a computer generated three-dimensional virtual world viewed either on a computer screen or with the assistance of a head mounted display.
This researcher’s view is that all of these definitions are correct, but incomplete and that a definition that allows the participation of all of these examples is the most useful and appropriate in the education context. To appreciate the reasoning behind this argument we must look at some of the history of the development of the technologies and concepts that have contributed to the current family of virtual worlds and the problems and purposes these stepping-stones intended to resolve or achieve.
Authors adopting Bartle’s view have generally also adopted the view that virtual reality is essentially a hardware interfacing technology, and hence that the environments managed in this space are of no consequence. The misconception that virtual reality is a collection of hardware (data gloves, head mounted displays etc) neglects the very meaning of virtual reality, which seeks to evoke a feeling of immersion and presence within the virtual space. In the virtual reality research stream, using external hardware devices to enter a virtual world is only one method by which immersion and presence is achieved (Briggs, 1996; Steuer, 1992). No external device will ensure a user’s experience of immersion if the world they enter is an unconvincing generator of an alternative reality for the participant. Furthermore, if virtual reality is to be excluded from the scope of the definition of virtual worlds, then the existence of VR plug-and-play devices – such as stereoscopic headsets, data gloves or haptic controls readily available for use with many mass market virtual worlds that otherwise fall within Bartle’s definition (for example, the Vuzix iWear headset, the Evolution Motion Glove for the PS1, the Wii Remote for the Nintendo Wii, the MS Force Feedback controller for Flight Simulator, etc.) – would seem to contradict the proposed disconnect between the study of virtual worlds and virtual reality. Lastly, the exclusion of virtual reality environments from the definition of virtual worlds ignores the fact that in the 3D virtual world space many of the technologies and concepts utilised were contributed by the virtual reality research stream (as will become clear from the history presented in the following sections).
In the education context, virtual reality technologies (as expressed, for example, in simulators) are a critical and essential contribution to the pantheon of virtual (training) worlds (Bailenson et al., 2007; Dede, 2004). In this researcher’s view, virtual reality environments are a subset of virtual worlds, and the two spaces are increasingly converging, if they have not already converged, in current virtual world examples such as America’s Army and Second Life, and in massive multiplayer training environments like SIMNET (Lang, Maclntyre, & Zugaza, 2008; Lenoir, 2003; Zyda, 2005).
==2.6 Dimensioning Virtual Worlds==
===2.6.1 The Degree of Virtuality===
The degree to which a world is ‘virtual’ can be viewed as a sliding scale between physical and virtual. Milgram and Kishino (1994) present a taxonomy for mixed reality visual displays called a ‘reality-virtuality continuum’ (Figure 5). On the left hand side of the scale is the ‘real environment’, which is equivalent to the real or tangible world, while on the extreme right is the ‘virtual environment’, which is equivalent to an artificially generated world. The region between these two extremes is classified as ‘mixed reality’ (MR), made up of a combination of both real and virtual matter.[4]
[[image:Reality_Virtuality_Continuum_005.jpg]]
Figure 5. Reality-Virtuality Continuum: Representation Scale for Visual Display
(Milgram & Kishino, 1994)
Figure 6 illustrates an example of the use of the reality-virtuality continuum taken from the MagicBook Project (Billinghurst, Kato, & Poupyrev, 2001). On the left of the figure is a book that is real (i.e. the real world environment); in the middle is the same book viewed through an Augmented Reality (AR) display, where figures appear like pop-up characters on top of the book (i.e. mixed reality or augmented reality); while on the right is the same book viewed within a virtual environment, where the “reader” becomes one of the characters within the book.
[[image:The_Magic_Project_006.jpg]]
Figure 6. The MagicBook Project: An Example Of The Full Reality-Virtuality Continuum
While the MagicBook project was conceived around the integration of physical (tangible) real world objects with digitally generated virtual world objects, when the real world objects are themselves digital or intangible – such as course materials of photographic images, text, or other digital content – the merging of the ‘Real World’ and the ‘Virtual World’ becomes less obvious. For example, real world authors Pamela Woodard and Wilbur Witt have published their works in the Second Life virtual world first or simultaneously with publication in the real world (Bell, 2006). The Second Life virtual world can integrate conventional HTML web page content directly into the virtual environment (Release Candidate, 2008). Content developers, and particularly trainers and presenters in Second Life, routinely import textures and slides and stream sound and video from outside of the virtual world into the virtual space.
In the context of Milgram and Kishino’s reality-virtuality continuum, this research focuses on the right hand end of the scale, i.e. using a desktop display of a virtual world in which all content is delivered virtually. In contrast to the MagicBook project, this research considers (in the education context) the affordances of two virtualisation strategies – a direct reproduction of real world delivery in the virtual world (in part, by importing materials not generated in the virtual world into it), and a transformation of real world material into virtual material (in part, by recasting those materials into virtually generated form).
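Milgram and Kishino’s continuum can be illustrated computationally. The minimal Python sketch below models the continuum as a single value between 0.0 (real environment) and 1.0 (virtual environment) and classifies the three MagicBook viewing modes; the numeric positions are this researcher’s illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: the reality-virtuality continuum modelled as a
# value in [0.0, 1.0]. The extreme points are the 'real environment'
# and the 'virtual environment'; everything strictly between them is
# classified as mixed reality (MR).

def classify(virtuality: float) -> str:
    """Place a display configuration on the continuum."""
    if virtuality == 0.0:
        return "real environment"
    if virtuality == 1.0:
        return "virtual environment"
    return "mixed reality"

# The three MagicBook viewing modes (positions assumed for illustration).
magic_book = {
    "physical book": 0.0,    # the tangible book itself
    "AR view": 0.5,          # pop-up figures overlaid on the real page
    "immersive view": 1.0,   # the reader enters the book's world
}

for stage, v in magic_book.items():
    print(f"{stage}: {classify(v)}")
```

The single-axis model deliberately mirrors the figure: only the two endpoints are pure, and any blend of real and virtual matter falls into the MR region.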
===2.6.2 The Degree of Immersion and Presence===
====2.6.2.1 Introduction====
Virtual reality literature often separates a user’s experience of a virtual environment into physical and psychological components (Benford, Greenhalgh, Reynard, Brown, & Koleva, 1998; Biocca & Delaney, 1995; Sheridan, 1992; Slater, 1999; Slater & Wilbur, 1997; Steuer, 1992). The psychological components include interaction (or connectedness) and belief – the contribution of the participant, or their willingness to believe in the reality of what they would otherwise know to be unreal – while the physical components are aided by the external mechanical and functional capabilities of the system.
In exploring the factors determining the effectiveness of virtual reality environments, Burdea and Coiffet (2003) determined that the aim of virtual reality is to achieve a trio of ‘Immersion, Interaction and Imagination’ (Figure 7. The Three I's of Virtual Reality), each of which holds equal significance in the user’s experience of virtual reality systems. A virtual reality system seeks to fully engage the user in the virtual space. They proposed that excluding any one of these features reduced a user to passive participation, and ultimately detracted from the perceived ‘reality’ of the experience.
[[image:Immersion_Interaction_Imagination_007.jpg]]
Figure 7. The Three I's of Virtual Reality
Steuer (1992) defined user involvement to be a combination of the human experience which in turn is dependent on the technology (Figure 8). Telepresence (or presence) is the human sensation of ‘being there’ in a virtual environment[5], and is seen as influenced in part by the technology, in terms of the vividness (richness, realism) and interactivity (response) of the environment.
[[image:Steuer_Variables_Influencing_Telepresence_008.jpg]]
Figure 8. Technological Variables Influencing Telepresence (Steuer, 1992)
Slater and Wilbur (1999; 1997) revisited these concepts in later work, defining a user’s experience in terms of immersion and presence. Immersion is seen as an objective measure of ‘systems immersion’ technology, such as field of view, quality of display, etc., while presence is seen as a subjective measure, a psychological sensation of ‘being there’. From here on we will be using the terms immersion and presence as defined by Slater and Wilbur.
====2.6.2.2 Immersion====
Benford et al. (1998) propose classifications of artificiality and transportation for collaborative environments (Figure 9) that extend Milgram and Kishino’s reality-virtuality continuum. Artificiality (physical-synthetic) is equivalent to the reality-virtuality continuum. Transportation (local-remote) is the degree to which a participant becomes removed from their local space to operate in a remote space, which they define to be similar to the concept of immersion. For example, CVEs (Collaborative Virtual Environments[6]) are placed on a scale of partial to remote transportation. A fully immersive CVE would represent the ultimate level of transportation: in a virtual reality system using devices such as an HMD, data gloves, and tactile and aural equipment that allowed for no outside distraction, the participant would be operating completely within the virtual environment and be fully remote from their local environment[7]. A desktop CVE, by contrast, is only partially immersive, as one’s local surroundings form a part of the virtual environment, e.g. a field of view that allows for head turning away from the virtual space (Sheridan, 1992). In the context of Benford et al.’s transportation scale, this research is conducted using desktop CVEs and is therefore only partially immersive.
[[image:Artificiality_Transportation_as_SS_Metrics_009.jpg]]
Figure 9. Shared Space Technology According to Artificiality and Transportation
====2.6.2.3 Presence====
Research in online gaming virtual worlds has tended to focus on the human experience (presence) of virtual worlds rather than the ‘systems immersion’ aspects, while studies of virtual reality environments have tended to consider both. This is possibly a function of the common standard interface for massively multiplayer game environments, which has traditionally been the desktop computer equipped with a mouse and keyboard. Although various more advanced input devices (head mounted displays, 3D mice, etc.) have been available to the mass market for many years, they are not yet widely utilised.
The degree of presence is often linked to the effectiveness of a virtual environment (Witmer & Singer, 1998), and due to its subjective nature it is possibly the most difficult aspect to comprehend and therefore to measure (Slater & Usoh, 1993). Hence, this area has been widely researched, with various explanations as to what constitutes presence in a virtual environment (Schuemie, Straaten, Krijn, & Mast, 2001). The sense of ‘being there’ in the environment is subjective: Slater and Usoh (1993; 1994) describe presence as similar to a person’s ‘willingness to suspend disbelief’, a concept derived from the British poet and literary critic Samuel Coleridge (1772-1834), who in his autobiography (1817) describes the phenomenon whereby a person becomes so engaged in a narrative that they are willing to believe an event is true, even if only for a brief moment. Although suspension of disbelief is often linked today with mediums such as film and literature, virtual worlds (especially Role Playing Game (RPG) worlds) provide many of the same traits, in which the user can be thought of as an actor within the virtual world who forms a part of the storyline.
A number of presence classification strategies have been proposed by various authors. We will consider:
#Schroeder - focussing on the importance of social interaction
#Bartle – focussing on the degree of commitment in the environment
Schroeder (2006) presents presence in a continuum of shared virtual environments (SVE) within a three-dimensional model (Figure 10). Presence (x), copresence (y) and connected presence (z) can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. Connected presence can be thought of as the extent to which a relationship is mediated when presence and copresence exist. Mapping is done by comparison with a physical face-to-face relationship (0,0,0) and an entirely immersive environment such as a networked Cave (1,1,1). For example, in the face-to-face case (0,0,0) there is no presence (and thus no copresence), as no meeting is taking place in a virtual environment, whereas in the case of a networked Cave (1,1,1) the entire relationship (and environment) is virtual, and the affordances allow for high connected presence.
[[image:Presence_Copresence_Connected-Presence_010.jpg]]
Figure 10. Presence, Copresence, and Connected Presence
In different media for being there together
Of interest in Schroeder’s model is the comparison of desktop SVEs and online computer games. The example given in the model for a desktop SVE is Active Worlds, a massively multiplayer online (MMO) social virtual world, and the example provided in his paper for an online game is Quake, which at the time provided for up to 16 players sharing a common space. Both are virtual worlds, use text chat and sound, and use avatars to project the participant into the virtual world (although Quake takes a first person view exclusively). For the purpose of the analysis the main differences were perceived as the number of simultaneous players sharing the common virtual space and the imposition of clear game driven objectives in Quake, and the absence of those same game driven objectives in Active Worlds. Yet Active Worlds was seen as providing a higher level of connected presence. Why? Compared with the other SVEs presented in the model, the distinction was seen to lie in the concept of the ‘game’ rather than in the number of players. Active Worlds is a social world in which no plot is provided to measure the success or failure of an individual, unlike Quake, where the measure of success is clear and the entire activity and function of the environment is the relentless pursuit of that individual success. It was therefore deduced that a social (game) world provides for more connected presence than an individually focussed, plot-driven gaming virtual world (at least as analysed by Schroeder).
Schroeder’s observation of higher connected presence in social virtual worlds seems to fit with Heeter’s (1992; 2003) definition of social presence, in which she defines presence in terms of individual presence, social presence and environmental presence. The presence of an individual is increased when social relationships are formed, which is based upon the social component of perceptual stimuli. When an environment or situation is focused on the relationship (rather than on killing a monster, as in RPGs), a higher social presence will be achieved.[8]
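Schroeder’s three-dimensional model lends itself to a simple computational sketch. In the minimal Python example below, each medium is represented as a point (presence, copresence, connected presence); the corner points for face-to-face and the networked Cave come from Schroeder’s model, but the intermediate coordinates for the desktop SVE and online game cases are illustrative assumptions only, chosen to reflect the qualitative ordering discussed above.

```python
# A minimal sketch of Schroeder's (2006) model: media as points in the
# (presence, copresence, connected presence) space.
from math import dist  # Euclidean distance, Python 3.8+

media = {
    "face-to-face":   (0.0, 0.0, 0.0),  # no virtual mediation at all
    "desktop SVE":    (0.5, 0.5, 0.7),  # e.g. Active Worlds (assumed values)
    "online game":    (0.5, 0.5, 0.4),  # e.g. Quake (assumed values)
    "networked Cave": (1.0, 1.0, 1.0),  # fully virtual relationship
}

def distance_from_cave(name: str) -> float:
    """Euclidean distance from the fully virtual corner (1, 1, 1)."""
    return dist(media[name], (1.0, 1.0, 1.0))

# With equal presence and copresence, the social world's higher connected
# presence places it nearer the Cave corner than the plot-driven game.
assert distance_from_cave("desktop SVE") < distance_from_cave("online game")
```

The sketch makes the geometry of the argument explicit: Active Worlds and Quake differ only on the connected presence axis, and that single axis is what separates them in the model.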
Bartle (2003, p. 42) identifies a system of levels of immersion (which in this paper we have defined as presence[9]) based upon a linear scale of: Player (the real person), Avatar (the digital puppet), Character (the representation in the world, e.g. character name, role, etc.) and Persona (the player’s identity in the virtual world, where the player is the Character and is in the virtual world). Persona is similar to the concept of presence: if your character is killed ‘you feel like you have died’; there is no distinction between the character and the player, they are one, the Persona. Bartle believes that the avatar and character are just steps along the way to persona. Persona is when a person ‘stops playing the world and starts living in the virtual world’.
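Because Bartle’s scale is explicitly linear, it can be captured as an ordered enumeration. The Python sketch below is an illustration only, not Bartle’s own formalisation; the ordering is the point, since avatar and character are steps along the way to persona.

```python
# Bartle's (2003) linear scale of immersion levels, sketched as an
# ordered enumeration (illustrative formalisation, not Bartle's own).
from enum import IntEnum

class ImmersionLevel(IntEnum):
    PLAYER = 1     # the real person at the keyboard
    AVATAR = 2     # the digital puppet the player drives
    CHARACTER = 3  # the in-world representation: name, role, etc.
    PERSONA = 4    # player and character merge: living in the world

# The scale is strictly ordered from player through to persona.
assert ImmersionLevel.PLAYER < ImmersionLevel.AVATAR < ImmersionLevel.CHARACTER < ImmersionLevel.PERSONA
```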
==2.7 Influences on Virtual Worlds from Art and Literature==
===2.7.1 Introduction===
The concept of a virtual world is by no means unique to computing. The thought of exploring an imaginary realm has captivated people’s imagination throughout time.
“If we define that a virtual world is a place described by words and/or projected through pictures, which creates a space in the imagination real enough that you can feel you are inside of it, then the painted caves of our ancestors, shadow puppetry, the 17th-century Lanterna Magica, a good book, play or movie are all gateways to virtual worlds. Humanity’s most powerful new tool, the digital computer, was also destined to become a purveyor of virtual worlds, but with a new twist: The computer enables the virtual world to be both inhabited and co-created by people participating from different physical locations.”(Damer, 2007, p. 2)
At least with respect to the massively multiplayer online virtual worlds/role playing games (MMOVW, or MMORPG), all of today’s examples can trace their paradigms to literature. Some, such as Eve, Entropia Universe and World of Warcraft, are amalgams of a body of works and ideas, while others, such as MUD1 (Sword of the Phoenix (Howard, 1932)) and Second Life (Snow Crash (Stephenson, 1992)), were directly inspired by specific literary works.
Consequently, to properly understand the ‘state of the art’ represented by today’s multi-user, connected virtual worlds and the gaming, social and business rules they have adopted to govern them, it is essential to consider the context from which they have been derived, and the art that has influenced their creators. While some operational paradigms in virtual worlds are technology constraints, functional capability constraints can be as much a condition of the imagined world being implemented as a real constraint of the technology of the day. To appreciate this fact one need only compare the camera controls of Project Entropia with those of Second Life – two environments of comparable age – or the commercial capabilities of these two environments with those of World of Warcraft. In each case the differences and apparent restrictions are a game design decision rather than a technology constraint.
===2.7.2 Virtual Worlds of the Arts===
James Pearson (2002) believes that from as early as 30,000 years ago, in the Chauvet Cave in France, shamans used cave art as a means to document their experiences of travel to the dream world. Packer and Jordan (2002) draw a similar parallel in their book on virtual reality, describing how the Cro-Magnon of 15,000 BC in the Lascaux caves of south-western France used cave art (Figure 11), with candles and the acid aroma of animal fat, to create a magical theatre of the senses.
[[image:Cave_Art_BC_011.jpg]]
Figure 11. The caves of Lascaux: Cave Art 15,000 BC
The concept of Gesamtkunstwerk (total artwork) developed by the German composer Richard Wagner (1813-1883) (Figure 12) has also been cited as an early pioneering step toward immersion and presence in virtual worlds (Grau, 1999; Klich, 2007; Packer & Jordan, 2002). Wagner believed that “Artistic Man can only fully content himself by uniting every branch of Art into the common Artwork”, a synergy that includes not only the performance but all that surrounds it, so that mankind “...forgets the confines of the auditorium, and lives and breathes now only in the artwork which seems to it as Life itself, and on the stage which seems the wide expanse of the whole World” (Wagner, 1849, p. 184 & 186).
[[image:Wagner_Gesamtkunstwerk_012.jpg]]
Figure 12. Richard Wagner's Gesamtkunstwerk (Total Artwork)
===2.7.3 Virtual Worlds of Fiction and Fantasy===
There are numerous examples of virtual worlds that have been explored through fiction and fantasy. Each has contributed to the illusion of virtual worlds becoming a reality (Bartle, 2003; Chesher, 1994).
In Lewis Carroll’s novel Alice's Adventures in Wonderland (1865), Alice falls down a rabbit hole to explore a fantasy world inhabited by peculiar and anthropomorphic creatures. Similarly, in Carroll’s follow-on novel, Through the Looking Glass (1871), Alice explores a world behind a mirror. Hattori (1991) saw Lewis Carroll’s novels as a paradigm for modern virtual reality systems (Figure 13), blending the physical space with fantasy in a rapidly changing environment. To this extent, Carroll’s works provide a perfect analogy for the design and development of virtual worlds (Rosenblum, 1995; West Virginia University, 2008). An explorative virtual world was realised in the children’s computer game The Manhole (1988-2007), which was based upon Carroll’s novel Alice’s Adventures in Wonderland (Wikipedia, 2008a).
[[image:Alice_via_Caroll_and_Hattori_013.jpg]]
Figure 13. 'Through the Looking Glass' Carroll (1871) & 'The World of Virtual Reality' Hattori (1991)
Within the fantasy literary genre, a key influence has been the works of J R R Tolkien, starting with The Hobbit (1937) and its sequel The Lord of the Rings (1954, 1955) (Figure 14) – an adventure fantasy that takes place in an imaginary world called Middle-Earth, containing races such as Hobbits, Wizards, Elves, Orcs, Dwarves and Trolls. Tolkien’s literary style was so popular that the Oxford dictionary termed his approach tolkienesque[10].
[[image:JRR_Tolkein_Book_Covers_014.jpg]]
Figure 14. The Hobbit & The Lord of the Rings by J. R. R. Tolkien (1937, 1954, 1955)
With respect to today’s virtual worlds, Tolkien’s contribution has not been merely the construction of a raft of characters, racial groups and social concepts for role playing game inhabitants and interaction rules, but most importantly his deep backgrounding of the imagined worlds. He did not merely describe his characters within the context and flow of the story line; he extended beyond what was needed to tell a story into what was needed to make us believe in the real existence of his virtual worlds. Tolkien provides the reader with immaculate detail and descriptions to immerse them in the world of Middle-Earth. Both books contained land maps (Figure 14), and the final book of The Lord of the Rings (released in 3 parts) contained appendices describing chronologies, histories, family trees, languages and translations, and a calendar and dating system. Being a professor at Leeds and Oxford Universities, he approached his work more like an academic anthropological study of an imagined world than a novelist would (Macmillan, 2008).
In so doing Tolkien demonstrated a fundamental understanding of a core strategy in establishing convincing presence – the necessity for a consistent, credible back story underpinning the virtual world. It is an early example of the depth of design that many later virtual worlds would exhibit in order to create a convincing sense of presence for the participant (Bartle, 2003; Schmidt, Kinzer, & Greenbaum, 2007).
Two virtual worlds that have been translated from Tolkien’s literature are the online virtual world ‘Lord of the Rings Online’ (2007) and PLATO’s MUD virtual world ‘Mines of Moria’ (1974).
More recently, literature has turned to imagining realities in which computational virtual worlds are a fundamental component of the plot. It is from this group that many of the terms now used to describe aspects and elements of virtual worlds are derived or were popularised, such as ‘avatar’, ‘metaverse’, ‘cyber-space’, etc. Some recent examples of novels whose plots feature a computational virtual world are True Names (Vinge, 1981), Neuromancer (Gibson, 1984) and Snow Crash (Stephenson, 1992) (Figure 15).
[[image:Recent_VR_Literature_Covers_015.jpg]]
Figure 15. Recent Literature True Names (Vinge, 1981), Neuromancer (Gibson, 1984), Snow Crash (Stephenson, 1992)
'''Vernor Vinge’s True Names''' is not as well known as other novels in this genre, but it was the first to present the concept of a person entering a computational virtual world and meeting other people in ‘the other plane’ (Kelly, 1995). It was also unique in bringing the concept of anonymity to the digital world, with one’s digital persona (handle) being different from one’s real self and with a necessity to hide one’s real identity, and thus one’s true name (hence the title). It was translated to the computational virtual world in the form of ‘Habitat’ – the first graphical social networking virtual world (Farmer, 1992).
'''William Gibson’s Neuromancer''', a true cyberpunk[11] novel, is possibly the most widely quoted in the virtual environment space (Chesher, 1994). In this novel Gibson coined the term cyberspace, with the concept of a viable parallel online world capable of critically impacting events and commerce in the real world.
'''Neal Stephenson's Snow Crash''' is where the term Metaverse was coined. The Metaverse is a planet-sized city with one continuous street 65,536 kilometres (2<sup>16</sup> km) in length, along which millions of people (known as avatars) travel daily in search of entertainment, trade or social interaction. Although similar in one sense to Neuromancer, it came from a different perspective in that people actually lived in the Metaverse, not as cyberpunks getting up to mischief but as everyday people living a mainstream real life in the virtual world. In this world real commerce was conducted and virtual artefacts were bought and sold with real world consequences, a model since realised in the development of the virtual world Second Life.
Hollywood has also contributed to the fantasy of virtual worlds becoming reality. Films such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and The Matrix (Wachowski & Wachowski, 1999) (Figure 16), to name just a few, gave us visualisations of virtual worlds that books could only describe, and in some cases explored the haptic interfaces now being realised (Chesher, 1994).
[[image:VW_Films_Tron_LawnmowerMan_Matrix_016.jpg]]
Figure 16. Hollywood Films
Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992), The Matrix (Wachowski & Wachowski, 1999)
At the time of their release, the novels and movies discussed above may have seemed futuristic and their concepts unobtainable, but today, with advances in networking, computational processing power and our understanding of the sociology of virtual environments, we are much closer (if not already there). Perhaps a ‘jack-in’ device that stimulates our nervous system to travel into cyberspace (Neuromancer, Gibson, 1984) is still a little way off (and may be too intrusive for some), and smelling odours or feeling textures within a virtual world may never be quite the same as the real life experience, but much of what once seemed unimaginable in these works has become reality today. With technological advances and the rapid adoption of internet enabled online virtual worlds, many of these concepts are less science fiction and more science fact than they once were.
==2.8 The History of Computational Virtual Worlds==
===2.8.1 Introduction===
In a lecture delivered in 1965, Ivan Sutherland took the first steps toward combining the computer-based design, construction, navigation and habitation of software generated virtual worlds (Packer & Jordan, 2002). Here Sutherland laid down a vision for the development of virtual worlds, as paraphrased by Brooks (1999, p. 16):
<blockquote>
“Don’t think of that thing as a screen, think of it as a window, a window through which one looks into a virtual world. The challenge to computer graphics is to make that virtual world look real, sound real, move and respond to interaction in real-time and even feel real.”
</blockquote>
The new-born medium of the graphical, digital virtual world experienced a “Cambrian Explosion” of diversity in the 1980s and ‘90s, with offspring species of many genres: first-person shooters, fantasy role-playing games, simulators, shared board and game tables, and social virtual worlds. (Damer, 2007)
The massively multiplayer online virtual worlds of today, with their world-wide user bases, are essentially a consequence of the mass adoption of the internet, which commenced in the early 1990s. Since the internet first achieved general acceptance, these worlds have advanced substantially in technical capabilities, graphics and number of subscribers (Figure 17) (Woodcock, 2008). See Appendix B: MMOG Analysis, for a break-down of the MMOGs contained in this graph.
[[image:MMOVW_Growth_Rate_017.jpg]]
Figure 17. Massive Multiplayer Online Virtual World Growth Chart 98-2008
The virtual worlds of today (such as World of Warcraft, Entropia Universe, America’s Army, and Second Life, etc) represent a convergence of several disparate computational, technical and social origins and drivers. Current virtual worlds combine 3D visualisation, game theory, text messaging, animations, context and text sensitive gesturing, natural language processing, spatial voice & audio, artificial intelligence, agency theory, physics, connectedness, persistence, business strategy, sensory hardware and haptic interfaces, telecommunications, 2D image processing, video chroma-keying, social networking and many other influences to achieve their sense of immersion and presence. In this section we explore some of the milestones along these convergent paths.
As many of the influences that have contributed to our latest virtual world are derived from research streams that were concurrently pursued over more than 50 years, we shall look at the history of virtual worlds in six streams:
#Hardware based user interfaces and virtual reality environments
#Early graphical computer games
#Text and Text+ based Virtual Worlds
#2.5D and 3D graphical multi-player virtual worlds, broken down into:
#: a. MMORPGs
#: b. Social Virtual Worlds
#Simulation and Training Worlds
It should be noted that, while we will be considering the history in these streams, some virtual worlds necessarily exist in more than one stream. The grouping is that of the researcher, based on an extensive assessment of the literature, rather than the view of any one author.
===2.8.2 Hardware Based User Interfaces and Virtual Reality Systems===
====2.8.2.1 Introduction====
These two areas are grouped together, not because Virtual Reality (VR) Systems are a hardware solution, but rather because the work done in virtual reality worlds has generally aimed for extremely high levels of both immersion and presence and has therefore generally (although not always) been coupled with hardware in the form of purpose built user interfaces, designed to assist the sense of immersion such as headsets, or data gloves, etc.
The importance of the progress in VR systems to virtual worlds is that they have contributed or assisted much of the fundamental graphical rendering technology, 3D animation studies and spatial awareness research, and have conceptualised the immersive aspects of virtual worlds.
====2.8.2.2 Sensorama====
One of the earliest inventions in the genre of virtual world simulators was developed by the cinematographer Morton Heilig. Inspired by Fred Waller’s work with Cinerama[12], Heilig presented a paper in 1955, ‘The Cinema of the Future’ (reprinted in Packer & Jordan, 2002). In an extension of Wagner’s (1849) Gesamtkunstwerk (total artwork) concept (Holmberg, 2003), Heilig believed that the logical extension of cinema was to provide the audience a first person experience of film using all their senses – “Open your eyes, listen, smell, and feel—sense the world in all its magnificent colors, depth, sounds, odors, and textures—this is the cinema of the future!” (Packer & Jordan, 2002, p. 246)
[[image:Morton_Heilig_Sensorama_Simulator_018.jpg]]
Figure 18. Morton Heilig, Sensorama Simulator, U.S. Patent #3050870, 1962
Heilig developed and patented the Sensorama Simulator (Figure 18) in 1962. The Sensorama was a single person simulator that offered the viewer a multi-sensory, fully immersive theatre. The viewer could sit and watch a short three-dimensional stereoscopic movie that included stereo sound, an odour generator, force feedback handle bars, chair motion and wind on the viewer’s face (Rheingold, 1992). Heilig believed that the Sensorama Simulator could be the next generation of theatre, placed in hotels and lobbies or any small space that could fit his miniature theatre (Heilig, 1955, p. 345).
Heilig also recognised that the Sensorama Simulator offered training and learning potential for educational and industrial institutions (Rheingold, 1992, p. 58), but unfortunately the Sensorama Simulator never took off; it was developed at “a time when the business community couldn’t figure out what to do with it” (Laurel, 1991, p. 52). This may have been different a decade later, when Pong kicked off the arcade game industry and education, industry and government saw great potential in investing in virtual world technology, as they did with the Head Mounted Display (HMD).
====2.8.2.3 Head-Mounted Display====
In 1968 Ivan Sutherland presented the first computerised graphical HMD (Figure 19) (Sutherland, 1968)[13]. The HMD had a cathode ray tube (CRT) for each eye, presenting a simple three-dimensional wire-frame view of a room with motion tracking as the viewer moved their head. It became known as ‘The Sword of Damocles’ after the Greek legend of a man placed in a precarious position of luxury with a sword suspended above his head (Oxford Dictionary, 1989); similarly, the HMD had a computer suspended above the user’s head, attached by a mechanical arm (Figure 19, right) (Carlson, 2003).
[[image:HUD_The_Sword_of_Damocles_019.jpg]]
Figure 19. Head Mounted Display first called The Sword of Damocles (Sutherland,1968)
The HMD was a significant milestone in the development of virtual reality technology, and has since been used in a variety of virtual world applications. It holds advantages over a traditional computer monitor, such as allowing full head and body movement, uninterrupted viewing in totally immersive HMDs, and simultaneous viewing of real world and virtual world artefacts in ‘see-through’ HMDs, sometimes called Augmented Reality Displays (Rolland & Hua, 2005).
Today’s HMDs are more compact than Sutherland’s 1960s prototype (Figure 20). The figure shows, on the left, an HMD used for mixed reality environments similar to that designed by Sutherland, and, on the right, an immersive HMD compatible with several online and gaming virtual worlds.
[[image:HUD_See_Through_and_Immersive_020.jpg]]
Figure 20. Today's Head Mounted Displays - Left: See-Though HMD - Right: Immersive HMD
===2.8.3 Early Graphical Computer Games===
Computer games have had a large influence on the evolution of virtual worlds, both in the development and in the use of the technology. The contribution of games includes computational game theory, 2D and 3D graphics, social modelling, simulation, strategies for achieving presence, artificial intelligence, computational game physics and, possibly most significantly, the delivery of a massive consumer market to fund and drive the investment needed for innovation and technology improvement. By far the majority of today’s online virtual worlds were conceived and/or delivered as games; they have subsequently evolved into general business or training platforms, which are sometimes referred to as Serious Games (Annetta, Murray, Laird, Bohr, & Park, 2006).
The early computer games can be traced to a few innovative applications (Figure 21):
*'''Tennis for Two''': In 1958 William Higinbotham developed the first electronic game simulator, using an oscilloscope display that presented a two-dimensional side view of a tennis court. It was a two-player game in which each player could control the direction of the bouncing ball by turning a knob on a hand-held device. Originally developed by Higinbotham to occupy visitors to Brookhaven National Laboratory during open days, the game had queues of people waiting to play (Brookhaven National Laboratory, n.d.). Tennis for Two introduced the concepts of a shared multi-player electronic game experience, a rule-based environment managed by a machine, and an electronic space where the actions of one player in the shared space affected the experience of another. The attention the game attracted demonstrated the willingness of participants to accept the visual and sensory limitations of a machine-managed game environment and immerse themselves in the experience.
*'''Spacewar!''': The idea originated in 1961 with Steve Russell at the Massachusetts Institute of Technology (MIT), and by 1962 the game had been released with assistance from his colleagues. Spacewar! was the first official release of a two-dimensional computer game.[14] It was a two-player game in which each player piloted a spaceship, firing bullets at the other while avoiding being pulled into the sun at the centre of the screen. Developed originally to demonstrate the power of the new PDP-1 computer, the game was a good demonstration of both the graphics capabilities and the processing power of the machine (Computer History Museum, n.d.; Markowitz, 2000). Later, in 1969, Rick Blomme modified the game to run on PLATO, which made it the first game to be networked (Koster, 2002; Mulligan, 2002). While Tennis for Two was the first multiplayer electronic game, Spacewar! was the first computer-based multiplayer game. It thus contributed the same key concepts and ideas as Tennis for Two, only for the first time in a computer-managed environment.
*'''Maze War''': In 1973-1974 Steve Colley developed the first three-dimensional ‘first person shooter’ (FPS) game, Maze War, at NASA Ames Research Center. A player would navigate around a maze searching for other players to shoot. As seen below (top right), the player had a first person view (the eyeball seen in this picture is the other player). Placing the player ‘in-world’ as a part of the game in this way is a significant concept of virtual world games. Maze War also provided other innovations now common to virtual worlds, such as instant messaging, levelling and non-player robot characters (Damer, 2007). The game, which started as a two-player game, was eventually connected to ARPANET (the forerunner of our current internet network technology), allowing several users from remote locations to play and interact (Colley, n.d.; Damer, 2004). Maze War can therefore lay claim to being the progenitor of virtual worlds, but not an actual virtual world, because of its lack of persistence.
[[image:Early_Computer_Games_1958_To_1974_021.jpg]]
Figure 21. Early Computer Games 1958 - 1974
*'''DOOM (1993) (II, 1994)''': A 3D FPS game that was influential on both a conceptual and a technical level (Friedl, 2002; Mulligan, 2000). In DOOM the concept of Maze War was re-implemented in a much more graphically rich 3D environment. Although only a single-player game, the key innovation of relevance was the method used to manage the rendering of the 3D space, allowing multiple non-player characters to participate in the 3D environment with the player. The strategy adopted was essentially to divide the world into many small rooms surrounded on all sides by walls (essentially a cave system). By rendering only a single room at a time, the entire resources of the computer could be devoted to a known, confined rendering space, achieving the illusion of a highly detailed rendering with the limited computational resources available on the PCs of the day. Although higher quality 3D rendered games were available some seven years earlier on Amiga computers from 1986 (including some utilising real-time ray tracing technology), these relied on dedicated proprietary graphics hardware architected for games and did not provide a 3D space management paradigm that could be easily translated to the future demands of online 3D games. The DOOM model could, precisely because it was architected for the graphically and processor challenged generalised home PCs of the day rather than for proprietary games machines such as the Amiga. The DOOM games engine was utilised in many subsequent games and later formed the basis for the model adopted for the online game Quake (Petrich, n.d.; Wikipedia Doom, 2008).
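The room-at-a-time strategy described above can be sketched in a few lines of Python. This is a schematic illustration only (all class and variable names are invented for the example, and the real DOOM renderer was considerably more sophisticated): the point is simply that per-frame rendering cost is bounded by the contents of the current room rather than by the size of the whole world.

```python
class Room:
    def __init__(self, name, geometry):
        self.name = name
        self.geometry = geometry      # walls/floors to draw
        self.occupants = []           # player + non-player characters here

class World:
    def __init__(self):
        self.rooms = {}

    def render_frame(self, player_room):
        # Only the current room is submitted to the renderer; every other
        # room in self.rooms costs nothing this frame.
        draw_calls = list(player_room.geometry)
        for occupant in player_room.occupants:
            draw_calls.append(occupant)
        return draw_calls

world = World()
hall = Room("hall", ["wall_n", "wall_s", "floor"])
hall.occupants = ["player", "imp_1", "imp_2"]
world.rooms["hall"] = hall

# Rendering work is proportional to one room, regardless of world size.
print(len(world.render_frame(hall)))  # 6 draw calls: 3 geometry + 3 occupants
```

However many rooms the world contains, each frame touches only one of them, which is what allowed a highly detailed scene on the modest hardware of the day.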
Around the time of DOOM the game industry realised the importance of connecting people together for online gaming. Seeing the opportunity, developers began adding modem and LAN play, and later TCP/IP functionality, to their games, allowing both single-player and multiplayer connectivity. Early games allowed up to 4 players, but today’s games can have up to 64 players in a single game session (Quake Wars[15]). Some of the better known titles included:
*'''Quake''' (1996, a multiplayer extension of DOOM) saw over 80,000 people connected to 10,000+ simultaneous game sessions (Mulligan, 2000).
*'''Warcraft''' (1994) (II, 1995), which would eventually become the basis of the largest MMORPG today, World of Warcraft (2004), now with over 11 million subscribed users (Blizzard Entertainment Inc, 2008).
===2.8.4 Text Based Virtual Worlds===
====2.8.4.1 Text Virtual Worlds: MUDs====
In 1978 the first MUD (Multi User Dungeon) outside of the PLATO system (discussed under Training and Simulators) was created by Roy Trubshaw, a Computer Science undergraduate (shortly afterwards joined by Richard Bartle) at Essex University in England. It was a text based virtual world, coined a MUD by Bartle, based upon Robert E Howard’s (1932) fictional tale ‘The Phoenix on the Sword’. MUD1[16] was an adventure role playing game, with game levelling and chat rooms, which allowed up to 32 players to connect simultaneously over a remote connection (Figure 22) (Bartle, 2003).
[[image:Bartle_The_First_MUD_022.jpg]]
Figure 22. The First MUD: Roy Trubshaw and Richard Bartle (1978)
Early in the game’s history Essex University, on whose computers the game was hosted, became a part of ARPANET (the forerunner of the internet), and soon after MUD was distributed through that network and being played at universities throughout the world. Some of these institutions were also open for public access. Although copyrighted, many variations of MUD1 were made and distributed freely, arising from what Bartle (2003) describes as either player inspiration or pure frustration with the 32 player limitation, which made it impossible to play when dial-in lines were fully allocated.
Keegan (1997) identifies two main classifications of MUDs developed during this time (Figure 23) - the Essex MUDs (Trubshaw and Bartle’s) and Scepter of Goth (1978). Unfortunately Scepter died an early death: the game was sold and soon afterwards passed on to the creditors when the purchasing company ran out of money (Bartle, 2003). Most MUDs were therefore based upon the ideas and technical structure of Trubshaw and Bartle’s MUD (Bartle, 2003; Keegan, 1997).
[[image:Basic_MUD_Tree_Structure_023.jpg]]
Figure 23. Basic Tree Structure for MUD classification
MUD1 introduced a number of concepts retained by most of today’s virtual worlds. Among them are:
*The role and effectiveness of a text based narrative and text communication that contributed to, rather than detracted from, the sense of presence.
*Persistence in game play.
*Shared game space and cooperative (team based) activity.
*Non-player artificial intelligences, called AIs (or non-player characters), as part of the experience.
*Region based environment management.
*Role-playing as a central game theme.
*Characters and avatars (albeit text based in the early MUDs).
*Game defined goals but player implemented plots.
Region based environment management is a computational aid that warrants particular attention. It was also used by the DOOM 3D graphics engine to manage multi-user environments, allowing the computer to render the shared space one discrete region at a time. In DOOM this was a room, in MUD1 it was a cave, and in more recent virtual worlds it may be as much as a 65,000 sqm area (Second Life). This strategy provides a method of scaling virtual worlds to many regions by distributing the region management across many discrete servers, but imposes practical limits on the number of players that can be present in any given region at an instant in time (Hu & Liao, 2004).
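The region-to-server strategy can be sketched as follows. This is an illustrative model only (the class names, host names and player caps are hypothetical, not drawn from any particular world’s implementation): each region is owned by one server and enforces its own player cap, so the world scales by adding regions rather than by growing a single server.

```python
class Region:
    def __init__(self, name, server, max_players):
        self.name = name
        self.server = server          # host responsible for this region
        self.max_players = max_players
        self.players = set()

    def try_enter(self, player):
        # The per-region cap is the practical limit described in the text:
        # a region refuses entry once it is full.
        if len(self.players) >= self.max_players:
            return False
        self.players.add(player)
        return True

# Scaling out: add regions on new servers rather than growing one server.
regions = {
    "plaza":  Region("plaza",  "sim-host-1", max_players=2),
    "forest": Region("forest", "sim-host-2", max_players=2),
}

assert regions["plaza"].try_enter("avatar_a")
assert regions["plaza"].try_enter("avatar_b")
assert not regions["plaza"].try_enter("avatar_c")   # region full
assert regions["forest"].try_enter("avatar_c")      # another region is free
```

The same pattern appears at very different scales, from a single DOOM room to a Second Life region hosted on its own simulator process.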
MUD1 had a significant impact on virtual world design and development, dominating the online game space until the mid 1990s; MUD1 is therefore often marked as the beginning of the first generation of online virtual worlds (Bartle, 2003). MUD1 can still be played online today at british-legends.com (CompuServe, 2007).
====2.8.4.2 ASCII Virtual Worlds====
In the early 1980s pseudo-graphical interfaces were added to some MUDs in the form of ASCII virtual worlds. ASCII (American Standard Code for Information Interchange) is the most widely adopted character encoding on western computer systems. ASCII virtual worlds provided a pseudo-graphical display, making use of shape symbols and character positioning escape sequences to create crude planar maps of the terrain (dungeon) environment. The maps enhanced the description of the room provided by the text.
ASCII pseudo-graphical virtual worlds provided the player with a view of the world improved over the simple text prompt and description of MUDs. An example of an ASCII game, Islands of Kesmai (IOK), can be seen below (Figure 24). Developed in 1982 and released in 1984, the game provided the player with a third person, overhead view of the world. Walls were denoted by [], fire by ** and the players were letters (Bartle, 1990). IOK was CompuServe’s (a US ISP) best selling game, with players paying up to $12.50 per hour to play (based upon connection time, not game played), and usually had between 10-60 players online simultaneously (Bartle, 1990). Other ASCII games around this time were MegaWars I & MegaWars III (1983), NetHack (1987 (O'Donnell, 2003)), Sniper! and The Spy (Bartle, 1990).
[[image:RPG_Islands_Of_Kesmai_024.jpg]]
Figure 24. Islands of Kesmai ASCII Text Role Playing Game (1982-84)
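The pseudo-graphical display technique can be illustrated with a short Python sketch. The grid, symbols and layout below are invented for the example (they follow the conventions described for IOK above, not actual game data): map cells are composed into two-character columns, with walls as [], fire as ** and players as letters.

```python
WALL, FIRE, FLOOR = "#", "f", "."

def render(grid, players):
    # players maps (row, col) -> a single letter for that player
    lines = []
    for r, row in enumerate(grid):
        cells = []
        for c, cell in enumerate(row):
            if (r, c) in players:
                cells.append(" " + players[(r, c)])
            elif cell == WALL:
                cells.append("[]")
            elif cell == FIRE:
                cells.append("**")
            else:
                cells.append("  ")   # open floor
        lines.append("".join(cells))
    return "\n".join(lines)

grid = [
    [WALL, WALL, WALL, WALL],
    [WALL, FLOOR, FIRE, WALL],
    [WALL, FLOOR, FLOOR, WALL],
    [WALL, WALL, WALL, WALL],
]

# A 4x4 room with a fire and two players, B and A:
print(render(grid, {(1, 1): "B", (2, 1): "A"}))
```

Everything is ordinary printable text, which is what allowed these worlds to run over the plain terminal connections of the day.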
By the mid to late 1980s home computing and online networking service providers opened the gates to huge expansion for online virtual worlds. People paid for networking services by the hour, which gave these providers a strong incentive to get their subscribers hooked on virtual worlds. There was big money to be made, with 70% of revenue from one provider (GEnie) in the early 1990s coming from games. By 1993 a study showed that 10% of the traffic on the NSFNET backbone (a precursor to the internet consisting mainly of government and university networks) belonged to MUDs (Bartle, 2003).
===2.8.5 Graphical Virtual Worlds===
The text based MUDs evolved into two different streams: the 3D First Person Shooters such as DOOM and Quake which adopted the room at a time view of the world for 3D rendering, and the 2D graphical online virtual worlds that appeared in the early 1990s. Early examples include NeverWinter Nights (1991-1997), Shadow of Yserbius (1992-1996) and Kingdom of Drakkar (1992-Current) (Figure 25).
[[image:Graphical_2D_Virtual_Worlds_025.jpg]]
Figure 25. Graphical 2D Virtual Worlds
Unlike Habitat and WorldsAway (discussed under Social Networking Virtual Worlds), which predated these games, appearing in the mid-1980s, the graphically enhanced text based games were fantasy role playing games -- basically MUDs with graphics. Although 2D, some of these games were displayed isometrically on an angle, which gave the player an illusion of a three-dimensional view; for this reason these games are sometimes referred to as 2½D worlds (Bartle, 2003). These games used more sophisticated graphics (than the pseudo-graphical solutions) to improve the sense of presence experienced by the players, while retaining the text based narrative.
By the mid 1990s, with nearly 10 million internet hosts (Figure 26) (Slater III, 2002; Zakon, 2006) and price wars between providers, the internet opened the doors to millions, and hordes of inexpert computer users wanted to play games (Bartle, 2003). Game design had improved along with the graphical elements of virtual worlds, and graphics rendering capabilities on standard PCs, together with the emergence of common graphics file standards, made the development of virtual worlds possible, practical and more economical.
[[image:InternetParticipatingHosts_Count_1990_to_1998_026.jpg]]
Figure 26. The Internet No. of Participating Hosts Oct. ‘90 - Apr. ‘98
====2.8.5.1 MMORPGs====
By the mid 1990s the first 3D virtual world came online: Meridian 59 (1996-2000 & 2002-Current), although technically it used a pseudo-3D graphics engine (Axon, 2008; Bartle, 2003), providing a first person view in which the player could view all angles of the environment (Figure 27). It marked the beginnings of a new era of virtual worlds, with a massive 25,000 people signing up for the beta release (Axon, 2008). The game unfortunately met with limited commercial success (Bartle, 2003; Friedl, 2002) and was shut down in 2000, but was resurrected in 2002, with the updated version online today at meridian59.neardeathstudios.com.
[[image:Meridian_59_First_3D_Online_Virtual_World_027.jpg ]]
Figure 27. Meridian 59 First 3D Online Virtual World (1996)
The turning point for online virtual worlds was Ultima Online (1997-Current). Ultima had already met with success with the Ultima computer game series. With its online launch it had 50,000 subscribers within 3 months, and it was the first online virtual world to crack the 100,000 subscriber threshold within 12 months of release (which it did in under 6 months) (Bartle, 2003; Woodcock, 2008). This added a new dimension to the term multiplayer, in what has now come to be known as a Massively Multiplayer Online Role Playing Game, or MMORPG. Subscription peaked at 250,000 in 2003, with 75,000 being reported in December 2007 (Woodcock, 2008).
Ultima Online, consisting of a 2½D graphical virtual world, has remained visually much the same (Figure 28), although the client that runs the game (the same concept as a web browser) had a makeover in 2007 with Kingdom Reborn (right). The game has received regular expansions to the world, which provide new challenges and adventures for its players. In 2001 the client was upgraded to 3D (Wikipedia Ultima, 2008), but Electronic Arts recently announced that they will be de-supporting the 3D client, continuing to support only the 2D client going forward (Electronic Arts, 2007).
[[image:Ultima_Online_028.jpg]]
Figure 28. Ultima Online (1997-Current)
Other MMORPGs that started around the mid to late 1990s, and which can still be played online today, are Furcadia (1996, the longest running), The Realm (1996, second longest, 15 days behind Furcadia), Lineage (1998), EverQuest (1999) and Asheron's Call (1999).
In the more recent MMORPGs of today, not much has changed in game design from the original RPGs, but technically they have improved and provide much better graphics for the player (Figure 29). They have also increased substantially in popularity, with the largest subscription based MMORPG, World of Warcraft, recently climbing to over 11 million players (Blizzard Entertainment Inc, 2008). These players do not play in one virtual world, however; they are separated into different realms - the same game, but with different people. This contrasts with the social virtual worlds like Second Life, where all the users share one virtual world. In the next section we discuss social online virtual worlds, which, although a MMORPG can exist within the world itself (as mentioned earlier), follow a model of a virtual world very different from the dedicated MMORPGs.
[[image:MMOZRG_Eve_and_WOW_029.jpg]]
Figure 29. MMORPG's Eve & World of Warcraft
====2.8.5.2 Social Virtual Worlds====
The first attempt at a commercial large scale multi-user game was made by George Lucas’s Lucasfilm Games. Habitat, developed by Chip Morningstar and Randall Farmer, started development in 1985 (McLellan, 2004; Ray, 2008; Slator et al., 2007). Habitat was built to support thousands of simultaneous users, to run on the Commodore 64 home computer, and to be distributed via the network service provider Quantum Link (later known as AOL). Inspired by the science fiction novel ‘True Names’ (Vinge, 1981), the world contained a fully-fledged economy where citizens of the world could own a virtual business, build a house, fall in love, get married and even establish their own self governing laws (Morningstar & Farmer, 1990). Habitat, a 2D graphical world, looked similar to a cartoon (Figure 30, left), with the avatar (digital self) taking a third person view of the world. The storyline was based upon life rather than the fictional storylines of the MUDs, which placed greater emphasis on the social aspect of the world. Lucasfilm's Habitat was first released as a pilot in 1986, then later in 1988 as Club Caribe in North America, which reportedly sustained a population of 15,000 participants by 1990 (Morningstar & Farmer, 1990). In 1990 it was released in Japan as Fujitsu Habitat, and after extensive modifications Habitat was released again in 1995 as WorldsAway (Figure 30, right) (Damer, 2007) and again as Dreamscape in 2008.
[[image:VW_Habitat_and_Worldsaway_030.jpg]]
Figure 30. Habitat (86) First Graphical Virtual World Precursor to Worldsaway (95)
Habitat introduced some key concepts in virtual worlds:
*The term ‘Avatar’ into the general virtual world community;
*The idea of focussing on social networking as a key form of game play;
*An economy where people could trade both in world currency and artefacts; and
*Most importantly, the concept of living in a virtual world and leading an alternate life that was not dictated by the rules of a game (as with the dedicated MMORPG environments).
More recent social networking virtual worlds include Active Worlds (1995, 1997-current)[17], Second Life (2003-current) and There (2003-current) (Figure 31) – all of which have achieved a significant volume of educational interest as platforms for delivery of learning. The generalised nature of the social networking sites means that they tend to be more diverse in the range of facilities provided and the purposes to which they can be applied than the role playing game systems. They have generally provided participants with some form of content creation tools including the importing and/or exporting of non-virtual world artefacts. In the next section we discuss further the aspect of education in virtual worlds.
[[image:VW_SecondLife_and_There_031.jpg]]
Figure 31. Social Virtual Worlds: Second Life & There
===2.8.6 Simulation and Learning Systems===
====2.8.6.1 PLATO====
PLATO (Programmed Logic for Automated Teaching Operations) was a system designed for computer based education at the University of Illinois that started in the early 1960s. Originally developed as a classroom course system (Figure 32), improvements in mainframe technology saw up to a thousand simultaneous online users by 1972, making it the first public online community; it featured electronic course delivery, online chat, bulletin boards, 512 x 512 resolution monitors and 1200 baud connection speeds (Unger, 1979; Woolley, 1994). With over 15,000 hours of instructional development, PLATO was possibly the largest ever investment in educational technology (Garson, 2000).
[[image:PLATO_Lab_Image032.jpg]]
Figure 32. University of Illinois PLATO Lab & Terminal (1961-2006)
By the mid 1970s games had made their way onto the university mainframes with great success. Between 1978 and May 1985 about 20% of time spent on PLATO was game usage (Woolley, 1994). Games appeared such as Spacewar! (the 1969 port of the game discussed earlier), Empire (1973, a multi-user space shooter game based upon Star Trek), DND (1974, a MUD[18] based upon the game Dungeons and Dragons), Mines of Moria (1974, a MUD with 248 mazes based upon Tolkien’s Lord of the Rings), SPASIM (1974, a 32 multi-user FPS space ship game)[19], Airfight (1974-75, a 3D flight simulator precursor to Microsoft’s Flight Simulator), Oubliette (1977, a first person 3D MUD) and Avatar (1977-79, a first person 3D MUD) (Bartle, 2003; Lowood, 2008; Pellett; Wikipedia, 2008b; Woolley, 1994). See below (Figure 33) for some examples of MUDs hosted on PLATO. Many of the games on PLATO were recreated for commercial use as arcade or personal computer games (Goldberg, 2002; Mulligan, 2002; Woolley, 1994).
[[image:PLATO_Popular_MUD_Games_Developed_For_PLATO_033.jpg]]
Figure 33. PLATO: Some Popular MUDs Games Developed for use on PLATO (1974-1979)
By 1985, after going commercial, PLATO had established a system of over 100 campuses worldwide (Garson, 2000). Known as the ‘ultimate electronic information and communication utility’, offering over 200,000 hours of courseware (Figure 34), with local dial-up at 300 or 1200 baud, access to both social and educational contacts was among the many advances of PLATO that made it an attractive system for the academic community at large (Small & Small, 1984). Over time, with improvements in technology and the rising cost of maintaining old technology, PLATO declined, and the final PLATO system was turned off in 2006 (Wikipedia, 2008b).
[[image:PLATO_Online_Course_Count_1984_034.jpg]]
Figure 34. PLATO Over 200,000 online courses by 1984
A web site has been established for the preservation of PLATO at cyber1.org (VCampus Corporation, 2008), which holds many of PLATO’s games and courseware for public download.
====2.8.6.2 SIMNET====
Military virtual world simulators started with a project called SIMNET (SIMulator NETworking). SIMNET was a DARPA project that enabled the first large scale real-time networked battlefield simulator. Development and implementation occurred on several levels between 1983 and 1990 (Cosby, 1999; Miller & Thorpe, 1995).
Prior to SIMNET, military simulators consisted of immersive virtual reality training devices such as cockpit simulators. Cockpit simulators offered a replicated environment of the ‘real thing’: for example, an aeroplane cabin would be built in its entirety, with motion and sensory feedback, using pre-programmed software to produce repetitive simulations that provided an individual with mastery skills such as low-to-ground dog-fighting or missile avoidance training (Miller & Thorpe, 1995). SIMNET provided a cheaper alternative for certain types of training than the cockpit simulators, and further offered training in ‘collective skills’, which Miller and Thorpe (1995) define as cohesive team operations skills, distinguished from the individual mastery skills taught in cockpit simulators.
SIMNET, a multiuser virtual world (Figure 35), consisted of real battle grounds with manned vehicles (tanks and helicopters), command posts, semi-automated forces in which a single operator could control many vehicles in the simulation, and the ability to record simulations from any view point (known as the flying carpet) so that they could be replayed, statistically analysed and reported upon. At the conclusion of the program there were 250 simulators operating in nine locations (4 of which were in Europe), providing real-time battle engagements directly under the control of the participants (Lenoir, 2003; Miller & Thorpe, 1995).
[[image:SIMNET_Battlefield_Simulator_035.jpg]]
Figure 35. SIMNET: Battlefield Simulator at Fort Knox USA (1983-1990)
SIMNET had a substantial impact on military training after being recognised as the key success factor in winning the 3 day ‘Battle of 73 Easting’ in the Gulf War (1991), which led to several projects based upon the SIMNET technology (Figure 36) (Foley & Gifford, 2002), with the US government committing $2,549 million in 1997 for modelling and simulation projects (Lenoir, 2003).
[[image:US_Military_Networked_Simlator_Projects_1938_To_2001_036.jpg]]
Figure 36. Timeline of US Military Network Modelling and Simulator Projects (1983-2001)
In 1997 a project named Synthetic Theater of War (SToW) commenced, a program to construct an environment combining various simulators into one large-scale distributed battle simulator capable of involving thousands of participants (Budge, Strini, Dehncke, & Hunt, 1998; Tiernan, 1996). This project has since become Joint Semi-Automated Forces (JSAF) (Hardy et al., 2001), which now enables more than 100,000 simultaneous simulations at a time (US Joint Forces Command, 2008). The Australian military has also adopted the JSAF platform to build its own Course Of Action Simulation (COA-Sim) for joint military operations training, exercises and planning (Carless, 2006; Gabrisch & Burgess, 2005).
====2.8.6.3 Military Use of Commercial Games Engines & The America’s Army====
In 1996, General Krulak of the US Marines tasked the Marine Combat Development Command to explore and approve the use of commercial games engines for military training purposes. One outcome of this effort was the collaboratively developed Marine Doom, based on id Software’s shareware Doom engine and Doom level editor. The simulation could be configured immediately prior to engagement for the simulation of special missions (such as hostage rescue) and used to rehearse the planned mission (Lenoir, 2003).
In July of 2002 the US Military released a milestone in multi-user training game simulators in the form of America’s Army: Operations (Lenoir, 2003; Zyda, 2005). Based on Epic Games’ ‘Unreal’ games engine, the game created a virtual world that reproduced aspects of a career in the US Army, from ‘boot-camp’ commencement and weapons and tactical training through to various operations scenarios. Although originally developed and released as a recruitment tool, the game was also claimed to have been utilised to improve training outcomes by army instructors at Fort Benning (Zyda, 2005).
Now, with 26 subsequent releases (as of 2008) and available for the PC, cell phone and Xbox, the game has more than 9 million registered users exploring entry level to advanced training, and operations in small units (Figure 37). Beyond a focus on realism that extends to accurate tree placement in training courses at the simulated training camps, the game adds a further dimension of presence for the participants through the active involvement of current and former real-world soldiers as players in the game (designated with a star icon in player profiles), interacting with non-military participants (Department of the Army, 2008).
[[image:Americas_Army_037.jpg]]
Figure 37 America's Army (2002)
From a training perspective, anecdotal evidence from army trainers regarding the game is that sessions in training scenarios such as the firing range or obstacle courses improve subsequent results in the real-life versions of these activities (Zyda, 2005). The US Army, possibly one of the largest investors in virtual world game technology, recently announced plans to spend $50 million USD over the next 5 years to create 70 gaming systems in 53 locations around the world for combat training (Robson, 2008).
==2.9 Virtual Worlds for Education==
===2.9.1 Architecture Considerations===
====2.9.1.1 Introduction====
To properly appreciate the discussion of the literature examining educational directions in virtual worlds, the researcher provides a brief overview of the key architectural differences to assist the reader. This material is based on the researcher’s examination of a variety of game environments and virtual worlds, and discussions with experienced and knowledgeable users of these environments, rather than being sourced from the work of other authors. As such, the discussion is interpretive rather than authoritative.
Some of these environments have existed for only a few years, and have not yet enjoyed a comparative analysis undertaken by the academic community. As such, this discussion might not normally reside in the literature review, but it is felt that the placement of this discussion in this sub-section will assist the reader in better appreciating the issues explored in the literature discussion throughout the remainder of the section.
====2.9.1.2 Considerations of Operational Design====
While all of today’s major virtual worlds include capabilities for user interaction, sharing of the environment, persistence, avatars, business rules, streamed audio and text, there are substantial differences in the technologies used to deliver the virtual experience. While some of these differences may create only marginal differences in the world experience of the casual user, from the perspective of the educator and content creator the differences are substantial.
The major offerings can be viewed under the following groups (note: in each category the researcher has selected only a few example worlds; in most cases other options also exist):
#Proprietary closed engine (e.g. World of Warcraft, Everquest)
#Client resident closed content and world model with open engine (e.g. Shareware Doom)
#Streamed (or semi streamed) closed content and world model with closed engine (Entropia Universe)
#Open client resident content and world model with closed engine (Flight Simulator X, America’s Army, Unreal games, Quake, Doom)
#Open streamed content and world model (HiPiHi, TruePlay, Active Worlds)
#Open streamed content and world model with out-of-world interfaces (Second Life V1, VastPark)
#Open streamed content and world model with out-of-world interfaces and open client (Second Life V1.2)
#Open streamed content and world model with out-of-world interfaces, open client and open server (DeepSim)
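The eight groups above can be summarised as combinations of a handful of architectural properties. The sketch below encodes them as a simple Python data structure; the field names and the classification of individual worlds follow the researcher’s taxonomy as read here, and are illustrative rather than definitive:

```python
from dataclasses import dataclass

@dataclass
class WorldArchitecture:
    name: str
    content_open: bool        # can users modify/create content?
    content_streamed: bool    # streamed vs client resident
    engine_open: bool         # can the engine itself be modified?
    client_open: bool = False
    out_of_world_ifaces: bool = False

examples = [
    WorldArchitecture("World of Warcraft", False, False, False),
    WorldArchitecture("Shareware Doom",    False, False, True),
    WorldArchitecture("America's Army",    True,  False, False),
    WorldArchitecture("Active Worlds",     True,  True,  False),
    WorldArchitecture("Second Life V1.2",  True,  True,  False,
                      client_open=True, out_of_world_ifaces=True),
]

# A key filter for the educator: open content is the minimum requirement
# for delivering course material in-world beyond discussion.
usable_for_content_delivery = [w.name for w in examples if w.content_open]
print(usable_for_content_delivery)
```

Expressing the groups this way makes clear that they are not eight unrelated designs but points in a small space of independent architectural choices.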
'''Architectural Components and Implications in Education'''
Below are some of the architectural components and their implications for the structure of a virtual education environment.
{| border="1"
|'''Architectural Components'''
|'''Implications in Education'''
|-
|Closed Proprietary System
A closed proprietary system cannot generally be altered. These systems are generally not appropriate for education purposes unless the existing virtual world itself is built for the purpose of the training (such as a purpose built simulator). Closed systems can still be used in education for group interaction and discussions, though not for lectures or anything requiring more than text or audio (assuming the system supports group audio communications).
|-
|Closed or Open Environment
|Whether content and world model is closed or open determines whether the textures, objects and artefacts of the world can be modified or created by users. This ability is essential if the world is to be utilised in education as anything more than a 3D discussion forum.
|-
|World Content
Whether the content and world model is client resident or streamed goes to the complexity of distributing course content, and the dynamics available in delivery. If the content is streamed, it can be changed in real time, but a high speed internet connection will usually be required. Systems supporting streamed content generally also include the tools for developing some, if not all, of the streamable content. If the content is client resident, client connection speeds can generally be slower, but the content must be centrally published, distributed to client systems and installed locally prior to use. It cannot be changed in real time, content production will not generally be supported directly in the virtual world tool set, and it will often require advanced 3D modelling skills in dedicated 3D modelling environments.
|-
|World Interfaces
The existence of out-of-world interfaces goes to whether content from other sources, such as internet web pages, audio or video, can be streamed into the world and integrated with the world content and model. Systems providing this capability with streamable open content offer the greatest potential for inexpensive production of course material and the publication and distribution of that material to students.
|-
|Client / Server Engine
|Whether the client or server engine is open or closed goes to whether the hosting software itself can be modified. Generally this should not be necessary for education if the capabilities of the engines driving the world are otherwise sufficient. Where the content / world are otherwise closed, but the engines are open, the existing content and world could be replaced by interfacing the games engine to a new world with new content.
|}
====2.9.1.3 Options for Content Modification====
The ability to modify the content of a virtual world is essential if the educator is to deliver course content in-world beyond that of an interactive discussion, or monologue.
There are essentially three ways content can be modified by end-users in current virtual world environments (as opposed to systems providers or publishers) depending on the operational design of the environment:
#'''Level Editor''' (eg: Doom, Half Life, America’s Army, Flight Simulator). Applicable to client resident worlds (i.e. systems where the world is stored on each client computer and distributed as a separately published download). A level editor is a content editing tool that allows an entire simulation to be created, including the world model, textures, characters, behaviours, etc. Level editors usually support the importation of textures, animations, etc into the ‘level’, and then distribution of the entire level to a central server for redistribution to clients.
#'''Client Content Editing Tool''' with import/export (eg: Second Life, Vast Park, etc). For environments where building and content creation is part of the ‘game play’ the client will have a content editor provided. These environments provide a simplified model for constructing shapes and objects (e.g. Second Life’s prims) and some means for importing complex objects such as organic shapes, textures, animations, sound, etc.
#'''Out-of-world interface''' (e.g. Second Life, Active Worlds). Potentially available in both client resident and server resident (streamed) worlds. An out-of-world interface allows some aspect of the in-world user experience to be drawn directly and live from an off-world location like a web page, internet resident database or streaming SoundCast server, etc.
====2.9.1.4 Implications of differential content capabilities====
Virtual worlds are composed of components (objects) and functions that are managed by the virtual world (or game) engine and together comprise the capabilities of the world. Not all worlds have the same object management capabilities built into their engines. For the purposes of this discussion, the range of capabilities will be considered to be:
#'''Terrain''' – the land form or map of the virtual space. Essentially all virtual worlds offer some form of terrain map (although the terrain map may not be ground, but rather simply a 3D space).
#'''Avatars''' – Discussed extensively already, the avatar is the user’s projection into the virtual world and may or may not be customisable.
#'''Structural objects''' – Including buildings, furniture, ornaments, statues, models, etc. These are the virtual world equivalent of objects in the real world. They may or may not be animatable and scriptable. If they are scriptable they may be able to become autonomous agents, depending on the capabilities of the scripting engine.
#'''Textures''' – The visual covering of any object, terrain, or even avatars. The ability to display and upload/import textures is (generally) essential to the ability to display lecture materials like slides, etc (but note the existence of streams as a potential alternative).
#'''Animations''' – An avatar or a non-player character appears to walk, sit, stand, change facial expressions, etc because of the animation it is playing at the time. Without animations an object might move from one point to another, but it will not change its apparent state. The ability to modify animations is advantageous for creating a sense of realism, but not generally essential for delivering a lecture or every type of simulation. All virtual worlds examined offered some range of built-in animations. Some allow animations to be imported or modified, or strung together to create more complex animations.
#'''Scripts''' – Scripting is the capability to programme the objects and behaviours in the world. In worlds modified by level editors, a programming language is generally provided as part of the level editing environment and ‘compiled into’ the level before it is published and distributed. In user modifiable worlds where scripting is supported (like Second Life), the scripting editor and compiler are provided as part of the client application and scripts are dynamically modifiable. In some architectures the scripts are stored in the objects and distributed with them (so if an object is moved between worlds/simulators the script and behaviours move with it), whereas in others the scripts are centrally stored and controlled for the world, level or simulator (as appropriate) and are not available outside it. Scripts govern the behaviour (movement, animations, actions, sounds, appearance, world responses, inter-object communication, etc) of objects. The capability and simplicity of the scripting engine’s language design is critical to the options available to educators in building a simulation.
#'''Streams''' – Streams include any media that is streamable, such as audio, video, web-page content, etc. The availability of streams is an extension of (or possibly an alternative to) the ability to import textures. From an educational standpoint it represents the ability to deliver video or sound presentations, or draw lecture materials directly from the internet. Depending on the world engine, stream content may be dynamically published (drawn down to the client as required – such as in Second Life) or packaged into the client resident world (such as in America’s Army).
#'''Non-player Characters''' (also called Bots, AIs or MOBs – mobile objects) - These are essentially characters that look like avatars but are completely controlled and managed by the engine. They interact with players/avatars in a semi-intelligent manner. The availability and capability of these vary significantly across worlds. In Half Life and America’s Army, the AI capability is available within the engine and has considerable ‘intelligence’, in some cases including the ability to learn and modify behaviour. In other worlds (such as Second Life) they are not directly supported by the virtual world engine at all. The existence of non-player characters can directly impact the type of learning simulation that an educator can build, as they can provide user feedback and a feeling of presence within the environment (if implemented to provide a realistic experience for the user).
#'''Text Communication''' - Text chat (including instant messages, group communication chat, etc) is the standard communication strategy in all worlds. It is always instant and dynamic (in that it does not have to be pre-packaged into the world). It is a functional capability rather than an object, and may or may not be logged or copied depending on the client capabilities.
#'''Multi-way Voice Communication''' - Most virtual worlds do not support voice directly, although this has been an increasingly offered function over the last twelve months. Multi-way voice communication enables a group of players to converse as if they were in a conference call, without the necessity to type all communication in text. It is different from streams, in that every client can be a sound source to every other client, whereas streams are a one-way communication from a point source to many destination receivers. Clearly the availability of voice communication impacts both the type of student and the form of discussion that can be undertaken in a learning situation.
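The object-resident script model described in the capability list above can be sketched in a few lines of Python. This is an illustration only – the class, method and event names are hypothetical and do not correspond to any particular world's scripting API (Second Life, for instance, provides its own Linden Scripting Language):

```python
# A minimal sketch of the object/script model described above: a script is
# attached to an object, travels with it, and responds to in-world events
# such as a touch from an avatar. All names here are hypothetical.

class ScriptedObject:
    """A world object whose behaviour is governed by attached event handlers."""

    def __init__(self, name):
        self.name = name
        self.state = "idle"
        self.handlers = {}          # event name -> handler function

    def attach_script(self, event, handler):
        """Attach a script fragment; it stays with the object if exported."""
        self.handlers[event] = handler

    def dispatch(self, event, *args):
        """Called by the world engine when an event reaches this object."""
        handler = self.handlers.get(event)
        return handler(self, *args) if handler else None

# A 'door' whose touch script toggles an opening animation.
def on_touch(obj, avatar_name):
    obj.state = "open" if obj.state != "open" else "idle"
    return f"{obj.name} plays 'open' animation for {avatar_name}"

door = ScriptedObject("lecture-hall-door")
door.attach_script("touch", on_touch)
```

Dispatching a `touch` event from an avatar flips the door's state and returns the animation message; events with no attached script are simply ignored, mirroring the way most scripting engines only react to handled events.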
In selecting the platform for delivering an educational experience, the extent to which the educator requires any or all of these capabilities within a virtual world will probably influence the decision. Some of these capabilities have only recently become generally available, and others are still in only rudimentary forms. In the literature review that follows, the approaches and content adopted, and the outcomes achieved have necessarily been constrained by capabilities of the technology options available at the time and the architectural constraints of the virtual world used.
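The platform-selection reasoning above can be treated as a simple capability-matching exercise. The sketch below is illustrative only: the capability names mirror the list in this section, but the platform names and their capability sets are hypothetical, not survey results for any real product:

```python
# Illustrative sketch of matching an educator's required capabilities
# against candidate virtual world platforms. Platform data is hypothetical.

REQUIRED = {"textures", "streams", "multiway_voice"}

PLATFORMS = {
    "WorldA": {"terrain", "avatars", "textures", "scripts", "streams",
               "text_chat", "multiway_voice"},
    "WorldB": {"terrain", "avatars", "textures", "animations", "text_chat"},
}

def suitable(required, platforms):
    """Return, per platform, whether it offers every required capability
    and, if not, which capabilities are missing."""
    report = {}
    for name, caps in platforms.items():
        missing = required - caps
        report[name] = ("suitable", set()) if not missing else ("unsuitable", missing)
    return report

print(suitable(REQUIRED, PLATFORMS))
```

Running this flags `WorldB` as unsuitable for lacking streams and voice, which is the kind of gap analysis an educator would perform informally when reading a platform's feature list against an intended course design.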
===2.9.2 Education Applications in Virtual Worlds===
====2.9.2.1 Introduction====
During the 1970s, 1980s and early 1990s, perhaps the most significant multi-user online environment for education was the PLATO system. From the mid 1990s onwards, the influence of this system waned as it was progressively superseded in user interface capabilities by the emerging 3D online games, social networking systems and virtual worlds custom built for specific subject matter.
Today the use of public online virtual worlds is gaining popularity with educators, with a recent special purpose committee of educators (The New Media Consortium & EDUCAUSE, 2007) identifying that virtual worlds will have a significant impact on the future of teaching, learning and creative expression within higher education. In the next section we will discuss some of the research findings on virtual worlds being used for educational purposes.
====2.9.2.2 Education Uses in Virtual Worlds====
Early work in education using text based MUDs showed that they offered support for constructive knowledge-building communities that offered affordances of coordinated presence with evidence for interactive learning and collaboration across time and space (Dickey, 2003).
The period from the late 1990s until today has been typified by educators experimenting with the potential for mass market games engines (and more recently virtual worlds) to be re-tasked as education environments (Annetta et al., 2006; Beedle & Wright, 2007; Gikas & Van Eck, 2004). In some cases, such as America’s Army, the ‘game’ environment was built with the specific goal of recruitment and training in mind (Zyda, 2005); in others, as with Microsoft’s Flight Simulator, a game evolved over time with the assistance of subject matter experts into an accurate simulation tool for the game's audience (Lenoir, 2003). In still other cases a games engine (the operating system of a game) has been adapted to create a purpose built learning tool: educators and students at MIT utilised the Neverwinter Nights tools to create a historical game based on a battle in the Revolutionary War, and MIT's Games-to-Teach Project produced playable prototypes of four games, including Biohazard, developed jointly by MIT and the Entertainment Technology Center at Carnegie Mellon University, which trained emergency workers to deal with a cataclysmic attack (King, 2003).
The early 3D virtual worlds, with their simplistic graphics bearing little resemblance to the real world, provided students with advantages over traditional learning methods whilst fostering collaboration in multiuser virtual worlds. An extensive study of virtual reality technology in education was performed by Youngblut (1998), who looked at 35 different research studies in education from 1993-1998 that varied in technology use, subject discipline and age group. Below is an example of VARI House and Virtual Physics, both of which were custom built (Figure 38): VARI House a single user virtual world and Virtual Physics a multiuser virtual world. Although the studies were mainly research based (as opposed to application in course work), research showed for both single and multi user environments that virtual world technology in many cases surpassed traditional learning methods in areas such as subject matter understanding, memory retention, student collaboration and constructive learning methods. Some obvious disadvantages were technology constraints, cost, development effort and usability (Youngblut, 1998), which in most part could be attributed to the infancy of this technology, the formative years of computer based learning and the limited general use of computers by students, which had yet to permeate society as a whole.
[[image:Education_In_Virtual_Worlds_in_1950_to_60_038.jpg]]
Figure 38. Education in Virtual World Mid 1990s
====2.9.2.3 Online Education Uses in Virtual Worlds====
As identified in the architecture considerations section, virtual worlds that are to be used in educational settings must enable content modification if learning is to consist of anything more advanced than an interactive conversation. For the purposes of this research, the researcher chose to focus on virtual worlds that support the dynamic delivery or streaming of content (and where the building tools are provided as part of the environment), rather than those worlds where a separate level editor is required and a client resident world model must be installed on the client computer prior to use. The literature surveyed in this sub-section will therefore focus on the work done in two such environments – Active Worlds and Second Life.
=====2.9.2.3.1 Active Worlds=====
Online virtual worlds gave educators access to environments without the cost and complexity of developing their own custom software. One of the first online virtual worlds that made research and development in education feasible (given its architectural qualities) was Active Worlds (1995, 1997). Officially known as Active Worlds Universe because it consists of many worlds, Active Worlds provided educators with the opportunity to rent or buy their own world, allowing restricted access to invited guests, building tools and content management capabilities. Below is a screenshot of Active Worlds (Figure 39). As can be seen, the current client consists of four sections: left – communications and navigation options; right – integrated web browser; bottom – chat window; and middle – 3D environment. This type of client is generally called a “browser” by the environment developers.
[[image:Active_Worlds_Universe_039.jpg]]
Figure 39. Early Online Social Virtual World: Active Worlds Universe
'''Active Worlds Research'''
During the late 1990s to the early 2000s several educational institutions set up a presence in Active Worlds for various projects, from research to actively using Active Worlds as an online learning environment (see Smith, 1999 for a list of Virtual Learning projects, most of which were in Active Worlds). The early research into online virtual world based education using Active Worlds showed promise.
Dickey (1999, 2003, 2005) undertook research into the viability of Active Worlds being used for geographically distant learners in both formal (a university business computing skills course) and informal courses (an Active Worlds building course). These studies showed that the 3D virtual world offered advantages in fostering constructive learning, student and teacher collaboration, visual representation of course context and course content, and student engagement and participation. Some of the disadvantages identified were essentially environment specific and included a lack of support for collaborative activities like a whiteboard or collaborative interactive writing spaces, the chat tool's single-posting word limit, a single shared chat channel providing no separation of teacher / student discussion and no ability for turn taking, and kinetics (animation) constraints on actions such as hand raising for attracting the attention of the instructor.[20]
Dickey also identified a number of opportunities specifically enabled by a 3D environment. While some of the previously identified advantages (such as collaboration and student management and participation) might be duplicated in other forms of online education tools, the 3D modelling of the course itself (the visual representation of course context and course content) was an advantage specific to the 3D environment.
Course context modelling as provided in Dickey’s research (1999) was a 3D representation that illustrated the structure of the course by the use of individual buildings and plazas (Figure 40). Each building was a topic in the subject, which provided resources to aid learning and a meeting place where students could collaborate for group projects around this topic.
[[image:Visual_Course_Structure_in_Virtual_Buildings_040.jpg]]
Figure 40. Visual Representation of Course Structure by the use of Individual Buildings
Course content modelling as provided in Dickey’s research (1999) was a 3D representation that the student had to build in order to understand the concept of the subject material (Figure 41).
[[image:Visual_Represnetation_of_Course_Content_041.jpg]]
Figure 41. Visual Representation of Course Content
These alternative methods provide a good example of the power and adaptability of a 3D modelling environment applied to education. The course context provided the student a method by which they could visualise the learning objectives and progression of the course. The student had to visit each building within a specific time frame and complete the contained content. The 3D modelling of course content gave the learner multiple viewpoints of the actual subject material, providing interactive learning that was believed to enhance the student’s understanding of the subject topic.
Clark & Maher (2006) looked at the role of place and identity in a 3D virtual learning environment using Active Worlds, through the analysis of chat logs and the physical locality of avatars within group discussions. They found that a sense of place can be achieved in a 3D virtual learning environment, where identity and presence play a role in establishing the context of the learning place. The students formed a strong bond with their avatars and indicated that they felt a sense of presence, as measured by a series of subjective scales, within the virtual learning environment. Similarly, Dickey (2003) found that the desktop 3D virtual world provided qualities of presence similar to those of an immersive virtual reality world.
=====2.9.2.3.2 Second Life=====
Second Life (started 2003) consists of two worlds: the Second Life Teen Grid and the Second Life Adult Grid. The teen grid provides access to 13-17 year olds and educational instructors. The functionality of the teen grid is the same as the adult grid, with the exception that all content has a PG rating. The Adult Grid is where you find all the universities and colleges for students over 17 years of age. Other educational content in Second Life includes an extensive list of museums, galleries, simulations, business product development, role-playing spaces, employee and public business training courses, etc. Similar to Active Worlds, educators are able to rent or purchase land, allow open or closed access to the public, and build and develop on their land.
One major difference between Second Life and Active Worlds is that the former has an in-world economy with built-in functional support enabling the trading of virtual products and services using ‘Linden dollars’, backed by content copyright and duplication controls and augmented by a provider managed exchange where real dollars can be exchanged for Linden dollars (and vice versa). This fundamental difference provides an incentive for content developers and service providers to actively support and expand the world with content, and therefore enables access to a large body of pre-constructed content and to an entire world-wide industry of content developers at extremely reasonable rates (compared to real world 3D developers providing similar content outside of Second Life) (Joseph, 2007). The building and scripting tools are easier to master than traditional 3D rendering tools, are delivered free as part of every user’s world browser, and are sufficiently powerful that just about anything imaginable can be constructed (Schmidt et al., 2007).
Second Life’s standard interface, as seen below (Figure 42), offers extensive functionality over that of Active Worlds. Some of the more common features seen in the figure are built-in world, content and people search facilities (left), a mini map (top right), an inventory library (bottom right), a local chat channel (with standard ranges of 15, 30 or 60 meters from the text source) and group chat channels (worldwide range, for up to 25 groups per avatar), customisable streaming media players (for sound, video and web page content), an in-world or external html web browser (linking both in-world and outside-world content), private or public multi-player voice facilities, etc.
[[image:Second_Life_042.jpg]]
Figure 42. Online Virtual Social World Second Life (Circa 2008)
Another difference from Active Worlds is avatar camera control: Second Life avatars can use a roaming camera (whereas Active Worlds only provides first and third person views). The roaming camera enables users to control their view of the world with the mouse without the need to move their avatar. Once mastered, this functionality offers users a powerful, easy and fast way to navigate around objects (the camera can even pass through objects such as walls).
Due to these and other technological advances over Active Worlds, Second Life has developed a large education community over the last couple of years. For instance, SIMTeach (June, 2008), the Second Life Education Wiki, identifies over 200 educational institutions in Second Life, of which 138 listed are universities, colleges and schools. The Second Life Education (SLED) list server has over 5,000 world-wide members. The New Media Consortium (NMC, a group that hosts education islands) has over 100 universities on their land, and the Second Life Teen Grid has over 90 educational projects (Linden & Linden, 2008). Figure 44 p88 provides some examples of the training and learning activities in Second Life, representing a mixture of educational institutions, corporations and government agencies.
The content of Second Life is entirely user created. The availability of content developers and potential students already experienced in using the environment is dependent on the take-up and expected future growth of the environment. Figure 43 shows the user base and economic statistics for the first quarter of 2008 as provided by Second Life’s proprietor Linden Lab (2008a). As of November 2008 Second Life had 16,318,063 registered users (1,344,215 of whom had logged on in the previous 60 days). A break-down of Second Life’s demographics as at November 2008 can be seen in Appendix I: Second Life Demographics.
[[image:Second_Life_User_and_Econ_Stats_Q12008_043.jpg]]
Figure 43. Second Life User & Economic Statistics for Q1 2008
[[image:Second_Life_Training_and_Learning_044.jpg]]
Figure 44. Second Life Training and Learning
'''Second Life Research'''
Educators are using Second Life for both formal and informal purposes. Some educational institutions have set up entire virtual campuses modelling their real world campus, while others are modelling purpose built virtual education structures. The relative youth of Second Life means that there is considerable variation in the maturity of educational efforts across the virtual world, and limited peer reviewed studies yet published. Many educators are still experimenting, while others, with the active support of their institutions, are actively using the environment for partial or entire subject delivery. Here we will look at some of the research that had been undertaken in Second Life at the time of writing, most of it published since 2006; given the technological advances that have occurred in Second Life from 2007 onwards, we will specifically concentrate on the later research.
Martinez, Martinez, & Warkentin (2007) researched the delivery of a lecture to geographically distributed third year university students in Second Life. The lecture was delivered in a conventional lecture room setting using a traditional chalk and talk style with lecture slides and the chat channel for instruction; no voice was used.[21] According to the lecturer’s experience, with text-only delivery the time to deliver the content was double that of a face to face lecture. This was also confirmed by the students in their survey. In the survey some students admitted they felt distracted by the novelty of the environment and were overly concerned with ancillary aspects such as their avatar’s appearance. Others admitted to being distracted by concurrent activities external to the environment occurring simultaneously on their PCs, such as multi-tasking with other programs (e.g. MSN messaging) whilst at the lecture. Others experienced technical difficulties and could not get back into the lecture after they were accidentally logged out. In spite of these short-comings, when asked to rate the lecture experience on a scale of 1-10 the average student response was 8.5. It was noted that some of these distractions and difficulties could be put down to first time user experience. The lecturer also felt that this lecture could easily have been pre-recorded and delivered online, and that active learning techniques could have improved its delivery in Second Life (Arreguin, 2007).
Joseph (2007) notes that a consequence of using Second Life (or virtual worlds in general) for teaching is that sessions generally take longer than traditional methods, but believes that this is not an issue per se, as time to complete the task should come second to the effectiveness of the experience. Joseph also believes (from experience) that the avatar projected on the screen and the sense of presence experienced by the participants are more effective for learning than a live video feed.
Kofi, Svihla, Gawel, and Bransford (2007) researched the potential of virtual worlds to provide efficiency and innovation for adaptive learning. In their study, students were presented with a maze to navigate that simulated the problem solving skills required for learning in a comparable real life scenario. Kofi et al. found that Second Life was able to provide enough functionality and support for learners to apply new concepts in order to solve the presented problems, as long as they were provided with key indicators of possible outcomes. They also found that the use of 3D learning environments required the same amount of instruction as equivalent real world learning, and that simply building a model did not, of itself, provide sufficient information for the learner to learn in this instance; learners also needed to be continuously prompted and guided in order to reach the end learning objective.
In another example, Second Life was used to support the learning objectives of 13 third year college students aged between 19 and 26 on a Digital Entertainment and Society course, where the students were geographically distributed around the world (Gonzalez, 2007). Both lectures and assignment work were conducted within Second Life. The lectures consisted of a video presentation and an in world field excursion. Assignment work required some in-world building and an exercise using Linden dollars, with a student presentation on completion. No students had used the environment before, but an acclimation exercise was sufficient to provide them with the skills required to undertake course work in Second Life. At the end of the course students were given a survey, with the results presented below (Table 1).
{|
|+ Elements that Second Life Added
!
! Agree
! Disagree
|-
| Enjoyment
| 100%
| 0%
|-
| Technical difficulties
| 100%
| 0%
|-
| Interaction with tutor
| 62%
| 38%
|-
| Interaction with classmates
| 62%
| 38%
|}
Table 1. Survey Results for Digital Entertainment and Society Second Life Subject
The technical difficulties result was explained largely by network latency experienced by the students. Each student used their own computer with an average connection speed of 512 Kbps – not especially fast, nor ideal for use in the Second Life environment. No mention was made in the study as to whether the student computers met the Linden Lab system requirements (2008c). As Second Life is a streaming virtual world where content is downloaded on demand from Linden Lab servers located in the USA to the local computer, connection speed can be an important factor in technical performance. Other major impacts from a technical perspective include the computer's graphics card and the amount of onboard RAM. The Second Life browser does offer many settings for optimising performance on low-end machines, but if the minimum system requirements are not met then the user’s experience of the virtual world will be reduced significantly, with dropouts, lag and poor graphics.
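Some rough arithmetic illustrates why connection speed matters so much in a streamed world. The texture size and protocol efficiency used below are assumptions for illustration, not measured Second Life figures:

```python
# Rough, illustrative arithmetic only: approximate time to stream a single
# texture at the 512 Kbps connection speed reported in the study versus a
# faster broadband link. Texture size and overhead factor are assumptions.

def download_seconds(size_bytes, link_kbps, efficiency=0.8):
    """Approximate transfer time, allowing for protocol overhead."""
    bits = size_bytes * 8
    return bits / (link_kbps * 1000 * efficiency)

texture = 512 * 1024            # one 512 KB texture (assumed size)
for kbps in (512, 8000):
    print(f"{kbps} Kbps: {download_seconds(texture, kbps):.1f} s")
```

At 512 Kbps a single such texture takes on the order of ten seconds to arrive, and a scene contains many textures, objects and animations; on a typical broadband link the same transfer takes well under a second, which is consistent with the latency complaints reported by the students.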
==2.10 Learning & Instructional Design Theory==
===2.10.1 Introduction===
Learning in any world (real or virtual) requires well thought out instructional design. Learning is a process of the mind regardless of whether your body is present in the virtual world or real world. Instructional components for learning regardless of medium include (DONCIO et al., 2008):
*Clear, concise, and appropriately structured content
*Activities that draw relationships between concepts, challenge learners' thinking and understanding, and reinforce information
*Evaluative measures that determine if knowledge assimilation and retention have occurred
In this research the focus was on the use of new technology in education as opposed to education applied to new technology; therefore this section only provides an overview of applicable theory required to assist in the instructional design, delivery and assessment of the subject material presented to the research participants in this study. Gagne’s Nine Events of Instruction and Bloom’s Taxonomy of the Cognitive Domain were selected to assist in this task.
===2.10.2 Behaviourism and Cognitivism===
There are two main traditional schools of thought in learning theory. These are Behaviourism and Cognitivism (DONCIO et al., 2008; Lewis, 2001).
*Behaviourists (Objectivists) view the mind as a ‘black box’: personal or past experience is not taken into consideration. The mind starts off with a clean slate, where a stimulus produces a response. Learning is deemed to have occurred only when a change in behaviour is observed. Learning is discrete, measurable and quantifiable.
*Cognitivists (Constructivists) view the mind as a continuously evolving organism. Knowledge is constructed from past material and personal experience. Learning is unique to the individual, relating new information to previously learnt knowledge.
The University of Washington, Seattle (2008) compares the two approaches and provides a discussion of each in terms of philosophy (Table 2, p93), learning outcomes, instructor role, student role, activities and assessment. The philosophies of these approaches are opposing and therefore produce different methods of instruction (Lewis, 2001; Nash, 2007).
Behaviourism was the first to be defined in learning theory while cognitivism developed later as a response to perceived limitations of behaviourism in understanding and adapting to new learning concepts (Lewis, 2001; Mergel, 1998).
While some constructivists argue the merits of constructivism as a distinct theory, viewing knowledge as something constructed by a learner through the process of learning, other writers view constructivist ideas as an evolution of the fundamental cognitivist school. This position is illustrated in Table 2, where the behaviourist and constructivist-enhanced-cognitivist philosophies are compared using a consistent comparative organisation of views (see Dabbagh, 2006; Mergel, 1998).
Constructivists draw a distinction between cognitive constructivism and social constructivism, in which the former emphasises exploration and discovery on the part of each learner, while the latter emphasises the collaborative efforts of groups of learners as sources of learning; but for our purposes it is sufficient to distinguish the behaviourist and cognitive approaches. Over the years many practical teaching methods have evolved with concepts that encompass both approaches.
[[image:TABLE_Instructional_Design_Behaviorism_Cognitivism_045.jpg]]
Table 2. Instructional Design: Comparative Summary Behaviorism and Cognitivism
(University of Washington, 2008)
===2.10.3 Gagne’s Nine Events of Instruction===
Gagne’s theory of instruction can be divided into three areas (Corry, 1996): taxonomy of learning outcomes, conditions of learning and levels of instruction. There are considerable similarities between Gagne’s ‘taxonomy of learning outcomes’ and Bloom’s ‘taxonomy of the cognitive domain’, so a discussion of these is provided in the next section of this thesis.
Gagne breaks down the ‘conditions of learning’ into internal and external learning conditions. Internal learning conditions are the previously learned capabilities of the learner; external learning conditions are the instruction or stimuli that will be presented to the learner. While Gagne’s theory takes an essentially cognitivist approach, it recognises both behaviourist and cognitivist influences on instructional learning. For our purposes, it is the ‘levels of instruction’ outlined by Gagne that are of particular interest, and these are explored in this section.
Gagne (1985) presents a systematic approach to instructional design termed the ‘nine levels of instruction’ as presented below in Figure 45 (Clarke, 2000)[22]. These nine levels have been specifically designed for the teaching of intellectual skills.
[[image:GAGNE_Nine_Steps_To_Instruction_046.gif]]
Figure 45. Robert Gagne's Nine Steps of Instruction (Clarke, 2000)
The nine instructional events with their corresponding cognitive processes can be described as follows (Clarke, 2000; Kearsley, 2008):
#'''Gaining Attention (Reception)''': Grab the attention of the participant by presenting a teaser in order to get the participant interested and motivate them to learn more about the topic that will be presented. This could be done using methods such as a movie, phrase, storytelling or a demonstration.
#'''Informing Learners of the Objective (Expectancy)''': Provide the participant with the objectives in order to assist them in organising their thoughts ready to receive the new information that will be presented.
#'''Stimulating Recall of Prior Learning (Retrieval)''': Provide the participant with any background that may assist them in building upon the new knowledge that they are about to receive. This helps to place a framework in their mind based upon previous knowledge.
#'''Presenting the Stimulus (Selective Perception)''': This is where the new learning begins. Information should be chunked and organised meaningfully in order to avoid memory overload and assist in the learning of new knowledge. Chunk the information into a sequence of learning events, breaking it down into constituent parts with a structure and purpose that spans different areas of comprehension. The revised Bloom’s taxonomy (discussed in the next section) can be used to assist in forming the presented information.
#'''Providing Learning Guidance (Semantic Encoding)''': Assist the participant to obtain a deeper level of understanding of the new knowledge so that information can be encoded into their long-term memory. During instruction try to provide examples, non-examples, analogies, graphical representations etc. to assist the semantic encoding process.
#'''Eliciting Performance (Responding)''': Let the learner do something with the new knowledge, or test their new knowledge, to confirm they have a correct understanding of the information.
#'''Providing Feedback (Reinforcement)''': Analyse the learner’s understanding of the subject matter presented and provide feedback to correct any misunderstood knowledge. Provide immediate feedback and reinforcement of the new knowledge (e.g. questions and answers).
#'''Assessing Performance (Retrieval)''': Test that the new knowledge is understood and the learning objectives have been met. This could be in the form of a test or a demonstration by the learner to assess if they have mastered the information.
#'''Enhancing Retention and Transfer (Generalisation)''': Generalise the information so that knowledge transfer can occur; inform the learner of similar problems or situations so that the acquired knowledge can be put into a new context.
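The nine events above form an ordered design checklist. A minimal sketch (not from Gagne or the cited sources; the function and variable names are this sketch's own) showing how a lesson plan could be validated against the sequence:

```python
# Gagne's nine instructional events, paired with their cognitive
# processes, in the order listed above.
GAGNE_EVENTS = [
    ("Gaining Attention", "Reception"),
    ("Informing Learners of the Objective", "Expectancy"),
    ("Stimulating Recall of Prior Learning", "Retrieval"),
    ("Presenting the Stimulus", "Selective Perception"),
    ("Providing Learning Guidance", "Semantic Encoding"),
    ("Eliciting Performance", "Responding"),
    ("Providing Feedback", "Reinforcement"),
    ("Assessing Performance", "Retrieval"),
    ("Enhancing Retention and Transfer", "Generalisation"),
]

def missing_events(lesson_plan_events):
    """Return the instructional events a lesson plan has not covered."""
    covered = set(lesson_plan_events)
    return [name for name, _ in GAGNE_EVENTS if name not in covered]

# Example: a plan covering the first six events plus the last one
# turns out to skip feedback and assessment.
plan = [name for name, _ in GAGNE_EVENTS[:6]] + ["Enhancing Retention and Transfer"]
print(missing_events(plan))  # -> ['Providing Feedback', 'Assessing Performance']
```

Such a checklist makes gaps in an instructional design explicit before delivery, in the spirit of the systematic approach Gagne describes.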
===2.10.4 Bloom’s Taxonomy===
The Taxonomy of Educational Objectives, also known as Bloom’s Taxonomy, is widely used[23] to assist in the preparation of learning objectives and the assessment of learning outcomes. The learning outcomes of a student are the results of their learning experience of a course, and should be a direct consequence of the course objectives (Monash University, 2008). Hence the application of Bloom’s taxonomy of educational objectives in forming course objectives provides a measure by which to assess students’ learning outcomes.
The original work of Bloom’s Taxonomy was developed by an American committee of educational psychologists chaired by Benjamin Bloom that presented over a period of time three domains: cognitive (knowledge) (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956), affective (attitudes) (Krathwohl, Bloom, & Masia, 1964), and psychomotor (motor skills) (Dave, 1967, 1970; Harrow, 1972; Simpson, 1972). In forming educational course objectives Bloom’s cognitive domain is applied to assess the knowledge and intellectual component of a curriculum.
After nearly 47 years, Bloom’s cognitive domain was revised (Anderson et al., 2001; D R Krathwohl, 2002) by a committee of eight, two of whom had worked on the original published work (committee member Krathwohl and editor Anderson). The revision was made as a result of many years of application and research and has since been accepted by many educators as a replacement for Bloom’s original work. The changes made are as follows (Figure 46) (Anderson Research Group, n.d.; D R Krathwohl, 2002):
*The names of six major categories were changed from noun to verb forms.
*Comprehension and synthesis were retitled to understand and create respectively, in order to better reflect the nature of the thinking defined in each category.
*Create was moved to the highest, that is, most complex, category.
*The revised Taxonomy is not a cumulative hierarchy.
*A taxon of remember was devised to replace that of Knowledge, and
*A two dimensional Cognitive Taxonomy Table was formed by sub dividing the original Knowledge taxon.
[[image:BLOOM_Changes_in_Cognitive_Domain_047.jpg]]
Figure 46. Changes in Bloom’s Cognitive Domain
====2.10.4.1 Revised Bloom’s Taxonomy of the Cognitive Domain====
A substantive difference is in the handling of “Knowledge”. The revised Bloom’s cognitive domain, as shown in Table 3, was extended to include a Knowledge dimension, so the revised cognitive domain consists of a two-dimensional table with The Knowledge Dimension and The Cognitive Process Dimension. This table provides the instructor with a tool for classifying learning objectives, where learning objectives are the specific, discrete learning outcomes or intended results to be achieved by the end of instruction. The instructor defines the learning objectives and classifies each into the appropriate cell of the 2D matrix of cognitive and knowledge dimensions. This assists in instructional design and assessment, and provides a tool for balancing the learning objectives across methods of instructional design.
[[image:BLOOM_TABLE_Revised_Taxonomy_048.jpg]]
Table 3. Revised Bloom’s Taxonomy Table
(Anderson et al., 2001, p. 28)
'''The Cognitive Process Dimension'''
The Cognitive Process Dimension provides the column values for Table 3 above. This dimension specifies the level of learning and comprehension required to complete a task, where each level differs in complexity on a scale from 1 to 6. The cognitive levels are defined as 1. Remembering, 2. Understanding, 3. Applying, 4. Analysing, 5. Evaluating and 6. Creating, each of which contains further sub-processes, with 19 specific cognitive processes in total. Table 4 provides an overview of each cognitive process with its defining verbs. Verbs are used to classify an objective. For example, an objective ‘to recall the six states of Australia’ would be classified under remembering: recall is the verb that classifies the learning objective into level “1. Remember” of the cognitive dimension.
[[image:Cognitive_Process_Dimension_Processes_049.jpg]]
Table 4. The Six Categories of The Cognitive Process Dimension And Related Cognitive Processes (Anderson et al., 2001, p. 31)
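The verb-based classification described above can be illustrated with a minimal sketch (not from Anderson et al.; the verb lists below are a small illustrative subset of the 19 cognitive processes, and all names are this sketch's own):

```python
# Map each cognitive-process level to a few of its characteristic verbs.
COGNITIVE_LEVELS = {
    "Remember": {"recall", "recognise", "list"},
    "Understand": {"explain", "summarise", "classify"},
    "Apply": {"execute", "implement", "use"},
    "Analyse": {"differentiate", "organise", "attribute"},
    "Evaluate": {"check", "critique", "judge"},
    "Create": {"generate", "plan", "produce"},
}

def classify_objective(objective):
    """Return the cognitive-process level suggested by the objective's verb."""
    words = objective.lower().split()
    verb = words[1] if words[0] == "to" else words[0]  # skip leading "to"
    for level, verbs in COGNITIVE_LEVELS.items():
        if verb in verbs:
            return level
    return None  # verb not in this illustrative subset

print(classify_objective("to recall the six states of Australia"))  # -> Remember
```

The lookup mirrors how an instructor reads the leading verb of a stated objective and places it at the corresponding level of the cognitive dimension.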
Bloom’s original cognitive taxonomy was based solely upon the values contained in the cognitive dimension (with the exception of the differences previously discussed). Bloom believed that the cognitive process was a cumulative learning process leading to a learning outcome. For example, under the original taxonomy of the cognitive domain, in order to ‘analyse’ subject matter the student would first need to have mastered knowledge/remember, comprehension/understand and application/apply, whereas the revised taxonomy does not assume this cumulative hierarchy. The early Bloom’s cognitive domain took a behaviourist approach to instruction, whereas the revised cognitive domain holds that learning can take place at any level without mastering previous levels. This is a fundamental shift in the philosophical grounding of Bloom’s taxonomy of the cognitive domain, moving it away from the behaviourist approach to learning.
'''The Knowledge Dimension'''
The Knowledge Dimension is an additional dimension added to the taxonomy by the subdivision (and modification) of Bloom’s original knowledge category, and appears as the row values in Table 3 above. The knowledge dimension defines how knowledge is constructed, which can be Factual, Conceptual, Procedural or Metacognitive. Table 5 provides an overview of these knowledge types and their meanings.
The knowledge dimension separates the noun (or subject matter) from the stated learning objective. For example, continuing the objective discussed above, ‘to recall the '''six states of Australia'''’ would be factual knowledge, where the bolded words make up the noun construct. This noun is factual because the learner either knows the states or they don’t; to know is the basic element required in order to solve the problem.
[[image:Major_Types_and_Subtypes_Knowledge_Dimension_050.jpg]]
Table 5. The Major Types And Subtypes Of Knowledge Dimension (Anderson et al., 2001, p. 31)
The knowledge dimension has been added as it provides further insight into the type of knowledge a student is required to master. In the original work knowledge was simply the first level in a cumulative hierarchy, but the revised knowledge dimension gives the instructor a greater understanding and assists in defining knowledge as a separate dimension. For example, for the objective ‘to recall the six states of Australia’, the student needs to Remember Factual Knowledge.
The knowledge dimension, like the cognitive dimension, is not a cumulative hierarchy; learning can start anywhere within the knowledge dimension.
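The two-dimensional classification can be sketched minimally (an illustration only, not the authors' tool; the example objectives and all names are this sketch's own assumptions):

```python
from collections import defaultdict

# Row and column headings of the revised taxonomy's 2D table.
KNOWLEDGE = ["Factual", "Conceptual", "Procedural", "Metacognitive"]
COGNITIVE = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]

# Each cell of the table collects the objectives classified into it.
table = defaultdict(list)

def add_objective(objective, knowledge, cognitive):
    """Place an objective into a (knowledge, cognitive) cell of the table."""
    assert knowledge in KNOWLEDGE and cognitive in COGNITIVE
    table[(knowledge, cognitive)].append(objective)

add_objective("recall the six states of Australia", "Factual", "Remember")
add_objective("design a field experiment", "Procedural", "Create")

# Cells with no objectives reveal imbalance in the instructional design:
# here 22 of the 24 cells remain uncovered.
empty = [(k, c) for k in KNOWLEDGE for c in COGNITIVE if not table[(k, c)]]
print(len(empty))  # -> 22
```

Filling and inspecting the table in this way is one concrete reading of how the 2D matrix supports balancing objectives across both dimensions.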
'''Using the Revised Bloom’s Cognitive Domain to Assist in Instructional Design'''
To assist in formulating instructional design, Anderson et al. (2001) provide in their book, for the cognitive dimension, sample objectives, corresponding assessments and assessment formats (chapter 5) and, for the knowledge dimension, specific details, elements, generalisations, structures and models etc. (chapter 4). This assists in the formulation of specific tasks and in defining the level of knowledge required of the student. It also assists in ensuring that objectives, and the testing of those objectives, lie across the required range of cognitive and/or knowledge categories, and that the student is being fairly assessed in areas directly related to the objectives.
====2.10.4.2 Bloom’s Taxonomy of the Cognitive Domain Applied to a Digital Environment====
'''Bloom’s Digital Taxonomy of the Cognitive Domain'''
Churches (2008) has extended the (revised) Bloom’s cognitive domain for digital learning by taking the cognitive process dimension and including verbs for emerging technology. As can be seen below (Figure 47), the words highlighted in blue are the digital emerging-technology verbs, categorised using the (revised) Bloom’s cognitive levels as the basis for interpretation of complexity. For example, bookmarking (a remembering process) is simpler than programming (a creating process).
[[image:BLOOM_Revised_As_Digital_Taxonomy_051.jpg]]
Figure 47. Bloom's Digital Taxonomy
Churches further added within his classification system a rubric (scoring criteria) for these technologies, similar to the sub-classification system used in Bloom’s cognitive domain. For example, Table 6 displays the rubric for Bookmarking, broken down from simplest to most complex.
[[image:BLOOM_Bookmarking_Rubric_For_Digital_Taxonomy_052.jpg]]
Table 6. Bookmarking Rubric for Bloom’s Digital Taxonomy
'''Bloom’s Taxonomy of the Cognitive Domain applied to Games'''
Wang & Tzeng (2007) proposed using the (revised) Bloom’s taxonomy of the cognitive domain as a method for understanding the application of knowledge in digital games. They believed that players learn in various ways within computer games, and recognised how little work (if any) had been done in analysing such e-learning platforms in a structured taxonomic manner or in structuring the implementation and understanding of the cognitive processes involved. They proposed using Bloom’s taxonomy of the cognitive domain as a method by which to assess cognitive processes in a computer game.
[[image:BLOOM_Taxonomy_For_Games_053.jpg]]
Figure 48. Bloom’s Taxonomy for Games
The research included using a game called Food Force, a problem-solving and mission-oriented game. Figure 48 summarises the conclusion of their research. As can be seen in Figure 48, players exhibited both personal and social feedback across Bloom’s cognitive levels. They found that players experienced cognitive processes individually across all categories of Bloom’s cognitive model, and displayed social interaction for the higher-level Bloom’s categories of Analyse, Evaluate and Create.
==2.11 Summary==
The acceptance of the latest crop of virtual worlds such as World of Warcraft, Second Life, Entropia Universe, There, Eve, America’s Army and others by the internet-using public as an integral part of their lifestyle is possibly the most significant paradigm shift to occur in the last 10 years. Statistics on user volumes and retention rates show consumption numbers in the tens of millions of users, spread evenly across ages from youth to middle age, with an approximately even gender balance (at least in the social worlds) (KZERO Research, 2007; Woodcock, 2008; Yee, 2006). The growth rates of these worlds collectively have been, and are projected (by industry analysts) to continue to be, rising dramatically for the foreseeable future.
With the current convergence of disparate technologies represented by these systems, the general public now have affordable single platform multi-media collaborative environments with sufficient realism to create virtual immersive spaces where presence is achieved at a level sufficient for them to lead virtual existences and establish social networks that rival their real world existence.
The linking of these spaces with the affordable (often free) tools that enable the public to create new 3D spaces and content for those spaces has, over the last eight years, resulted in a world-wide content developer base with substantial skills and a highly competitive market for purchasers of those skills, often at very low rates.
With the combined market pressures of minimising education delivery costs, improving education outcomes, and reaching as wide a market as possible it is understandable that educators have shown an extended interest over many years in the possibilities of virtual environments for education delivery. So with the advent of the latest generation of creativity focused social worlds like Second Life over the last few years, it is not surprising that the uptake by universities and educators (numbering in the hundreds of institutions) has been as substantial as it is.
A brief retrospective of the work in simulators, virtual reality and 3D games shows that the potential of these environments extends beyond the virtual ‘chalk-and-talk’ to enabling education delivery strategies, even for campus-based students, that cannot economically be delivered using reality-bound means.
With traditional real world learning environments there is an extensive body of tested knowledge that can provide clear guidance as to workable frameworks for the design of course work. The extent to which and how these methods can or should be applied to the virtual world learning space remains an open question.
</div >
[[Category:Featured Article]]
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 2: Virtual Worlds - Concepts, History, and Use in Education (Literature Review)=
==2.1 Introduction==
Gartner (2007) predicts that as many as 80% of active internet users will have a ‘Second Life’ in a virtual world by the end of 2011. Depending on your definition of ‘virtual world’ this may seem a little ambitious. Certainly, to the extent that virtual worlds are taken to include massively multi-user online environments supporting collaborative exchange of information in shared virtual space, the prediction might prove reasonably safe. To the extent that the definition is constrained to massively multi-player online games, the prediction may prove a little “braver”.
Today’s virtual worlds represent the convergence of multiple technology streams, with the latest examples of the genre representing the merger of internet, telecommunications, instant messaging, virtual reality, 2D & 3D graphics, a variety of 3D modelling technologies, spatial sound, distributed databases, spatial indexing, mapping, streaming data transmission, physics, scripting languages, object-oriented software, agent theory, artificial intelligence, networking, economic modelling, online trading systems, game theory and many, many more technologies.
While the developers of many virtual worlds are content within the game space, some virtual world developers, such as Linden Research (developers of Second Life) have ambitions to be the web platform of the future (Bulkley, 2007). To this end a number of the commercial developers of virtual worlds have joined forces with a number of major corporate consumers, systems integrators and US government bodies to explore common standards for inter-operability of virtual world platforms which is a necessary first step in moving the technologies from the isolated proprietary place they now inhabit to a world-wide shared web platform (Terdiman, 2007).
This chapter explores virtual worlds, reviews the literature considering alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education. The chapter concludes with an examination of traditional education taxonomy and relates that to the virtual world context as a basis for structuring an approach to exploring education affordances offered by two approaches to education in virtual worlds.
==2.2 Virtual Worlds==
===2.2.1 What is a Virtual World?===
====2.2.1.1 In Search of a Definition====
“Virtual worlds are places where the imaginary meets the real”. (Bartle, 2003, p. 1)
Virtual, as defined in the Oxford Dictionary (1989) with respect to the computing context is: “… not physically existing as such but made by software to appear to do so from the point of view of the program or the user….” and defined in the virtual reality context to be “… a notional image or environment generated by computer software, with which a user can interact realistically as by using a helmet containing a screen, gloves fitted with sensors, etc.” (1997).
The term world is defined in the Oxford Dictionary (1989) as “the ‘realm’ within which one moves or lives”.
In simple terms, therefore, a ‘virtual world’ can be defined as a computer-software-generated realm in which a user moves, exists or lives in a manner that appears real to the user.
A common definition for the term ‘virtual world’ is passionately debated in the literature (see Combs, 2004; Jennings, 2007; Reynolds, 2008; Wilson, 2007). It is a term that is used to describe many types of software environments, from a simple MUD (Multi User Dungeon, also referred to as Multi User Dimension or Domain) (Bartle, 2003; Keegan, 1997; Slator et al., 2007) to a sophisticated fully immersive 3D virtual reality environment used in gaming, physical training simulators or social interaction spaces (MetaMersion; Patel, Bailenson, Jung, Diankov, & Bajcsy, 2006; Van Dam, Forsberg, Laidlaw, LaViola, & Simpson, 2000). The term virtual world can be used to describe a single-user walk-through simulated environment (Dalgarno, 2004; Youngblut, 1998) or an environment such as a massively multiplayer online role playing game (MMORPG) like World of Warcraft (Bainbridge, 2007). The term virtual world is also interchanged with other terms such as virtual environment, synthetic world, mirror world, metaverse, virtual universe, artificial world etc.[2] (Grøstad, 2007).
Bartle (2003, p. 1) provides the following definition:
<blockquote>
“Virtual worlds are implemented by a computer (or network of computers) that simulate an environment. Some -but not all- of the entities in this environment act under the direct control of individual people. Because several such people can affect the same environment simultaneously, the world is said to be shared or multi-user. The environment continues to exist and develop internally (at least to some degree) even when there are no people interacting with it; this means it is persistent.”
</blockquote>
Therefore, using Bartle’s definition in conjunction with the Oxford Dictionary definition provided above a virtual world can be defined as:
<blockquote>A shared software environment (or realm) in which a person, represented as a projected entity (such as a digitally projected image, text identity or other computational representational object), moves, exists or lives in a manner that appears real to that person; who is capable of affecting, and being affected by, that environment in a manner that simultaneously affects the experiences of other entities within the environment; and which generally remains persistent once the user has left the world.
</blockquote>
The key components of this definition are:
#A shared environment in which a real-world participant shares a computationally generated artificial space with other real world participants and/or other computationally generated entities.
#The nature of the real-world participant’s projection into the computationally generated virtual space.
#The characteristics of the space, which establish a sense of realism to the participant.
#The manner and extent to which the real world participant is able to affect the shared space.
#The nature and form of persistence that the artificial space retains.
Throughout this section we will examine the current state of these components: the ideas and literature contributing to the current expression of these concepts in the form of currently available virtual worlds. The realisation of virtual worlds in software has been (and continues to be) a rapidly evolving field, continually consolidating mixed influences from fiction, mechanical and electrical engineering, computer science, gaming theory, telecommunications, social science, commerce, religion and sociology. It is a field where advances are made as much in the act of amateur invention as in formal science, and a field in which the academic literature frequently lags the leading edge of the advances by a significant degree.
===2.2.2 Recognising a Virtual World by its Features===
While there is not as yet a single common set of universally accepted attributes, the literature offers a variety of feature based definitions that attempt to provide a basis for classifying whether a given application or environment is, or is not, a virtual world. Across these competing views there are some features that are most frequently repeated.
Coming from the perspective of virtual worlds as gaming platforms, Bartle (2003, pp. 3-4) proposes that a virtual world should adhere to the following conventions:
*'''Physics''': The world contains automated rules for the players that effect change in the world.
*'''Character''': The player is part of the in-world experience, represented by a character with which they strongly identify.
*'''Interactions''': All interactions with the world are channelled through the character.
*'''Real-time''': Interactions in the world take place in real-time.
*'''Shared''': The world is shared with other characters in common.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While the player is not present, their state in the game remains unchanged.
Bartle tends to use the term character for what this thesis refers to as an avatar, and considers that the player (identified as ‘the intelligence’ in this thesis) must strongly identify with that character. In the context of role-playing games, where the player assumes an identity not their own, this aspect of the feature list recognises the effectiveness of the immersion and sense of presence the player experiences (concepts we will explore later); but outside of this space, where the player and the ‘character’ may be one and the same, this feature is less of a distinguishing criterion.
His use of the term Physics in the context of an application genre that may include 3D environments is perhaps a little confusing. In these spaces Physics most commonly refers to the physics engine that manages the simulation of an avatar and object dynamics in the space (such as gravity, acceleration, force, momentum and limb movement, etc). As used by Bartle, the term includes the ‘business rules’ and behaviours of the system – the rules governing all interaction, not just those simulating physical movement.
The nature of the shared space and interactive channel imply that the actions of one player affect the experience of another.
Edward Castronova (2001, pp. 5-6) proposes that a virtual world should have the following features:
*'''Interactivity''': The world exists on one computer and can be accessed via a network (or the internet) by many simultaneous users. The actions of each user influence other users in the world.
*'''Physicality''': Users access the world by a computer, which provides a first person view of the world, the world is generally ruled by natural laws much like the real world with scarcity of resources.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While the player is not present, their state in the game remains unchanged.
Castronova’s feature requirements are essentially a subset of Bartle’s, although with the possible omission of the expectation that interaction is necessarily real time.
Sun Microsystems Inc (2008, p. 3) proposed the following common features of open virtual worlds (ie multi-user virtual worlds open to public access over the internet):
*Shared space, allowing multiple users to participate simultaneously.
*Users interact with one another and the environment.
*Persistence.
*Immediacy of the interactions.
*Similarities to the real world rules.
We might perhaps reject Sun’s expectation of any need to assimilate ‘real world rules’, as this would exclude many fantasy role-playing games from being classed as virtual worlds, but aside from this, Sun’s list is essentially consistent with the views of Bartle and Castronova.
These three sources are essentially consistent with the body of the literature. Making allowance for additional attributes and some latitude in interpretation, we can establish a minimum feature list that would be generally accepted:
*The environment is shared;
*Interactions are in real-time;
*A person participates in the world through some form of representation with which they identify and are identified and that facilitates interaction and recognition (such as a character or avatar);
*Interactivity in the world is channelled through the avatar;
*Changes induced by a participant influence the experience of the space for other participants;
*Rules govern the world and interactions are shared and commonly applied; and
*The world is persistent.
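This minimum feature list can be read as a membership test. A minimal sketch (illustrative only; the feature labels and function name are this sketch's own, not drawn from any of the cited authors):

```python
# Encode the minimum feature list as a set of capability labels.
MINIMUM_FEATURES = {
    "shared",          # the environment is shared
    "real_time",       # interactions are in real-time
    "avatar",          # participation through an identifiable representation
    "avatar_channel",  # interactivity is channelled through the avatar
    "mutual_effects",  # one participant's changes affect others' experience
    "common_rules",    # rules are shared and commonly applied
    "persistent",      # the world persists between sessions
}

def is_virtual_world(capabilities):
    """True if a candidate environment exhibits every minimum feature."""
    return MINIMUM_FEATURES <= set(capabilities)

# A single-user walk-through lacks sharing and mutual effects,
# so it falls outside the definition.
walkthrough = {"real_time", "avatar", "avatar_channel", "common_rules", "persistent"}
print(is_virtual_world(walkthrough))       # -> False
print(is_virtual_world(MINIMUM_FEATURES))  # -> True
```

Treating the criteria as a simple conjunction matches the way the feature lists of Bartle, Castronova and Sun are combined above: an environment missing any feature is not classed as a virtual world.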
==2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World==
While Bartle (2003) refers to a participant’s projection into a virtual world as a “Character”, the more widely accepted name today for a real world participant’s projection into a virtual world is an Avatar. This is the term this thesis will be adopting in this research.
The word avatar derives from avatara, a Sanskrit word meaning “descent of a deity” or incarnation, and is utilised by the Vaishnavism religious tradition of Hinduism. The Hindu concept of an avatar is thought to originate as early as the second century B.C.E. (Sheth, 2002). One of the most recognised Hindu deities is Vishnu (Figure 1). In Hinduism, Vishnu is said to have a standard list of ten avataras (collectively known as Dasavatara), one of which is said to be Buddha (Siddhārtha Gautama), the founder of Buddhism (Sheth, 2002).
[[image:Vishnu_Hindu_Avatar_001.jpg]]
Figure 1. Hindu Avatara
Left: Visnu (or Vishnu) Hindu deity the protector and preserver of the universe
Right: Ten avatars of Visnu (Dasavatara)
(Vivekananda Centre, 2008)
In computing terms, little has changed from the original Sanskrit meaning of avatar. As with the Hindu avatara, the virtual world participant can be thought of as “descending”, or being “projected”, from reality to become a computational representation in a virtual world. In virtual worlds, an avatar is generally (although not exclusively) a graphical representation of the user’s persona (Deuchar & Nodder, 2003), although it can also be a representation of a system or a function in some applications (Sheth, 2003), or a simple name in the form of a text string (in some text-based MUDs), and is evolving to include virtualisations of other senses (such as aural and tactile) (S.-Y. Lee, Kim, Ahn, Lim, & Kim, 2005). The graphical representation of an avatar is thought to originate from a networked multi-user virtual world game called Habitat in 1984 (Bye, 2008; Morningstar & Farmer, 1990). Early research seems to suggest that the use of digital avatars in virtual worlds reduces users’ inhibitions and dissolves, or reconstructs, social status among users (Dede, 1995; Dickey, 2003; Rheingold, 1993).
The projected form is not necessarily a recognisable representation of the real world human form. In his or her projected form, for example, the avatar might be represented as an image of a human, an animal, an animated mechanical object, a simple name, or any form appropriate to the virtual world and within the technical capabilities of that world’s object management systems. For example, in Eve (a space-based virtual world) all avatars are space ships, whereas in Second Life (a social-based virtual world) an avatar can take any form (Figure 2); regardless of appearance, your avatar’s name remains the same.
[[image:SecondLife_Digital_Avatars_002.jpg]]
Figure 2. Digital Avatars of Second Life (Levine, 2007)
In terms of today’s virtual worlds, and for the purposes of this research, an avatar should be thought of as a combination of a representation, an agent and an intelligence:
#The ''representation'' may be visual, aural, tactile or any other sense conveying the presence of the avatar to other avatars or agents in a virtual world.
#The ''agent'' is the library of capabilities of the avatar in a virtual world.
#The ''intelligence'' (or actor) provides the tactical and strategic control of the avatar, which could be artificial or natural (e.g. human).
In a virtual world the decisions of the intelligence are communicated to, and realised by, the agent. The consequence of the agent realising (enacting/implementing) the intelligence’s commands may be a change in the state of both the agent and the representation. For example, in a 3D graphical virtual world, a command to walk issued by the intelligence might result in the agent changing position and entering a movement or walking state, triggering the representation to display a walking animation (enter a walking animation state).
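The representation/agent/intelligence decomposition described above can be sketched in code. This is a minimal illustrative sketch, not drawn from the source; all class and method names here are hypothetical, invented only to show how a walk command issued by the intelligence propagates through the agent to the representation.

```python
# Hypothetical sketch of the avatar as representation + agent + intelligence.

class Representation:
    """Conveys the avatar's presence to the world (here, a visual animation state)."""
    def __init__(self):
        self.animation = "idle"

    def play(self, animation):
        self.animation = animation


class Agent:
    """The library of capabilities of the avatar in the virtual world."""
    def __init__(self, representation):
        self.position = (0, 0)
        self.state = "standing"
        self.representation = representation

    def walk(self, dx, dy):
        # Realising the intelligence's command changes the agent's own state...
        self.position = (self.position[0] + dx, self.position[1] + dy)
        self.state = "walking"
        # ...and triggers the representation to enter a walking animation state.
        self.representation.play("walking")


class Intelligence:
    """Tactical/strategic control; could be natural (human input) or artificial (AI)."""
    def __init__(self, agent):
        self.agent = agent

    def command_walk(self, dx, dy):
        # Decisions of the intelligence are communicated to, and realised by, the agent.
        self.agent.walk(dx, dy)


rep = Representation()
agent = Agent(rep)
avatar = Intelligence(agent)
avatar.command_walk(1, 0)
print(agent.position, agent.state, rep.animation)
# prints "(1, 0) walking walking"
```

The point of the separation is that the intelligence never manipulates the representation directly: swapping a human controller for an AI, or a 3D animation for a text description, leaves the other two layers unchanged.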
==2.4 A Taxonomy of Virtual Worlds==
===2.4.1 Introduction===
As might be expected, the literature contains extensive discussion of the appropriate taxa to be applied in classifying virtual worlds, and an equal measure of disagreement among authors as to the appropriate criteria to be applied. In spite of the range of discussions, most attempts are incomplete and therefore capable of classifying in a useable form only a portion of the genre. To be fair, this space is rapidly evolving: possibly as fast as it is classified, a new entrant appears that changes the paradigm, and old entrants are updated to include new capabilities.
===2.4.2 A Taxon for Virtual Worlds===
Outside of the education and virtual reality streams, possibly the largest single family of virtual worlds is that developed for games. While not actually claiming to propose a taxon, Bartle (2003, pp. 38-61), whose pedigree is essentially from the gaming stream, proposes a set of attributes that can be used to classify virtual (game) worlds. Not surprisingly, the attributes are most relevant to multi-user game-focussed virtual worlds, but they provide a workable superset of current thought on the matter and with some adjustment can be extended to more general examples of virtual worlds. He suggests that a virtual world can be categorised according to the following taxa:
#'''Appearance''': To a ‘newbie’ (Bartle’s term for a new user of a virtual world application) the distinction is whether the virtual world is a ‘text-based’ MUD, ASCII, graphical 2D or graphical 3D etc. To an ‘oldbie’ (as described by Bartle) this is only an interface issue and therefore not as important as the other listed categories.
#'''Genre''': Whether the world is fantasy, cyberpunk, horror, social, etc.; the plot or setting of the virtual world. This taxon is most helpful with purpose-focussed virtual worlds. In the non-gaming or semi-gaming space occupied by some generalised social worlds, the virtual world is as much a platform on which other ‘sub-worlds’ can be based, and thus the genre of the virtual world can be all other genres. Examples of this might include PLATO and Second Life.
#'''Codebase''': Although hidden from the user, and therefore of less importance to them, the codebase is an important aspect for the designer of a virtual world. The codebase defines the technical makeup of the world - reusable content and controls, scripting language, database structure etc. This researcher suggests that the codebase is not a single taxon, but should perhaps be separated into multiple taxa. In its place one might propose the content management, asset management, game engine, environment application programming interface, AI, and scripting function library within the system as more relevant technical matters.
#'''Age''': How long the virtual world lasts is an important measure of its success. Generally, the longer a player (or user) can be kept interested, the longer the virtual world survives, which in turn attracts new users and adds to the player base of the virtual world.
#'''Player base''': How large the player (or user) base of the virtual world is. This measure varies depending upon what is counted: for example, the number of registered users, the number of avatars (a user can have more than one character in a virtual world, though in general not for simultaneous use), simultaneous users logged in, hours played per user, access over a period of time, number of active subscriptions, etc. In some worlds the meaningful measure of player base is in fact the number of owner-occupied ‘acres’ of virtual land (as opposed to general users of the virtual world). The player base measures the current success of the virtual world, its popularity so to speak, which in turn lengthens the age of the virtual world. Given the number of ways a player base can be structured and measured, a single measure is open to both misinterpretation and reporting manipulation, and for some measures (like subscribed users, where some subscriptions are paid and others free) may be completely erroneous when comparing one virtual world to the next.
#'''Degree to which they can be changed''': Virtual worlds vary in the degree to which a user can change or add to the content of the virtual world. Virtual worlds such as World of Warcraft (and most game-based virtual environments) allow no change by the player, with all content created by the developers of the virtual world. Other virtual worlds such as Second Life, Active Worlds, TruePlay and PLATO rely on content created by the community. In the case of Second Life (for example) the entire virtual world is made from user-created content, the platform providing building tools, import and export capabilities, out-of-world interfaces and communications capabilities, an extensive library of API functions and a scripting language. The degree to which a virtual world’s content can be changed by the user adds to the technical codebase complexity and to the user’s (and, in multi-user virtual worlds, other users’) experience of and within the virtual world.
#'''Degree of persistence''': Bartle defines persistence as the degree to which a world’s state remains intact if the virtual world is shut down and restarted. He classifies persistence into ‘discrete’ or ‘continuous’ groups. At the extreme, a discrete virtual world would regenerate - described as a ‘Ground Hog’ world (named after the movie). Here all content and the location of the player would be reset to the start of play. In a continuous virtual world the content and locations are retained through a restart.<BR />Persistence also relates to what happens to the world when a user logs off: does the virtual world continue to evolve without the individual player, and if so, can the player’s state be affected while offline? A virtual world generally displays some level of persistence, and the term is generally used to distinguish whether a ‘virtual world’ is really a ‘world’ or in fact just a simple ‘Ground Hog’ environment (see Gehorsam, 2003). The ultimate level of persistence is that akin to the real world, which is constantly evolving and changing regardless of our existence within it.
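Bartle's seven attributes can be read as the fields of a classification record. The following is a minimal sketch, not from Bartle's text: the field names and the example values (particularly the age and player-base figures, which are placeholders) are this sketch's own assumptions, included only to show the attributes applied to a single world.

```python
# Hypothetical record capturing Bartle's classification attributes for a virtual world.
from dataclasses import dataclass


@dataclass
class BartleTaxa:
    appearance: str      # e.g. "text", "graphical 2D", "graphical 3D"
    genre: str           # plot or setting: "fantasy", "social", etc.
    codebase: str        # technical makeup (arguably several taxa, as argued below)
    age_years: float     # how long the world has lasted
    player_base: int     # caveat: many incompatible ways to count this
    changeability: str   # degree of user-created content permitted
    persistence: str     # "discrete" ("Ground Hog") or "continuous"


# Second Life described in these terms (numeric values are placeholders, not data).
second_life = BartleTaxa(
    appearance="graphical 3D",
    genre="social (platform for other genres)",
    codebase="user scripting language, building tools, API library",
    age_years=0.0,
    player_base=0,
    changeability="entire world made from user-created content",
    persistence="continuous",
)
print(second_life.persistence)
# prints "continuous"
```

A flat record like this also makes the weaknesses discussed below concrete: `codebase` and `persistence` each bundle several independent properties into a single field.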
With some modification and generalisation most of the taxa can be applied in the general case of gaming and non-gaming virtual worlds. To be applied outside of the narrow RPG (Role Playing Game) grouping, the classification system would benefit from some subdivision of elements.
We have already noted codebase as one such category. Codebase is such a wide group that it could be applied to every functional capability of the virtual world not covered by another taxon, and thus is of limited help in establishing a consistent framework for classification. For example, Castronova’s (2001) taxonomy recognises a grouping under marketplaces (implying commercial functionality), while both Kish (2007) and Cavazza (2007) recognise groupings covering paraverses (although they use different terms). In Bartle’s taxa these might both be covered as distinguishing characteristics under codebase, yet the one relates to the ability to conduct real-world commercial transactions in the space, while the other addresses the merging of real-world content with virtual world content.
Persistence as framed by Bartle mixes multiple discrete concepts: host state persistence, user state persistence, environmental evolution, and scenario persistence. This last item is generally typical of games (such as quest-driven environments where, on restarting a ‘quest’, the user can rely on the sequence of events being a repetition of the sequence that occurred previously – effectively a ground-hog space within a larger persistent environment), and absolutely essential for simulators and learning systems, where a user taking a course should be able to rely on the lesson replaying in a consistent and predictable way each time (unless variation is an intended part of the training, as in a military battlefield virtual world). In order to classify virtual worlds, recognising these attributes independently of each other would be more helpful than identifying the world as persistent or not persistent; nor are the sub-features linearly related – i.e. one form of persistence does not imply the inclusion of another form of persistence (Purbrick & Greenhalgh, 2002).
===2.4.3 Applied Taxonomies===
While Bartle proposes a reasonably extensive set of attributes (taxa) for classification, some authors have proposed simpler classification regimes, although all seem as yet to avoid claiming an actual taxonomy.
Kish (2007) recognised that with the appearance of the weakly defined ‘Web 2’ technologies, virtual worlds could be seen to encompass a wider range of social networking and world-imagining spaces. Kish’s classification groups virtual environments into the broad categories (Figure 3):
#'''MMORPGs''': Massively Multiplayer Online Role Playing Games. A category which includes text and graphical gaming environments with the common theme of role playing and containing internally a hierarchical, level based player grading system to determine expertise and implied seniority, and generally plot or quest driven and goal oriented as their linking characteristic. Typical examples might include World of Warcraft, Entropia Universe, Everquest, MUDs, etc.
#'''Metaverses''': Imagined public fantasy spaces, emphasising social interaction and creativity and lacking a single plot or purpose for participation. Generally exhibiting a devolved structure without a single levelling system or clear environment-imposed hierarchic seniority system[3]. Typical examples might include Habitat, Second Life, Active Worlds, Furcadia, etc.
#'''Paraverses''': Spaces that intersect with the real world, incorporating content from the real world and thus could be described as virtual extensions of the real world. This group potentially includes many of the Web 2 spaces that contain sufficient functionality to create in the minds of their users a ‘real’ virtual community as strongly present to the participant as their real world existence.
#'''Intraverses''': Spaces that are otherwise Metaverses or MMOLE’s but private or closed to the broader public. Virtual reality environments could be seen generally to fall into this category as well as private/corporate implementations of public virtual world spaces. Typical examples might include Qwaq, Sun System’s Wonderland, IBM’s Metaverse, etc.
#'''MMOLEs''': Massively Multi-user Online Learning Environments. Possibly the oldest class of virtual worlds, as it includes systems such as PLATO, and typified by educational environments supporting user social interaction. Primarily purpose (though not necessarily goal) driven – such as learning, training, idea exchange, simulation, etc. This space includes the dedicated training/teaching environments of PLATO and planning/simulation management systems such as SIMNET, Blackboard, Boston College’s Media Grid, etc.
[[image:Kish_Virtual_Geography_003.jpg]]
Figure 3. Virtual Geography (Kish, 2007)
Cavazza (2007) proposes that a virtual world should be open (public) and contain taxa supporting strong and generalised capabilities in each of the dimensions (Figure 4):
#Social networking
#Gaming
#Entertainment
#Business
[[image:Cavazza_Virtual_Universes_Landscape_004.jpg]]
Figure 4. Virtual Universes Landscape (Cavazza, 2007)
Consequently most of the virtual worlds identified by other authors are excluded from Cavazza’s definition of virtual worlds, but included under the broad category of ‘Virtual Universe’. To illustrate this idea Cavazza has classified a huge range of existing virtual environments:
#Social
#*2.5 & 3D Chats
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Virtual Worlds
#Game
#*MOG
#*Sports
#*MMORPG
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Adult Games
#*Virtual Worlds
#Entertainment
#*Virtual Sex
#*Virtual City Guides
#*2.5 & 3D Chats
#*Avatar Centric
#*Branded Universe
#*Virtual World Generators
#*Virtual Worlds
#Business
#*Serious Games
#*Virtual Marketplaces
#*Adult Games
#*Virtual World Generators
#*Virtual Worlds
Cavazza’s definition and classification system is extensive, and possibly the most comprehensive to date. While Kish’s classification tends to focus on functionality, Cavazza’s emphasises purpose. Nevertheless, there is significant crossover in their ideas. For example, both recognise the difference between games and social networking, and both accommodate the paraverses in a special category (Cavazza includes them in ‘Virtual City Guides’ among other groups). Cavazza’s analysis, however, does not accommodate the education, training and simulation virtual spaces present in Kish’s categorisation, although it might be argued that these are covered in multiple categories including ‘Virtual World Generators’ (eg PLATO, VastPark) and Serious Games (training simulators).
==2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality==
Virtual Reality environments are generally a combination of user interface hardware (such as headsets and data gloves) and software. The availability of the (often costly or purpose built) user interface hardware has meant that the majority of these environments are either single user or very small scale multi-user environments (Jones & Hicks, 2004; Miller & Thorpe, 1995). A direct consequence of this is that Virtual Reality environments have tended to ignore the dimensions of user interaction, game play and collaboration in favour of the technology of immersion. This fact, possibly more than any other, has predisposed some authors to exclude virtual reality spaces from the domain of virtual worlds (Bartle, 2003; Yee, 2006).
While Bartle’s virtual world definition contributes part of the definition we have adopted for virtual worlds in this research, the researcher departs from the entirety of Bartle’s embodiment of virtual worlds as expanded in that work. Bartle holds that a virtual world has a meaning divergent from that of virtual reality, believing that “Virtual reality is primarily concerned with the mechanism by which human beings interact with computer simulations… [rather than] the nature of the simulations themselves” (2003, p. 3). To this extent Bartle’s definition specifically excludes virtual reality spaces from the definition of virtual worlds.
This researcher adopts a view, consistent with some other writers in the field, that writing virtual reality spaces out of the definition of virtual worlds places the emphasis narrowly on the social and gaming dimensions of these worlds and away from the immersive experience. It thereby excludes the vast body of research that predates, or has been done in parallel with, the development of gaming virtual worlds (Cosby, 1999; Heilig, 1955; Pimentel & Teixeira, 1994; Rheingold, 1992; Schroeder, 1997; Steuer, 1992; Sutherland, 1965; Walker, 1990; Woolley, 1994), and constrains the consideration of these environments in the education context to their collaborative and scripting capabilities.
Other authors have adopted definitions of the virtual world concept wider than that posited by Bartle, although in most cases still excluding some portion of the body of work that has contributed to the space. Dickey (2005, p. 439) implies an exclusion of 2D and non-visual environments while providing: “Three-dimensional virtual worlds are a networked desktop virtual reality in which users move and interact in simulated 3D spaces.” Similarly, McLellan (2004) presents 10 classifications of virtual reality, a single-user virtual world being classified as ‘through the window’ whereas a multi-user virtual world would be classified as ‘cyberspace’. Mazuryk and Gervautz (1996) make no distinction in the number of users in the virtual world but define a virtual world to be a ‘desktop VR (virtual reality)’ or a ‘Window on World (WoW)’ system. Biocca and Delaney (1995) define a virtual world to be a ‘window system’: a computer generated three-dimensional virtual world viewed either on a computer screen or with the assistance of a head mounted display.
This researcher’s view is that all of these definitions are correct, but incomplete and that a definition that allows the participation of all of these examples is the most useful and appropriate in the education context. To appreciate the reasoning behind this argument we must look at some of the history of the development of the technologies and concepts that have contributed to the current family of virtual worlds and the problems and purposes these stepping-stones intended to resolve or achieve.
Authors adopting Bartle’s view have generally also adopted the view that virtual reality is essentially a hardware interfacing technology and hence that the environments managed in this space are of no consequence. The misconception that virtual reality is a collection of hardware (data gloves, head mounted displays etc) neglects the very meaning of virtual reality, which seeks to evoke a feeling of immersion and presence within the virtual space. In the virtual reality research stream, using external hardware devices to enter a virtual world is only one method by which immersion and presence are achieved (Briggs, 1996; Steuer, 1992). No external device will ensure a user’s experience of immersion if the world they enter is an unconvincing generator of an alternative reality for the participant. Furthermore, if virtual reality is to be excluded from the scope of the definition of virtual worlds, then the existence of VR plug-and-play devices such as stereoscopic headsets, data gloves and haptic controls that are readily available for use with many mass-market virtual worlds (that otherwise would fall within Bartle’s definition) – for example, the Vuzix iWear headset, the Evolution Motion Glove for the PS1, the Wii Remote for the Nintendo Wii, the MS Force Feedback controller for Flight Simulator, etc. – would seem to contradict the proposed disconnect between the study of virtual worlds and virtual reality. Lastly, the exclusion of virtual reality environments from the definition of virtual worlds ignores the fact that in the 3D virtual world space many of the technologies and concepts utilised were contributed by the virtual reality research stream (as will become clear from the history presented in the following sections).
In the education context, virtual reality technologies (as expressed, for example, in simulators) are a critical and essential contribution to the pantheon of virtual (training) worlds (Bailenson et al., 2007; Dede, 2004). In this researcher’s view, virtual reality environments are a subset of virtual worlds, and the two spaces are increasingly converging – if they have not already converged in current virtual world examples such as America’s Army, Second Life, etc. and massive multiplayer training environments like SIMNET (Lang, Maclntyre, & Zugaza, 2008; Lenoir, 2003; Zyda, 2005).
==2.6 Dimensioning Virtual Worlds==
===2.6.1 The Degree of Virtuality===
The degree to which a world is ‘virtual’ can be looked at as a sliding scale between physical and virtual. Milgram and Kishino (1994) present a taxonomy for mixed reality visual displays called a ‘reality-virtuality continuum’ (Figure 5). On the left hand side of the scale is the ‘real environment’, which is equivalent to the real or tangible world, while on the extreme right is the ‘virtual environment’, which is equivalent to an artificially generated world. The region between these two extremes is classified as ‘mixed reality’ (MR), made up of a combination of both real and virtual matter.[4]
[[image:Reality_Virtuality_Continuum_005.jpg]]
Figure 5. Reality-Virtuality Continuum: Representation Scale for Visual Display
(Milgram & Kishino, 1994)
Figure 6 illustrates an example of the use of the reality-virtuality continuum taken from the MagicBook Project (Billinghurst, Kato, & Poupyrev, 2001). On the left of the figure is a book that is real (i.e. the real world environment); in the middle is the same book viewed through an Augmented Reality (AR) display, where figures appear like pop-up characters on top of the book (i.e. mixed or augmented reality); while on the right the same book is viewed within a virtual environment where the “reader” becomes the characters within the book.
[[image:The_Magic_Project_006.jpg]]
Figure 6. The MagicBook Project: An Example Of The Full Reality-Virtuality Continuum
While the MagicBook project was conceived around the integration of physical (tangible) real world objects with digitally generated virtual world objects, when the real world objects are themselves digital or intangible – such as course materials comprising photographic images, text, or other digital content – the merging of the ‘Real World’ and the ‘Virtual World’ becomes less obvious. For example, real world authors Pamela Woodard and Wilbur Witt have published their works in the Second Life virtual world first or simultaneously with publication in the real world (Bell, 2006). The Second Life virtual world can integrate conventional HTML web page content directly into the virtual environment (Release Candidate, 2008). Content developers, and particularly trainers and presenters in Second Life, routinely import textures and slides and stream sound and video from outside the virtual world into the virtual space.
In the context of Milgram and Kishino’s reality-virtuality continuum, this research focuses on the right hand end of the scale, i.e. using a desktop display of a virtual world in which all content is delivered virtually. In contrast to the MagicBook project, this research considers (in the education context) the affordances of two virtualisation strategies: a direct reproduction of real world delivery in the virtual world (in part, by importing materials not generated in the virtual world), and a transformation of real world material into virtual material (in part, by recasting those materials into virtually generated form).
===2.6.2 The Degree of Immersion and Presence===
====2.6.2.1 Introduction====
Virtual reality literature often separates a user’s experience of a virtual environment into physical and psychological components (Benford, Greenhalgh, Reynard, Brown, & Koleva, 1998; Biocca & Delaney, 1995; Sheridan, 1992; Slater, 1999; Slater & Wilbur, 1997; Steuer, 1992). The psychological components include interaction (or connectedness) and belief: the contribution of the participant, or their willingness to believe in a reality they would otherwise know to be unreal. The physical components are aided by the external mechanical and functional capabilities of the system.
In exploring the factors determining the effectiveness of virtual reality environments, Burdea and Coiffet (2003) determined that the aim of virtual reality is to achieve a trio of ‘Immersion, Interaction and Imagination’ (Figure 7), each of which holds equal significance for the user’s experience of virtual reality systems. A virtual reality system seeks to engage the user fully in the virtual space. They proposed that excluding any one of these features reduced the user to passive participation, and ultimately detracted from the perceived ‘reality’ of the experience.
[[image:Immersion_Interaction_Imagination_007.jpg]]
Figure 7. The Three I's of Virtual Reality
Steuer (1992) defined user involvement as a combination of human experience which in turn is dependent on the technology (Figure 8). Telepresence (or presence) is the human sensation of ‘being there’ in a virtual environment[5], and is seen as influenced in part by the technology in terms of the vividness (richness, realism) and interactivity (response) of the environment.
[[image:Steuer_Variables_Influencing_Telepresence_008.jpg]]
Figure 8. Technological Variables Influencing Telepresence (Steuer, 1992)
Slater and Wilbur (1999; 1997) revisited these concepts in later work, defining a user’s experience in terms of immersion and presence. Immersion is seen as an objective measure of ‘systems immersion’ technology, such as field of view, quality of display etc., while presence is seen as a subjective measure, a psychological sensation of ‘being there’. From here on we will use the terms immersion and presence as defined by Slater and Wilbur.
====2.6.2.2 Immersion====
Benford et al. (1998) propose classifications of artificiality and transportation for collaborative environments (Figure 9) that extend Milgram and Kishino’s reality-virtuality continuum. Artificiality (physical-synthetic) is equivalent to the reality-virtuality continuum. Transportation (local-remote) is the degree to which a participant becomes removed from their local space to operate in a remote space, which they define as similar to the concept of immersion. For example, CVEs (Collaborative Virtual Environments[6]) are placed on a scale from partial to remote transportation. A fully immersive CVE would represent the ultimate level of transportation: a virtual reality system using devices such as HMDs, data gloves, and tactile and aural equipment that allowed for no outside distraction, in which the participant would operate completely within the virtual environment and be fully remote from their local environment[7]. A desktop CVE, in contrast, is only partially immersive, as one’s local surroundings form a part of the virtual environment, e.g. a field of view that allows the head to be turned away from the virtual space (Sheridan, 1992). In the context of Benford et al.’s transportation scale, this research is conducted using desktop CVEs and is therefore only partially immersive.
[[image:Artificiality_Transportation_as_SS_Metrics_009.jpg]]
Figure 9. Shared Space Technology According to Artificiality and Transportation
====2.6.2.3 Presence====
Research in online gaming virtual worlds has tended to focus on the human experience (presence) of virtual worlds rather than the ‘systems immersion’ aspects, while studies of virtual reality environments have tended to consider both. This is possibly a function of the common standard interface for massively multiplayer game environments, which has traditionally been the desktop computer equipped with a mouse and keyboard. Although more advanced input devices (head mounted displays, 3D mice, etc.) have been available to the mass market for many years, they are not yet widely utilised.
The degree of presence is often linked to the effectiveness of a virtual environment (Witmer & Singer, 1998), yet due to its subjective nature it is possibly the most difficult aspect to comprehend and therefore to measure (Slater & Usoh, 1993). Hence, this area has been widely researched, with various explanations as to what constitutes presence in a virtual environment (Schuemie, Straaten, Krijn, & Mast, 2001). The sense of ‘being there’ in the environment is subjective: Slater and Usoh (1993; 1994) describe presence as similar to a person’s ‘willingness to suspend disbelief’, a concept derived from the British poet and literary critic Samuel Coleridge (1772-1834), who in his autobiography (1817) describes the phenomenon whereby a person becomes so engaged in a narrative that they are willing to believe an event is true, even if only for a brief moment. Although suspension of disbelief is today often linked with media such as film and literature, virtual worlds (especially Role Playing Game (RPG) worlds) provide many of the same traits, in which the user can be thought of as an actor within the virtual world who forms part of the storyline.
A number of presence classification strategies have been proposed by various authors. We will consider:
#Schroeder - focussing on the importance of social interaction
#Bartle – focussing on the degree of commitment in the environment
Schroeder (2006) presents presence in a continuum of shared virtual environments (SVEs) within a three-dimensional model (Figure 10). Presence (x), copresence (y) and connected presence (z) can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. Connected presence can be thought of as the extent to which a relationship is mediated when presence and copresence exist. Mapping is done by comparison with a physical face-to-face relationship (0,0,0) and an entirely immersive environment such as a networked Cave (1,1,1). For example, face-to-face interaction is (0,0,0): no meeting takes place in a virtual environment, so there is no presence (and thus no copresence), whereas in the case of a networked Cave (1,1,1) the entire relationship (and environment) is virtual, with affordances for high connected presence.
[[image:Presence_Copresence_Connected-Presence_010.jpg]]
Figure 10. Presence, Copresence, and Connected Presence
In different media for being there together
Of interest in Schroeder’s model is the comparison of desktop SVEs and online computer games. The example given in the model for a desktop SVE is Active Worlds, a massively multiplayer online (MMO) social virtual world, and the example provided in his paper for an online game is Quake, which at the time provided for up to 16 players sharing a common space. Both are virtual worlds, use text chat and sound, and use avatars to project the participant into the virtual world (although Quake takes a first person view exclusively). For the purpose of the analysis the main differences were perceived to be the number of simultaneous players sharing the common virtual space and the imposition of clear game-driven objectives in Quake, against the absence of those same objectives in Active Worlds. Yet Active Worlds was seen as providing the higher level of connected presence. Why? The distinction was seen to lie in the concept of the ‘game’ rather than the number of players, when compared to the other SVEs presented in the model. Active Worlds is a social world in which no plot is provided to measure the success or failure of an individual, unlike Quake, where the measure of success is clear and the entire activity and function of the environment is the relentless pursuit of that individual success. It was therefore deduced that a social (game) world provides more connected presence than an individually focussed, plot-driven gaming virtual world (at least as analysed by Schroeder).
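Schroeder's model places each environment at a point in a three-dimensional space. The sketch below is illustrative only: the two named coordinates, (0,0,0) for face-to-face and (1,1,1) for a networked Cave, come from the model as described above, while the type and attribute names are this sketch's own assumptions; intermediate environments such as a desktop SVE would sit at coordinates the text does not specify, so none are invented here.

```python
# Hypothetical encoding of Schroeder's three axes of shared presence.
from typing import NamedTuple


class SharedPresence(NamedTuple):
    presence: float            # x: "being there"
    copresence: float          # y: "being there together"
    connected_presence: float  # z: "being connected together"


# The two reference points of the model:
face_to_face = SharedPresence(0.0, 0.0, 0.0)    # no virtual mediation at all
networked_cave = SharedPresence(1.0, 1.0, 1.0)  # entirely immersive SVE

# Desktop SVEs and online games would occupy intermediate positions on each axis.
print(networked_cave.connected_presence > face_to_face.connected_presence)
# prints "True"
```

Treating the three qualities as independent axes, rather than a single scale, is what lets the model distinguish environments (such as Active Worlds and Quake) that would look identical on any one axis alone.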
Schroeder’s observation of higher connected presence in social virtual worlds seems to fit with Heeter’s (1992; 2003) definition of social presence, in which she defines presence in terms of individual presence, social presence and environmental presence. The presence of an individual is increased when social relationships are formed, based upon the social component of perceptual stimuli. When an environment or situation is focused on the relationship (rather than on killing a monster, as in RPGs) a higher social presence will be achieved.[8]
Bartle (2003, p. 42) identifies a system of levels of immersion (which in this paper we have defined as presence[9]) based upon a linear scale of: Player (the real person), Avatar (the digital puppet), Character (the representation in the world, e.g. character name, role, etc.) and Persona (your identity in the virtual world, where the player is the Character and is in the virtual world). Persona is similar to the concept of presence: if your character is killed ‘you feel like you have died’; there is no distinction between the character and the player, they are one, the Persona. Bartle believes that the avatar and character are just steps along the way to persona. Persona is when a person ‘stops playing the world and starts living in the virtual world’.
==2.7 Influences on Virtual Worlds from Art and Literature==
===2.7.1 Introduction===
The concept of a virtual world is by no means unique to computing. The thought of exploring an imaginary realm has captivated people’s imagination throughout time.
“If we define that a virtual world is a place described by words and/or projected through pictures, which creates a space in the imagination real enough that you can feel you are inside of it, then the painted caves of our ancestors, shadow puppetry, the 17th-century Lanterna Magica, a good book, play or movie are all gateways to virtual worlds. Humanity’s most powerful new tool, the digital computer, was also destined to become a purveyor of virtual worlds, but with a new twist: The computer enables the virtual world to be both inhabited and co-created by people participating from different physical locations.”(Damer, 2007, p. 2)
At least with respect to the massively multiplayer online virtual worlds/role playing games (MMOVWs, or MMORPGs), all of today’s examples can trace their paradigms to literature. Some, such as Eve, Entropia Universe and World of Warcraft, are amalgams of a body of works and ideas, while others, such as MUD1 (The Phoenix on the Sword (Howard, 1932)) and Second Life (Snow Crash (Stephenson, 1992)), were directly inspired by specific literary works.
Consequently, to properly understand the ‘state of the art’ represented by today’s multi-user, networked virtual worlds and the gaming, social and business rules they have adopted to govern them, it is essential to consider the context from which they have been derived, and the art that has influenced their creators. While some operational paradigms in virtual worlds reflect technology constraints, functional capability constraints can be as much a condition of the imagined world being implemented as a real constraint of the technology of the day. To appreciate this fact one need only compare the camera controls of Project Entropia with those of Second Life (two environments of comparable age), or the commercial capabilities of these two environments with those of World of Warcraft. In each case the differences and apparent restrictions are a game design decision rather than a technology constraint.
===2.7.2 Virtual Worlds of the Arts===
James Pearson (2002) believes that from as early as 30,000 years ago, in the Chauvet Cave in France, shamans used cave art as a means to document their experiences of travel to the dream world. Packer and Jordan (2002) draw a similar parallel in their book on virtual reality, describing how the Cro-Magnon of 15,000 BC in the Lascaux caves of south-western France used cave art (Figure 11), together with candles and the acrid aroma of animal fat, to create a magical theatre of the senses.
[[image:Cave_Art_BC_011.jpg]]
Figure 11. The caves of Lascaux: Cave Art 15,000 BC
The German composer Richard Wagner’s (1813-1883) concept of Gesamtkunstwerk (total artwork) (Figure 12) has also been cited as an early pioneering concept of immersion and presence in virtual worlds (Grau, 1999; Klich, 2007; Packer & Jordan, 2002). Wagner believed that “Artistic Man can only fully content himself by uniting every branch of Art into the common Artwork”, a synergy that includes not only the performance but all that surrounds it, so that mankind “...forgets the confines of the auditorium, and lives and breathes now only in the artwork which seems to it as Life itself, and on the stage which seems the wide expanse of the whole World” (Wagner, 1849, pp. 184 & 186).
[[image:Wagner_Gesamtkunstwerk_012.jpg]]
Figure 12. Richard Wagner's Gesamtkunstwerk (Total Artwork)
===2.7.3 Virtual Worlds of Fiction and Fantasy===
There are numerous examples of virtual worlds that have been explored through fiction and fantasy. Each has contributed to the illusion of virtual worlds becoming a reality (Bartle, 2003; Chesher, 1994).
In Lewis Carroll’s novel, Alice's Adventures in Wonderland (1865), Alice fell down a rabbit hole to explore a fantasy world inhabited by peculiar and anthropomorphic creatures. Similarly, in Carroll’s follow-on novel, Through the Looking Glass (1871), Alice explores a world behind a mirror. Hattori (1991) saw Lewis Carroll’s novels as a paradigm for modern virtual reality systems (Figure 13), blending physical space with fantasy in a rapidly changing environment. To this extent, Carroll’s works provide a perfect analogy for the design and development of virtual worlds (Rosenblum, 1995; West Virginia University, 2008). An explorative virtual world was realised in the children’s computer game The Manhole (1988-2007), which was based upon Carroll’s novel Alice’s Adventures in Wonderland (Wikipedia, 2008a).
[[image:Alice_via_Caroll_and_Hattori_013.jpg]]
Figure 13. 'Through the Looking Glass' Carroll (1871) & 'The World of Virtual Reality' Hattori (1991)
Within the fantasy literary genre, a key influence has been the works of J R R Tolkien, starting with The Hobbit (1937) and its sequel The Lord of the Rings (1954, 1955) (Figure 14): an adventure fantasy that takes place in an imaginary world called Middle-Earth, inhabited by races such as Hobbits, Wizards, Elves, Orcs, Dwarves and Trolls. Tolkien’s literary style was so popular that the Oxford dictionary adopted the term tolkienesque for his approach[10].
[[image:JRR_Tolkein_Book_Covers_014.jpg]]
Figure 14. The Hobbit & The Lord of the Rings by J. R. R. Tolkien (1937, 1954, 1955)
With respect to today’s virtual worlds, Tolkien’s contribution has not been merely the construction of a raft of characters, racial groups and social concepts for role playing game inhabitants and interaction rules, but, most importantly, his deep backgrounding of the imagined worlds. He did not merely describe his characters within the context and flow of the story line; he extended beyond what was needed to tell a story into what was needed to make us believe in the real existence of his virtual worlds, providing the reader with immaculate detail and description to immerse them in the world of Middle-Earth. Both books contained land maps (Figure 14), and the final volume of The Lord of the Rings (released in three parts) contained appendices describing chronologies, histories, family trees, languages and translations, and a calendar and dating system. Being a professor at Leeds and Oxford universities, he approached his work more like an academic anthropological study of an imagined world than a novelist (Macmillan, 2008).
In so doing Tolkien demonstrated a fundamental understanding of a core strategy in establishing convincing presence: the necessity for a consistent, credible back story underpinning the virtual world. It is an early example of the depth of design that many later virtual worlds would exhibit in order to create a convincing sense of presence for the participant (Bartle, 2003; Schmidt, Kinzer, & Greenbaum, 2007).
Two virtual worlds that have been translated from Tolkien’s literature are the online virtual world ‘Lord of the Rings Online’ (2007) and PLATO’s MUD virtual world ‘Mines of Moria’ (1974).
More recently, literature has turned to imagining realities in which computational virtual worlds are a fundamental component of the plot. It is from this group that many of the terms now used to describe aspects and elements of virtual worlds are derived or were popularised, such as ‘avatar’, ‘metaverse’, ‘cyber-space’, etc. Some recent examples of novels in which a computational virtual world is central to the plot are True Names (Vinge, 1981), Neuromancer (Gibson, 1984) and Snow Crash (Stephenson, 1992) (Figure 15).
[[image:Recent_VR_Literature_Covers_015.jpg]]
Figure 15. Recent Literature: True Names (Vinge, 1981), Neuromancer (Gibson, 1984), Snow Crash (Stephenson, 1992)
'''Vernor Vinge’s True Names''' is not as well known as other novels in this genre, but it was the first to present the concept of a person entering a computational virtual world and meeting other people in ‘the other plane’ (Kelly, 1995). It was also unique in bringing the concept of anonymity to the digital world: one’s digital persona (handle) is different from one’s real self, and there is a necessity to hide one’s real identity, one’s true name (hence the title). It was translated to the computational virtual world in the form of ‘Habitat’, the first graphical social networking virtual world (Farmer, 1992).
'''William Gibson’s Neuromancer''', a true cyberpunk[11] novel, is possibly the most widely quoted in the virtual environment space (Chesher, 1994). In this novel Gibson coined the term cyberspace, with the concept of a viable parallel online world capable of critically impacting events and commerce in the real world.
'''Neal Stephenson's Snow Crash''' is where the term Metaverse was coined. The Metaverse is a planet-sized city with one continuous street 65,536 kilometres (2<sup>16</sup> km) in length, along which millions of people (represented as avatars) travel daily in search of entertainment, trade or social interaction. Although similar, in one sense, to Neuromancer, it came from a different perspective: people actually lived in the Metaverse, not as cyberpunks getting up to mischief but as everyday people living a mainstream real life in the virtual world. In this world real commerce was conducted and virtual artefacts were bought and sold with real world consequences, a vision that has since been realised in the development of the virtual world Second Life.
Hollywood also contributed to the fantasy of virtual worlds becoming reality. Films such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and The Matrix (Wachowski & Wachowski, 1999) (Figure 16), to name just a few, gave us the visuals of virtual worlds that the books could only describe, and in some cases explored the haptic interfaces now being realised (Chesher, 1994).
[[image:VW_Films_Tron_LawnmowerMan_Matrix_016.jpg]]
Figure 16. Hollywood Films
Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992), The Matrix (Wachowski & Wachowski, 1999)
At the time of their release, the novels and movies discussed above may have seemed futuristic and their concepts unobtainable, but today we are much closer (if not already there), with advances in networking, computational processing power and our understanding of the sociology of virtual environments. Perhaps a ‘jack-in’ device that stimulates our nervous system to travel into cyberspace (Neuromancer, Gibson, 1984) is still a little way off (and may be too intrusive for some), and smelling odours or feeling textures within a virtual world may never be quite the same as the real life experience, but much that once seemed unimaginable in these works has become reality today. With technological advances and the rapid adoption of internet enabled online virtual worlds, many of these concepts are less science fiction and more science fact than they once were.
==2.8 The History of Computational Virtual Worlds==
===2.8.1 Introduction===
In a lecture delivered by Ivan Sutherland in 1965, the first steps were taken towards combining the computer design, construction, navigation and habitation of software generated virtual worlds (Packer & Jordan, 2002). Here Sutherland laid down a vision for the development of virtual worlds, as paraphrased by Brooks (1999, p. 16):
<blockquote>
“Don’t think of that thing as a screen, think of it as a window, a window through which one looks into a virtual world. The challenge to computer graphics is to make that virtual world look real, sound real, move and respond to interaction in real-time and even feel real.”
</blockquote>
The new-born medium of the graphical, digital virtual world experienced a “Cambrian Explosion” of diversity in the 1980s and ‘90s, with offspring species of many genres: first-person shooters, fantasy role-playing games, simulators, shared board and game tables, and social virtual worlds. (Damer, 2007)
The massively multiplayer online virtual worlds of today, with their world-wide user bases, are essentially a consequence of the mass adoption of the internet, which commenced in the early 1990s. Since the internet first achieved general acceptance, these worlds have advanced substantially in technical capabilities, graphics and number of subscribers (Figure 17) (Woodcock, 2008). See Appendix B: MMOG Analysis, for a break-down of the MMOGs contained in this graph.
[[image:MMOVW_Growth_Rate_017.jpg]]
Figure 17. Massive Multiplayer Online Virtual World Growth Chart 1998-2008
The virtual worlds of today (such as World of Warcraft, Entropia Universe, America’s Army, and Second Life, etc) represent a convergence of several disparate computational, technical and social origins and drivers. Current virtual worlds combine 3D visualisation, game theory, text messaging, animations, context and text sensitive gesturing, natural language processing, spatial voice & audio, artificial intelligence, agency theory, physics, connectedness, persistence, business strategy, sensory hardware and haptic interfaces, telecommunications, 2D image processing, video chroma-keying, social networking and many other influences to achieve their sense of immersion and presence. In this section we explore some of the milestones along these convergent paths.
As many of the influences that have contributed to today’s virtual worlds are derived from research streams that were pursued concurrently over more than 50 years, we shall look at the history of virtual worlds in six streams:
#Hardware based user interfaces and virtual reality environments
#Early graphical computer games
#Text and Text+ based Virtual Worlds
#2.5 and 3D graphical multi-player virtual worlds, broken down into:
#: a. MMORPGs
#: b. Social Virtual Worlds
#Simulation and Training Worlds
It should be noted that, while we will be considering the history in these streams, some virtual worlds necessarily exist in more than one stream. The grouping is that of the researcher, based on an extensive assessment of the literature, rather than the view of any one author.
===2.8.2 Hardware Based User Interfaces and Virtual Reality Systems===
====2.8.2.1 Introduction====
These two areas are grouped together not because Virtual Reality (VR) systems are a hardware solution, but because the work done on virtual reality worlds has generally aimed for extremely high levels of both immersion and presence, and has therefore generally (although not always) been coupled with hardware in the form of purpose built user interfaces designed to assist the sense of immersion, such as headsets, data gloves, etc.
The importance of the progress in VR systems to virtual worlds is that they have contributed or assisted much of the fundamental graphical rendering technologies, 3D animations studies and spatial awareness research and conceptualised the immersive aspects of virtual worlds.
====2.8.2.2 Sensorama====
One of the earliest inventions in the genre of virtual world simulators was developed by the cinematographer Morton Heilig. Inspired by Fred Waller’s work with Cinerama[12], Heilig presented a paper in 1955, ‘The Cinema of the Future’ (reprinted in Packer & Jordan, 2002). In an extension of Wagner’s (1849) Gesamtkunstwerk (total artwork) concept (Holmberg, 2003), Heilig believed that the logical extension of cinema was to give the audience a first person experience of film using all their senses: “Open your eyes, listen, smell, and feel—sense the world in all its magnificent colors, depth, sounds, odors, and textures—this is the cinema of the future!” (Packer & Jordan, 2002, p. 246)
[[image:Morton_Heilig_Sensorama_Simulator_018.jpg]]
Figure 18. Morton Heilig, Sensorama Simulator, U.S. Patent #3050870, 1962
Heilig developed and patented the Sensorama Simulator (Figure 18) in 1962. The Sensorama was a single person simulator offering the viewer a multi-sensory, fully immersive theatre. The viewer sat to watch a short three-dimensional stereoscopic movie that included stereo sound, an odour generator, force feedback handlebars, chair motion and wind on the viewer’s face (Rheingold, 1992). Heilig believed that the Sensorama Simulator could be the next generation of theatre, placed in hotels and lobbies or any small space that could fit his miniature theatre (Heilig, 1955, p. 345).
Heilig also recognised that the Sensorama Simulator offered training and learning potential for educational and industrial institutions (Rheingold, 1992, p. 58), but unfortunately the Sensorama Simulator never took off; it arrived at “a time when the business community couldn’t figure out what to do with it” (Laurel, 1991, p. 52). This might have been different a decade later, when Pong kicked off the arcade game industry and education, industry and government saw great potential in investing in virtual world technology, as they did with the Head Mounted Display (HMD).
====2.8.2.3 Head-Mounted Display====
In 1968 Ivan Sutherland presented the first computerised graphical HMD (Figure 19) (Sutherland, 1968)[13]. The HMD had a cathode ray tube (CRT) for each eye, presenting a simple three-dimensional wire-frame view of a room with motion tracking as the viewer moved their head. It became known as ‘The Sword of Damocles’ after the Greek legend of a man placed in a precarious position of luxury with a sword suspended above his head (Oxford Dictionary, 1989); similarly, the HMD had a computer suspended above the user’s head, attached by a mechanical arm (Figure 19, right) (Carlson, 2003).
[[image:HUD_The_Sword_of_Damocles_019.jpg]]
Figure 19. Head Mounted Display first called The Sword of Damocles (Sutherland,1968)
The HMD was a significant milestone in the development of virtual reality technology, and has since been used in a variety of virtual world applications. It holds advantages over a traditional computer monitor, such as full head and body movement, uninterrupted viewing in totally immersive HMDs, and simultaneous viewing of real world and virtual world artefacts in ‘see-through’ HMDs, sometimes called Augmented Reality Displays (Rolland & Hua, 2005).
Today’s HMDs are more compact than Sutherland’s 1960s prototype (Figure 20). Shown on the left of the figure is an HMD used for mixed reality environments, similar to that designed by Sutherland, and on the right an immersive HMD compatible with several online and gaming virtual worlds.
[[image:HUD_See_Through_and_Immersive_020.jpg]]
Figure 20. Today's Head Mounted Displays - Left: See-Through HMD - Right: Immersive HMD
===2.8.3 Early Graphical Computer Games===
Computer games have had a large influence on the evolution of virtual worlds, both in the development and in the use of the technology. The contribution of games includes computational game theory, 2D and 3D graphics, social modelling, simulation, strategies for achieving presence, artificial intelligence, computational game physics and, possibly most significantly, the delivery of a massive consumer market to fund and drive the investment needed for innovation and technological improvement. The majority of today’s online virtual worlds were conceived and/or delivered as games; some have subsequently evolved into general business or training platforms, sometimes referred to as Serious Games (Annetta, Murray, Laird, Bohr, & Park, 2006).
The early computer games can be traced to a few innovative applications (Figure 21):
*'''Tennis for Two''': In 1958 William Higinbotham developed the first electronic game simulator, using an oscilloscope display that showed a two-dimensional side view of a tennis court. It was a two player game in which each player could control the direction of the bouncing ball by turning a knob on a hand held device. Originally developed by Higinbotham to occupy visitors to Brookhaven National Laboratory during open days, the game had queues of people waiting to play (Brookhaven National Laboratory, n.d.). Tennis for Two introduced the concepts of a shared multi-player electronic game experience, a rule based environment managed by a machine, and an electronic space where the actions of one player in the shared space affected the experience of another. The attention the game attracted demonstrated the willingness of participants to accept the visual and sensory limitations of a machine managed game environment and immerse themselves in the experience.
*'''Spacewar!''': The idea originated in 1961 with Steve Russell at the Massachusetts Institute of Technology (MIT); by 1962 the game had been released with assistance from his colleagues. Spacewar! was the first official release of a two-dimensional computer game.[14] It was a two player game in which each player piloted a spaceship, firing bullets at the other while avoiding being pulled into the sun at the centre of the screen. Developed originally to demonstrate the power of the new PDP-1 computer, the game was a good demonstration of both the graphics capabilities and the processing power of the machine (Computer History Museum, n.d.; Markowitz, 2000). Later, in 1969, Rick Blomme modified the game to run on PLATO, which made it the first game to be networked (Koster, 2002; Mulligan, 2002). While Tennis for Two was the first multiplayer electronic game, Spacewar! was the first computer based multiplayer game. It thus contributed the same key concepts and ideas as Tennis for Two, only for the first time in a computer managed environment.
*'''Maze War''': In 1973-1974 Steve Colley developed the first three-dimensional ‘first person shooter’ (FPS) game, Maze War, at NASA Ames Research Center. A player would navigate around a maze searching for other players to shoot. As seen below (top right), the player had a first person view (the eyeball in this picture is the other player). Placing the player ‘in-world’ as a part of the game is a significant concept in virtual world games. Maze War also provided other innovations now common to virtual worlds, such as instant messaging, levelling and non player robot characters (Damer, 2007). The game, which started as a two player game, was eventually connected to ARPANET (the forerunner of our current internet network technology), allowing several users from remote locations to play and interact (Colley, n.d.; Damer, 2004). Maze War can therefore lay claim to being a progenitor of virtual worlds, but not an actual virtual world, because of its lack of persistence.
[[image:Early_Computer_Games_1958_To_1974_021.jpg]]
Figure 21. Early Computer Games 1958 - 1974
*'''DOOM (1993) (II, 1994)''', a 3D FPS game, was influential on both a conceptual and a technical level (Friedl, 2002; Mulligan, 2000). In DOOM the concept of Maze War was re-implemented in a much more graphically rich 3D environment. Although only a single player game, the key innovation of relevance was the method used to manage the rendering of the 3D space, allowing multiple non-player characters to participate in the 3D environment with the player. The strategy adopted was essentially to divide the world into many small rooms surrounded on all sides by walls (essentially a cave system); by rendering only a single room at a time, the entire resources of the computer could be devoted to a known, confined rendering space, achieving the illusion of a highly detailed rendering with the limited computational resources available on the PCs of the day. Although higher quality 3D rendered games had been available some seven years earlier on Amiga computers from 1986 (including some utilising real-time ray tracing technology), these relied on dedicated, proprietary, games oriented graphics hardware and did not provide a 3D space management paradigm that could easily be translated to the future demands of online 3D games. The DOOM model could, precisely because it was architected for the graphically and processor challenged generalised home PCs of the day rather than proprietary games machines such as the Amiga. The DOOM games engine was utilised in many subsequent games and later formed the basis of the model adopted for the online game Quake (Petrich, n.d.; Wikipedia Doom, 2008).
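The room-at-a-time strategy described above can be sketched in a few lines. The following is a hypothetical illustration, not actual DOOM engine code: the room names, entities and render function are invented, and the point is only that the renderer’s working set is one bounded room per frame rather than the whole level.

```python
# Hypothetical sketch of the "render one room at a time" strategy.
# All names here are invented for illustration.

class Room:
    def __init__(self, name, entities, doors):
        self.name = name          # room identifier
        self.entities = entities  # monsters/items inside this room
        self.doors = doors        # names of adjacent rooms (world topology)

def render_frame(world, current_room_name):
    """Render only the player's current room, ignoring the rest of the map.

    Because the renderer's working set is one bounded room rather than the
    whole level, the full frame budget can be spent on that room's walls
    and entities -- the trick that made detailed rendering feasible on
    ordinary PCs.
    """
    room = world[current_room_name]
    drawn = [f"wall-set:{room.name}"]
    drawn += [f"entity:{e}" for e in room.entities]
    return drawn  # everything outside this room costs nothing this frame

world = {
    "cave_a": Room("cave_a", ["imp", "medkit"], ["cave_b"]),
    "cave_b": Room("cave_b", ["zombie"], ["cave_a"]),
}
frame = render_frame(world, "cave_a")
```

Note that entities in `cave_b` contribute nothing to the frame: the illusion of a large detailed world is maintained while the per-frame cost stays bounded by the size of a single room.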
Around the time of DOOM the games industry realised the importance of connecting people together for online gaming. Seeing the opportunity, developers began adding modem and LAN play, and later TCP/IP functionality, to their games, allowing both single and multiplayer connectivity. Early games allowed up to 4 players, but today’s games can have up to 64 players in a single game session (Quake Wars[15]). Some of the better known titles included:
*'''Quake''' (1996, a multiplayer extension of DOOM) saw over 80,000 people connected to 10,000+ simultaneous game sessions (Mulligan, 2000).
*'''Warcraft''' (1994) (II, 1995), which would eventually become the basis of the largest MMORPG today, World of Warcraft (2004), which now has over 11 million subscribed users (Blizzard Entertainment Inc, 2008).
===2.8.4 Text Based Virtual Worlds===
====2.8.4.1 Text Virtual Worlds: MUDs====
In 1978 the first MUD (Multi User Dungeon) outside of the PLATO system (discussed under Training and Simulators) was created by Roy Trubshaw, a Computer Science undergraduate at Essex University in England, who was shortly afterwards joined by Richard Bartle. A text based virtual world, coined a MUD by Bartle, it was based upon Robert E Howard’s (1932) fictional tale ‘The Phoenix on the Sword’. MUD1[16] was an adventure role playing game with game levelling and chat rooms, which allowed up to 32 players to connect simultaneously over a remote connection (Figure 22) (Bartle, 2003).
[[image:Bartle_The_First_MUD_022.jpg]]
Figure 22. The First MUD: Roy Trubshaw and Richard Bartle (1978)
Early in the game’s history, Essex University, on whose computers the game was hosted, became a part of ARPANET (the forerunner of the internet), and soon after MUD was distributed through that network and played at universities throughout the world. Some of these institutions were also open for public access. Although copyrighted, many variations of MUD1 were made and distributed freely, from what Bartle (2003) describes as either player inspiration or pure frustration with the 32 player limitation, which made it impossible to play when the dial-in lines were fully allocated.
Keegan (1997) identifies two main classifications of MUDs developed during this time (Figure 23): the Essex MUDs (Trubshaw and Bartle’s) and Scepter of Goth (1978). Unfortunately Scepter died an early death; the game was sold and soon afterwards passed to the creditors when the purchasing company ran out of money (Bartle, 2003). Most MUDs were therefore based upon the ideas and technical structure of Trubshaw and Bartle’s MUD (Bartle, 2003; Keegan, 1997).
[[image:Basic_MUD_Tree_Structure_023.jpg]]
Figure 23. Basic Tree Structure for MUD classification
MUD1 introduced a number of concepts retained by most of today’s virtual worlds. Among which are:
*The role and effectiveness of text based narrative and text communication, which contributed to, rather than detracted from, the sense of presence.
*Persistence in game play.
*Shared game space and cooperative (team based) activity.
*Non-player artificial intelligences (AIs, or non player characters) as part of the experience.
*Region based environment management.
*Role-playing as a central game theme.
*Characters and avatars (albeit text based in the early MUDs).
*Game defined goals but player implemented plots.
Region based environment management is a computational aid that warrants particular attention. It was also used by the DOOM 3D graphics engine to manage multi-user environments, allowing the computer to render the shared space one discrete region at a time. In DOOM this was a room, in MUD1 it was a cave; in more recent virtual worlds it may be as much as a 65,000 sqm area (Second Life). This strategy provides a method of scaling virtual worlds to many regions by distributing region management across many discrete servers, but it imposes practical limits on the number of players that can be present in any given region at an instant in time (Hu & Liao, 2004).
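The scaling trade-off just described can be sketched as follows. This is a minimal illustration, not any particular world’s server code: the grid layout, the `RegionServer` class and the cap value of 100 are all assumptions made for the example.

```python
# Illustrative sketch of region based environment management: the world is
# split into regions, each owned by one server, with a hard cap on the
# number of simultaneous players per region. Names and values are assumed.

REGION_PLAYER_CAP = 100  # practical per-region limit (assumed value)

class RegionServer:
    def __init__(self, region_name):
        self.region_name = region_name
        self.players = set()

    def try_enter(self, player):
        # Reject entry once the region is full -- the practical limit the
        # text notes on players present in one region at an instant in time.
        if len(self.players) >= REGION_PLAYER_CAP:
            return False
        self.players.add(player)
        return True

# A 2x2 grid of regions, each managed by its own (simulated) server; adding
# more regions/servers scales the world without enlarging any one region.
grid = {(x, y): RegionServer(f"region_{x}_{y}")
        for x in range(2) for y in range(2)}

def enter_world(grid, player, position):
    """Route a player to the server owning the region at `position`."""
    return grid[position].try_enter(player)
```

The same shape underlies both DOOM’s rooms and Second Life’s regions: capacity scales with the number of regions, while each individual region keeps a bounded, locally manageable population.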
MUD1 had a significant impact on virtual world design and development, dominating the online game space until the mid 1990s; MUD1 is therefore often marked as the beginning of the first generation of online virtual worlds (Bartle, 2003). MUD1 can still be played online today at british-legends.com (CompuServe, 2007).
====2.8.4.2 ASCII Virtual Worlds====
In the early 1980s pseudo-graphical interfaces were added to some MUDs in the form of ASCII virtual worlds. ASCII (American Standard Code for Information Interchange) is the most widely adopted character encoding on western computer systems. ASCII virtual worlds provided a pseudo-graphical display, making use of shape symbols and character positioning escape sequences to create crude planar maps of the terrain (dungeon) environment. The maps enhanced the description of the room provided by the text.
ASCII pseudo-graphical virtual worlds gave the player a view of the world that improved on the simple text prompt and description of MUDs. An example of an ASCII game, Islands of Kesmai (IOK), can be seen below (Figure 24). Developed in 1982 and released in 1984, the game provided the player with a third person, overhead view of the world. Walls were denoted by [], fire by ** and the players by letters (Bartle, 1990). IOK was CompuServe’s (a US ISP) best selling game, with players paying up to $12.50 per hour to play (based upon connection time, not game played); it usually had between 10 and 60 players online simultaneously (Bartle, 1990). Other ASCII games around this time were MegaWars I & MegaWars III (1983), NetHack (1987 (O'Donnell, 2003)), Sniper! and The Spy (Bartle, 1990).
[[image:RPG_Islands_Of_Kesmai_024.jpg]]
Figure 24. Islands of Kesmai ASCII Text Role Playing Game (1982-84)
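The pseudo-graphical display described above is easy to illustrate. The following toy sketch renders a map loosely in the style of Islands of Kesmai, with walls as `[]`, fire as `**` and players as letters; the map layout and symbol table are invented for illustration and do not reproduce the actual game.

```python
# Toy rendering in the style of the ASCII pseudo-graphics described above.
# The dungeon layout and symbol choices are invented for illustration.

def render_ascii_map(grid):
    symbols = {"wall": "[]", "fire": "**", "floor": ". "}
    rows = []
    for row in grid:
        cells = []
        for cell in row:
            if len(cell) == 1 and cell.isalpha():
                cells.append(cell + " ")   # a player, shown by their letter
            else:
                cells.append(symbols[cell])
        rows.append("".join(cells))
    return "\n".join(rows)

dungeon = [
    ["wall", "wall",  "wall", "wall"],
    ["wall", "A",     "fire", "wall"],
    ["wall", "floor", "B",    "wall"],
    ["wall", "wall",  "wall", "wall"],
]
print(render_ascii_map(dungeon))
```

Each character cell carries one world tile, so an ordinary terminal becomes a crude planar map; the room’s text description would accompany this view rather than replace it.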
By the mid to late 1980s, home computing and online networking service providers opened the gates to a huge expansion of online virtual worlds. People paid for networking services by the hour, which gave these providers a huge incentive to get their subscribers hooked on virtual worlds. There was big money to be made, with 70% of the revenue of one provider (GEnie) in the early 1990s coming from games. By 1993 a study showed that 10% of the network traffic on the NSFNET backbone (a precursor to the internet, consisting mainly of government and university sites) belonged to MUDs (Bartle, 2003).
===2.8.5 Graphical Virtual Worlds===
The text based MUDs evolved into two different streams: the 3D First Person Shooters such as DOOM and Quake which adopted the room at a time view of the world for 3D rendering, and the 2D graphical online virtual worlds that appeared in the early 1990s. Early examples include NeverWinter Nights (1991-1997), Shadow of Yserbius (1992-1996) and Kingdom of Drakkar (1992-Current) (Figure 25).
[[image:Graphical_2D_Virtual_Worlds_025.jpg]]
Figure 25. Graphical 2D Virtual Worlds
Unlike Habitat and Worldsaway (discussed under Social Networking Virtual Worlds), which predated these games, appearing in the mid-1980s, the graphically enhanced text based games were fantasy role playing games -- basically MUDs with graphics. Although 2D, some of these games were displayed isometrically, at an angle, which gave the player the illusion of a three-dimensional view; for this reason these games are sometimes referred to as 2½D worlds (Bartle, 2003). These games used more sophisticated graphics (than the pseudo-graphical solutions) to improve the sense of presence experienced by the players, while retaining the text based narrative.
By the mid 1990s, with nearly 10 million internet hosts (Figure 26) (Slater III, 2002; Zakon, 2006) and price wars between providers, the internet opened its doors to millions, which saw hordes of inexpert computer users wanting to play games (Bartle, 2003). Game design had improved along with the graphical elements of virtual worlds, while graphics rendering capabilities on standard PCs and the emergence of common graphics file standards made development of virtual worlds possible, practical and more economical.
[[image:InternetParticipatingHosts_Count_1990_to_1998_026.jpg]]
Figure 26. The Internet No. of Participating Hosts Oct. ‘90 - Apr. ‘98
====2.8.5.1 MMORPGs====
By the mid 1990s the first online 3D virtual world appeared: Meridian 59 (1996-2000 & 2002-Current), although technically it used a pseudo-3D graphics engine (Axon, 2008; Bartle, 2003), providing a first person view where the player could view all angles in the environment (Figure 27). A massive 25,000 people signed up for the beta release (Axon, 2008), marking the beginnings of a new era of virtual worlds. The game nonetheless met with limited commercial success (Bartle, 2003; Friedl, 2002) and was shut down in 2000, but was resurrected in 2002, with the updated version online today at meridian59.neardeathstudios.com.
[[image:Meridian_59_First_3D_Online_Virtual_World_027.jpg ]]
Figure 27. Meridian 59 First 3D Online Virtual World (1996)
The turning point for online virtual worlds was Ultima Online (1997-Current). Ultima had already met with success with the Ultima computer game series. With its online launch it had 50,000 subscribers within 3 months and was the first online virtual world to crack the 100,000 threshold within 12 months of release (it did so in under 6 months) (Bartle, 2003; Woodcock, 2008). This added a new dimension to the term multiplayer, in what has now come to be known as a Massively Multiplayer Online Role Playing Game, or MMORPG. Subscriptions peaked at 250,000 in 2003, with 75,000 being reported in December 2007 (Woodcock, 2008).
Ultima Online, a 2½D graphical virtual world, has remained visually much the same (Figure 28), although the client that runs the game (the same concept as a web browser) had a makeover in 2007 with Kingdom Reborn (right). The game has received regular expansions to the world, which provide new challenges and adventures for its players. Back in 2001 the client was upgraded to 3D (Wikipedia Ultima, 2008), but Electronic Arts recently announced they will be de-supporting their 3D client, continuing only to support the 2D client going forward (Electronic Arts, 2007).
[[image:Ultima_Online_028.jpg]]
Figure 28. Ultima Online (1997-Current)
Other MMORPGs that started around the mid to late 1990s, and which can still be played online today, are Furcadia (1996, longest running), The Realm (1996, second longest, launched 15 days after Furcadia), Lineage (1998), EverQuest (1999) and Asheron's Call (1999).
In the more recent MMORPGs of today, not much has changed in game design from the original RPGs, but they have improved technically and do provide much better graphics for the player (Figure 29). They have also increased substantially in popularity, with the largest subscription based MMORPG, World of Warcraft, recently climbing to over 11 million players (Blizzard Entertainment Inc, 2008). These players do not all play in one virtual world, however; they are separated into different realms -- the same game but with different people. This contrasts with social virtual worlds like Second Life, where all the users share one virtual world. In the next section we discuss social online virtual worlds, which, although an MMORPG can exist within the world itself (as mentioned earlier), have a model of a virtual world very different from the dedicated MMORPGs.
[[image:MMOZRG_Eve_and_WOW_029.jpg]]
Figure 29. MMORPGs: Eve & World of Warcraft
====2.8.5.2 Social Virtual Worlds====
The first attempt at a commercial large scale multi-user game was made by George Lucas's Lucasfilm Games. Habitat, developed by Chip Morningstar and Randall Farmer, started development in 1985 (McLellan, 2004; Ray, 2008; Slator et al., 2007). Habitat was built to support thousands of simultaneous users, to run on the Commodore 64 home computer, and to be distributed via the network service provider Quantum Link (later known as AOL). Inspired by the science fiction novel 'True Names' (Vinge, 1981), the world contained a fully-fledged economy where citizens of the world could own a virtual business, build a house, fall in love, get married and even establish their own self governing laws (Morningstar & Farmer, 1990). Habitat, a 2D graphical world, looked similar to a cartoon (Figure 30, left), with the avatar (digital self) taking a third person view of the world. The storyline was based upon life rather than the fictional storylines of the MUDs, which placed greater emphasis on the social aspect of the world. Lucasfilm's Habitat was first released as a pilot in 1986, then later in 1988 as Club Caribe in North America, which reportedly sustained a population of 15,000 participants by 1990 (Morningstar & Farmer, 1990). In 1990 it was released in Japan as Fujitsu Habitat, and after extensive modifications Habitat was released again in 1995 as WorldsAway (Figure 30, right) (Damer, 2007), and again as Dreamscape in 2008.
[[image:VW_Habitat_and_Worldsaway_030.jpg]]
Figure 30. Habitat (86) First Graphical Virtual World Precursor to Worldsaway (95)
Habitat introduced some key concepts in virtual worlds:
*The term ‘Avatar’ into the general virtual world community;
*The idea of focussing on social networking as a key form of game play;
*An economy where people could trade both in world currency and artefacts; and
*Most importantly, the concept of living in a virtual world and leading an alternate life that was not dictated by the rules of a game (as with the dedicated MMORPG environments).
More recent social networking virtual worlds include Active Worlds (1995, 1997-current)[17], Second Life (2003-current) and There (2003-current) (Figure 31) – all of which have achieved a significant volume of educational interest as platforms for delivery of learning. The generalised nature of the social networking sites means that they tend to be more diverse in the range of facilities provided and the purposes to which they can be applied than the role playing game systems. They have generally provided participants with some form of content creation tools including the importing and/or exporting of non-virtual world artefacts. In the next section we discuss further the aspect of education in virtual worlds.
[[image:VW_SecondLife_and_There_031.jpg]]
Figure 31. Social Virtual Worlds: Second Life & There
===2.8.6 Simulation and Learning Systems===
====2.8.6.1 PLATO====
PLATO (Programmed Logic for Automated Teaching Operations) was a system designed for computer based education at the University of Illinois that started in the early 1960s. Originally developed as a classroom course system (Figure 32), improvements in mainframe technology had by 1972 allowed up to a thousand simultaneous online users, making it the first public online community, featuring electronic course delivery, online chat, bulletin boards, 512 x 512 resolution monitors and 1200 baud connection speeds (Unger, 1979; Woolley, 1994). With over 15,000 hours of instructional development, PLATO was possibly the largest ever investment in educational technology (Garson, 2000).
[[image:PLATO_Lab_Image032.jpg]]
Figure 32. University of Illinois PLATO Lab & Terminal (1961-2006)
By the mid 1970s games made their way onto the university mainframes with great success. Between 1978 and May 1985 about 20% of time spent on PLATO was game usage (Woolley, 1994). Games appeared such as Spacewar! (the 1969 game discussed earlier), Empire (1973, multi user space shooter based upon Star Trek), DND (1974, MUD[18] based upon the game Dungeons and Dragons), Mines of Moria (1974, MUD, 248 mazes based upon Tolkien's Lord of the Rings), SPASIM (1974, 32-user FPS space ship game)[19], Airfight (1974-75, a 3D flight simulator precursor to Microsoft's Flight Simulator), Oubliette (1977, first person 3D MUD) and Avatar (1977-79, first person 3D MUD) (Bartle, 2003; Lowood, 2008; Pellett; Wikipedia, 2008b; Woolley, 1994). See below (Figure 33) for some examples of MUDs held on PLATO. Many of the games on PLATO were recreated commercially as arcade or personal computer games (Goldberg, 2002; Mulligan, 2002; Woolley, 1994).
[[image:PLATO_Popular_MUD_Games_Developed_For_PLATO_033.jpg]]
Figure 33. PLATO: Some Popular MUD Games Developed for use on PLATO (1974-1979)
By 1985, after going commercial, PLATO had established a system spanning over 100 campuses worldwide (Garson, 2000). Known as the 'ultimate electronic information and communication utility', offering over 200,000 hours of courseware (Figure 34), with local dial-up at 300 or 1200 baud, access to both social and educational contacts was among the many advances of PLATO that made it an attractive system for the academic community at large (Small & Small, 1984). Over time, with improvements in technology and the cost of maintaining old technology, the final PLATO system was turned off in 2006 (Wikipedia, 2008b).
[[image:PLATO_Online_Course_Count_1984_034.jpg]]
Figure 34. PLATO: Over 200,000 Hours of Courseware by 1984
A web site has been established for the preservation of PLATO at cyber1.org (VCampus Corporation, 2008), which holds many of PLATO's games and courseware for public download.
====2.8.6.2 SIMNET====
Military virtual world simulators started with a project called SIMNET (SIMulator NETworking). SIMNET was a DARPA project that enabled the first large scale real-time networked battlefield simulator. Development and implementation occurred on several levels between 1983 and 1990 (Cosby, 1999; Miller & Thorpe, 1995).
Prior to SIMNET, military simulators consisted of immersive virtual reality training devices such as cockpit simulators. Cockpit simulators offered a replicated environment of the 'real thing': for example, an aeroplane cabin would be built in its entirety with motion and sensory feedback, using pre-programmed software to produce repetitive simulations that gave an individual mastery skills such as low-to-ground dog-fighting or missile avoidance (Miller & Thorpe, 1995). SIMNET provided a cheaper alternative to the cockpit simulators for certain types of training, and further offered 'collective skills', which Miller and Thorpe (1995) define to be cohesive team operations skills, as distinguished from the individual mastery skills taught in cockpit simulators.
SIMNET, a multiuser virtual world (Figure 35), consisted of real battle grounds with manned vehicles (tanks and helicopters), command posts, semi-automated forces where a single operator could control many vehicles in the simulation, and the ability to record simulations from any view point (known as the flying carpet) so that they could be replayed, statistically analysed and reported upon. At the conclusion of the program there were 250 simulators operating in nine locations (4 of which were in Europe), providing real-time battle engagements that were directly under the control of the participants (Lenoir, 2003; Miller & Thorpe, 1995).
[[image:SIMNET_Battlefield_Simulator_035.jpg]]
Figure 35. SIMNET: Battlefield Simulator at Fort Knox USA (1983-1990)
SIMNET had a substantial impact on military training after being recognised as the key success factor in winning the 3 day 'Battle of 73 Easting' in the Gulf War (1991), which led to several projects based upon the SIMNET technology (Figure 36) (Foley & Gifford, 2002), with the USA government commissioning $2,549 million in 1997 for modelling and simulation projects (Lenoir, 2003).
[[image:US_Military_Networked_Simlator_Projects_1938_To_2001_036.jpg]]
Figure 36. Timeline of US Military Network Modelling and Simulator Projects (1983-2001)
In 1997 a project named Synthetic Theater of War (SToW) commenced, a program to construct an environment combining various simulators into one large-scale distributed battle simulator capable of involving thousands of participants (Budge, Strini, Dehncke, & Hunt, 1998; Tiernan, 1996). This project has since become Joint Semi-Automated Forces (JSAF) (Hardy et al., 2001), which now enables more than 100,000 simultaneous simulations at a time (US Joint Forces Command, 2008). The Australian military has also adopted the JSAF platform to build its own Course Of Action Simulation (COA-Sim) for joint military operations training, exercises and planning (Carless, 2006; Gabrisch & Burgess, 2005).
====2.8.6.3 Military Use of Commercial Games Engines & The America’s Army====
In 1996, General Krulak of the US Marines tasked the Marine Combat Development Command to explore and approve the use of commercial games engines for military training purposes. One outcome of this effort was the collaboratively developed Marine Doom, based on id Software's shareware Doom engine and Doom level editor. The simulation could be configured for special missions (such as hostage rescue) immediately prior to engagement and used to rehearse the planned mission (Lenoir, 2003).
In July of 2002 the US Military released a milestone in multi-user training game simulators in the form of America’s Army: Operations (Lenoir, 2003; Zyda, 2005). Based on Epic Games ‘Unreal’ games engine, the game created a virtual world that reproduced aspects of a career in the US Army, including ‘boot-camp’ commencement and weapons and tactical training through to various operations scenarios. Although originally developed and released as a recruitment tool, the game was also claimed to be utilised to improve training outcomes by army instructors at Fort Benning (Zyda, 2005).
Now, with 26 subsequent releases (as of 2008) and available for the PC, cell phone and Xbox, the game has more than 9 million registered users exploring entry level to advanced training, and operations in small units (Figure 37). Beyond a focus on realism that extends to accurate tree placement in training courses at the simulated training camps, the game adds a further dimension of presence for participants through the active involvement of current and former real-world soldiers as players in the game (designated with a star icon in player profiles), interacting with non-military participants (Department of the Army, 2008).
[[image:Americas_Army_037.jpg]]
Figure 37. America's Army (2002)
From a training perspective, anecdotal evidence from army trainers regarding the game is that sessions in training scenarios such as the firing range or obstacle courses improve subsequent results in the real-life versions of these activities (Zyda, 2005). The US Army, possibly one of the largest investors in virtual world game technology, recently announced plans to spend $50 million USD over the next 5 years to create 70 gaming systems in 53 locations around the world for combat training (Robson, 2008).
==2.9 Virtual Worlds for Education==
===2.9.1 Architecture Considerations===
====2.9.1.1 Introduction====
To appreciate properly the discussion of the literature examining educational directions in virtual worlds, the researcher provides a brief overview of the key architectural differences to assist the reader. This material is based on the researcher's examination of a variety of game environments and virtual worlds, and discussions with experienced and knowledgeable users of these environments, rather than being sourced from the work of other authors. As such the discussion is interpretive rather than authoritative.
Some of these environments have existed for only a few years, and have not yet enjoyed a comparative analysis undertaken by the academic community. As such, this discussion might not normally reside in the literature review, but it is felt that the placement of this discussion in this sub-section will assist the reader in better appreciating the issues explored in the literature discussion throughout the remainder of the section.
====2.9.1.2 Considerations of Operational Design====
While all of today’s major virtual worlds include capabilities for user interaction, sharing of the environment, persistence, avatars, business rules, streamed audio and text, there are substantial differences in the technologies used to deliver the virtual experience. While some of these differences may create only marginal differences in the world experience of the casual user, from the perspective of the educator and content creator the differences are substantial.
The major offerings can be viewed under the following groups (note: in each category the researcher has selected only a few example worlds, in most cases other options also exist):
#Proprietary closed engine (e.g. World of Warcraft, Everquest)
#Client resident closed content and world model with open engine (e.g. shareware Doom)
#Streamed (or semi streamed) closed content and world model with closed engine (Entropia Universe)
#Open client resident content and world model with closed engine (Flight Simulator X, America’s Army, Unreal games, Quake, Doom)
#Open streamed content and world model (HiPiHi, TruePlay, Active Worlds)
#Open streamed content and world model with out-of-world interfaces (Second Life V1, VastPark)
#Open streamed content and world model with out-of-world interfaces and open client (Second Life V1.2)
#Open streamed content and world model with out-of-world interfaces, open client and open server (DeepSim)
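The eight groups above can be viewed as points along a few independent architectural dimensions. The following Python sketch encodes those dimensions as a small data model; the attribute names, the helper function and the two example classifications are the researcher's reading of the list, not an official classification scheme.

```python
# Sketch: the architectural taxonomy as a data model (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class WorldArchitecture:
    name: str
    open_content: bool             # can users create/modify world content?
    streamed_content: bool         # streamed from server vs. client resident
    open_engine: bool              # can the client/server engine be modified?
    out_of_world_interfaces: bool  # can web/audio/video be streamed in-world?

def suits_dynamic_courseware(w: WorldArchitecture) -> bool:
    """Per the discussion that follows, dynamic course delivery favours
    open, streamed content with out-of-world interfaces."""
    return w.open_content and w.streamed_content and w.out_of_world_interfaces

# Example classifications drawn from the groups above.
wow = WorldArchitecture("World of Warcraft", False, False, False, False)
sl = WorldArchitecture("Second Life V1", True, True, False, True)

print(suits_dynamic_courseware(wow), suits_dynamic_courseware(sl))
```

Treating the groups as combinations of boolean dimensions makes it easier to see why, for example, group 1 and group 8 sit at opposite ends of the spectrum of educator control.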
'''Architectural Components and Implications in Education'''
Below are some of the architectural components and their implications for the structure of a virtual education environment.
{| border="1"
|'''Architectural Components'''
|'''Implications in Education'''
|-
|Closed Proprietary System
|A closed proprietary system cannot generally be altered. These systems are generally not appropriate for education purposes unless the existing virtual world itself is built for the purpose of the training (such as a purpose built simulator). Closed systems can be used in education for group interaction and discussion, though not for lectures or anything requiring more than text or audio (assuming the system supports group audio communications).
|-
|Closed or Open Environment
|Whether content and world model is closed or open determines whether the textures, objects and artefacts of the world can be modified or created by users. This ability is essential if the world is to be utilised in education as anything more than a 3D discussion forum.
|-
|World Content
|Whether the content and world model is client resident or streamed affects the complexity of distributing course content, and the dynamics available in delivery. If the content is streamed, it can be changed in real time, but will usually require a high speed internet connection. Systems supporting streamed content generally also include the tools for developing some, if not all, of the streamable content. If the content is client resident, client connection speeds can generally be slower, but the content must be centrally published, distributed to client systems and installed locally prior to use. It cannot be changed in real time, and content production will not generally be supported directly in the virtual world tool set, often requiring advanced 3D modelling skills in dedicated 3D modelling environments.
|-
|World Interfaces
|The existence of out-of-world interfaces determines whether content from other sources, such as internet web pages, audio or video, can be streamed into the world and integrated with the world content and model. Systems providing this capability with streamable open content offer the greatest potential for inexpensive production of course material and publication and distribution of that material to students.
|-
|Client / Server Engine
|Whether the client or server engine is open or closed determines whether the hosting software itself can be modified. Generally this should not be necessary for education if the capabilities of the engines driving the world are otherwise sufficient. Where the content and world are otherwise closed but the engines are open, the existing content and world could be replaced by interfacing the games engine to a new world with new content.
|}
====2.9.1.3 Options for Content Modification====
The ability to modify the content of a virtual world is essential if the educator is to deliver course content in-world beyond that of an interactive discussion or monologue.
There are essentially three ways content can be modified by end-users in current virtual world environments (as opposed to systems providers or publishers) depending on the operational design of the environment:
#'''Level Editor''' (e.g. Doom, Half Life, America’s Army, Flight Simulator). Applicable to client resident worlds (i.e. systems where the world is stored on each client computer and distributed as a separately published download). A level editor is a content editing tool that allows an entire simulation to be created, including the world model, textures, characters, behaviours, etc. Level editors usually support importation of textures, animations, etc into the ‘level’ and then distribution of the entire level to a central server for redistribution to clients.
#'''Client Content Editing Tool''' with import/export (e.g. Second Life, VastPark). For environments where building and content creation is part of the ‘game play’, the client will have a content editor provided. These environments provide a simplified model for constructing shapes and objects (e.g. Second Life’s prims) and some means for importing complex objects such as organic shapes, textures, animations, sound, etc.
#'''Out-of-world interface''' (e.g. Second Life, Active Worlds). Potentially available in both client resident and server resident (streamed) worlds. An out-of-world interface allows some aspect of the user’s in-world experience to be drawn directly and live from an off-world location such as a web page, an internet resident database or a streaming audio server.
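The out-of-world interface idea can be illustrated with a minimal sketch: an in-world display object whose content is fetched live from an external source at display time, rather than packaged into the world. All names and the URL below are hypothetical, and the "off-world" service is a stand-in function rather than a real HTTP request.

```python
# Illustrative sketch of an out-of-world interface (hypothetical names).

class WebPanel:
    """An in-world display surface whose content comes from outside the world."""
    def __init__(self, url, fetcher):
        self.url = url
        self.fetcher = fetcher   # in a real system, e.g. an HTTP GET
        self.content = ""

    def refresh(self):
        # Content is drawn live at display time, not compiled into a level.
        self.content = self.fetcher(self.url)
        return self.content

# Stand-in for an off-world web server (invented URL and content).
def fake_fetch(url):
    return f"<h1>Lecture slides served from {url}</h1>"

panel = WebPanel("http://example.edu/slides/week1", fake_fetch)
print(panel.refresh())
```

The design point the sketch makes is that the world object holds only a reference to the external resource, so the educator can update the off-world material without republishing anything in-world.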
====2.9.1.4 Implications of differential content capabilities====
Virtual worlds are composed of components (objects) and functions that are managed by the virtual world (or game) engine and together comprise the capabilities of the world. Not all worlds have the same object management capabilities built into their engines. For the purposes of this discussion, the range of capabilities will be considered to be:
#'''Terrain''' – the land form or map of the virtual space. Essentially all virtual worlds offer some form of terrain map (although the terrain map may not be ground, but rather simply a 3D space).
#'''Avatars''' – Discussed extensively already, the avatar is the user’s projection into the virtual world and may or may not be customisable.
#'''Structural objects''' – Including buildings, furniture, ornaments, statues, models, etc. These are the virtual world equivalent of objects in the real world. They may or may not be animatable and scriptable. If they are scriptable they may be able to become autonomous agents, depending on the capabilities of the scripting engine.
#'''Textures''' – The visual covering of any object, terrain, or even avatars. The ability to display and upload/import textures is (generally) essential to the ability to display lecture materials like slides, etc (but note the existence of streams as a potential alternative).
#'''Animations''' – Avatars and non-player characters appear to walk, sit, stand, change facial expressions, etc because of the animation being played at the time. Without animations an object might move from one point to another, but it will not change its apparent state. The ability to modify animations is advantageous for creating a sense of realism, but is not generally essential for delivering a lecture or every type of simulation. All virtual worlds examined offered some range of built-in animations. Some allow animations to be imported or modified, or strung together to create more complex animations.
#'''Scripts''' – Scripting is the capability to programme the objects and behaviours in the world. In worlds modified by level editors, a programming language is generally provided as part of the level editing environment and ‘compiled into’ the level before it is published and distributed. In user modifiable worlds where scripting is supported (like Second Life), the scripting editor and compiler are provided as part of the client application and scripts are dynamically modifiable. In some architectures the scripts are stored in the objects and distributed with them (so if an object is moved between worlds/simulators the script and behaviours move with it), whereas in others the scripts are centrally stored and controlled for the world/level and not available outside of the world, level or simulator (as appropriate). Scripts govern the behaviour (movement, animations, actions, sounds, appearance, world responses, inter-object communication, etc) of objects. The capability and simplicity of the scripting engine’s language design is critical to the options available to educators in building a simulation.
#'''Streams''' – Streams include any media that is streamable, such as audio, video, web-page content, etc. The availability of streams is an extension of (or possibly an alternative to) the ability to import textures. From an educational standpoint it represents the ability to deliver video or sound presentations, or to draw lecture materials directly from the internet. Depending on the world engine, stream content may be dynamically published (drawn down to the client as required, such as in Second Life) or packaged into the client resident world (such as in America’s Army).
#'''Non-player Characters''' (also called bots, AIs or MOBs – mobile objects) – These are essentially characters that look like avatars but are completely controlled and managed by the engine. They interact with players/avatars in a semi-intelligent manner. Their availability and capability vary significantly across worlds. In Half-Life and America’s Army, the AI capability is available within the engine and has considerable ‘intelligence’, and in some cases the ability to learn and modify its behaviour. In other worlds (such as Second Life) non-player characters are not directly supported by the virtual world engine at all. The existence of non-player characters can directly impact the type of learning simulation that an educator can build, as they can provide user feedback and a feeling of presence within the environment (if implemented to provide a realistic experience for the user).
#'''Text Communication''' - Text chat (including instant messages, group communication chat, etc) is the standard communication strategy in all worlds. It is always instant and dynamic (in that it does not have to be pre-packaged into the world). It is a functional capability rather than an object, and may or may not be logged or copied depending on the client capabilities.
#'''Multi-way Voice Communication''' – Most virtual worlds do not support voice directly, although this has been an increasingly offered function over the last twelve months. Multi-way voice communication enables a group of players to converse as if they were in a conference call, without the necessity to type all communication in text. It differs from streams in that every client can be a sound source to every other client, whereas streams are a one-way communication from a point source to many destination receivers. Clearly the availability of voice communication impacts both the type of student and the form of discussion that can be undertaken in a learning situation.
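The scripting capability described in the list above, in which a script is essentially a set of event handlers attached to a world object and the engine dispatches events to it, can be sketched as follows. The class, event names and example object are illustrative; no particular world's scripting language is being reproduced.

```python
# Minimal sketch of event-driven object scripting (illustrative names only).

class ScriptedObject:
    def __init__(self, name):
        self.name = name
        self.handlers = {}       # event name -> handler function

    def on(self, event, handler):
        """Attach a script behaviour to an event, e.g. 'touched'."""
        self.handlers[event] = handler

    def dispatch(self, event, *args):
        # The world engine calls this when an event involves the object.
        handler = self.handlers.get(event)
        return handler(self, *args) if handler else None

# A scripted object an educator might place in a simulation.
door = ScriptedObject("lecture_hall_door")
door.on("touched", lambda obj, who: f"{obj.name} opens for {who}")

print(door.dispatch("touched", "student_avatar"))
```

Whether such handlers travel with the object between simulators or remain bound to one world is exactly the architectural distinction drawn in the scripts item above.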
In selecting the platform for delivering an educational experience, the extent to which the educator requires any or all of these capabilities within a virtual world will probably influence the decision. Some of these capabilities have only recently become generally available, and others are still in only rudimentary forms. In the literature review that follows, the approaches and content adopted, and the outcomes achieved have necessarily been constrained by capabilities of the technology options available at the time and the architectural constraints of the virtual world used.
===2.9.2 Education Applications in Virtual Worlds===
====2.9.2.1 Introduction====
During the 1970’s, 1980’s and early 1990’s, perhaps the most significant multi-user online environment for education was the PLATO system. From the mid 1990’s onwards, the influence of this system waned as it was progressively superseded in user interface capabilities by the emerging 3D online games, social networking systems and custom built virtual worlds for the specific application of subject matter.
Today the use of public online virtual worlds is gaining popularity with educators, with a recent special purpose committee of educators (The New Media Consortium & EDUCAUSE, 2007) identifying that virtual worlds will have a significant impact on the future of teaching, learning and creative expression within higher education. In the next section we discuss some of the research findings on virtual worlds being used for educational purposes.
====2.9.2.2 Education Uses in Virtual Worlds====
Early work in education using text based MUDs showed that they offered support for constructive knowledge-building communities, offering affordances of coordinated presence with evidence of interactive learning and collaboration across time and space (Dickey, 2003).
The period from the late 1990s until today has been typified by educators experimenting with the potential for mass market games engines (and more recently virtual worlds) to be re-tasked as education environments (Annetta et al., 2006; Beedle & Wright, 2007; Gikas & Van Eck, 2004). In some cases, such as America’s Army, the ‘game’ environment was built with the specific goals of recruitment and training in mind (Zyda, 2005); in others, as with Microsoft’s Flight Simulator, a game evolved over time with the assistance of subject matter experts to create an accurate simulation tool for the game’s audience (Lenoir, 2003). In still other cases a games engine (the operating system of a game) has been adapted to create a purpose built learning tool: for example, educators and students at MIT utilised the Neverwinter Nights tools to create a historical game based on a battle in the Revolutionary War, and MIT’s Games-to-Teach Project produced playable prototypes of four games, including Biohazard, developed jointly by MIT and the Entertainment Technology Center at Carnegie Mellon University, which trained emergency workers to deal with a cataclysmic attack (King, 2003).
The early 3D virtual worlds, with their simplistic graphics bearing little resemblance to the real world, provided students with advantages over traditional learning methods whilst fostering collaboration in multiuser virtual worlds. An extensive study of virtual reality technology in education was performed by Youngblut (1998), who looked at 35 different research studies in education from 1993-1998 that varied in technology use, subject discipline and age group. Below is an example of VARI House and Virtual Physics, both of which were custom built (Figure 38): VARI House a single user virtual world and Virtual Physics a multiuser virtual world. Although the studies were mainly research based (as opposed to application in course work), the research showed, for both single and multi user environments, that virtual world technology in many cases surpassed traditional learning methods in areas such as subject matter understanding, memory retention, student collaboration and constructive learning methods. Some obvious disadvantages were technology constraints, cost, development effort and usability (Youngblut, 1998), which for the most part could be attributed to the infancy of the technology, the formative years of computer based learning and the limited general use of computers by students, which had yet to permeate society as a whole.
[[image:Education_In_Virtual_Worlds_in_1950_to_60_038.jpg]]
Figure 38. Education in Virtual World Mid 1990s
====2.9.2.3 Online Education Uses in Virtual Worlds====
As identified in the architecture considerations section, virtual worlds that are to be used in educational settings must enable content modification if learning is to consist of anything more advanced than an interactive conversation. For the purposes of this research, the researcher is choosing to focus on virtual worlds that support the dynamic delivery or streaming of content (and the building tools are provided as part of the environment), rather than those worlds where a separate level editor is required and a client resident world model must be installed on the client computer prior to use. The literature surveyed in this sub-section will therefore focus on the work done in two such environments – Active Worlds and Second Life.
=====2.9.2.3.1 Active Worlds=====
Online virtual worlds gave educators access to environments without the cost and complexity of developing their own custom software. One of the first online virtual worlds that made research and development in education feasible (given its architectural qualities) was Active Worlds (1995, 1997). Officially known as Active Worlds Universe because it consists of many worlds, Active Worlds provided educators with the opportunity to rent or buy their own world, with restricted access for invited guests, building tools and content management capabilities. Below is a screenshot of Active Worlds (Figure 39). As can be seen, the current client consists of four sections: left – communications and navigation options, right – integrated web browser, bottom – chat window and middle – 3D environment. This type of client is generally called a “browser” by the environment developers.
[[image:Active_Worlds_Universe_039.jpg]]
Figure 39. Early Online Social Virtual World: Active Worlds Universe
'''Active Worlds Research'''
During the late 1990s and early 2000s several educational institutions set up a presence in Active Worlds for various projects, from research to actively using Active Worlds as an online learning environment (see Smith, 1999 for a list of virtual learning projects, most of which were in Active Worlds). The early research into online virtual world based education using Active Worlds showed promise.
Dickey (1999, 2003, 2005) undertook research into the viability of using Active Worlds with geographically distant learners in both formal (a university business computing skills course) and informal courses (an Active Worlds building course). These studies showed that the 3D virtual world offered advantages in fostering constructive learning, student and teacher collaboration, visual representation of course context and content, and student engagement and participation. The disadvantages identified were essentially environment specific and included a lack of support for collaborative activities such as a whiteboard or shared interactive writing spaces, the chat tool’s word limit per posting, a single shared chat channel providing no separation of teacher and student discussion and no support for turn taking, and kinetic (animation) constraints such as the inability to raise a hand to attract the instructor’s attention.[20]
Dickey also identified a number of opportunities specifically enabled by a 3D environment. While some of the previously identified advantages (such as collaboration and student management and participation) might be duplicated in other forms of online education tools, the 3D modelling of the course itself (the visual representation of course context and course content) was an advantage specific to the 3D environment.
Course context modelling, as provided in Dickey’s research (1999), was a 3D representation that illustrated the structure of the course through individual buildings and plazas (Figure 40). Each building represented a topic in the subject, providing resources to aid learning and a meeting place where students could collaborate on group projects around that topic.
[[image:Visual_Course_Structure_in_Virtual_Buildings_040.jpg]]
Figure 40. Visual Representation of Course Structure by the use of Individual Buildings
Course content modelling as provided in Dickey’s research (1999) was a 3D representation that the student had to build in order to understand the concept of the subject material (Figure 41).
[[image:Visual_Represnetation_of_Course_Content_041.jpg]]
Figure 41. Visual Representation of Course Content
These alternative methods provide a good example of the power and adaptability of a 3D modelling environment applied to education. The course context gave the student a means of visualising the learning objectives and progression of the course: the student had to visit each building within a specific time frame and complete the contained content. The 3D modelling of course content offered the learner multiple viewpoints of the actual subject material, providing interactive learning that was believed to enhance the student’s understanding of the topic.
Clark & Maher (2006) looked at the role of place and identity in a 3D virtual learning environment using Active Worlds, analysing chat logs and the physical locality of avatars within group discussions. They found that a sense of place can be achieved in a 3D virtual learning environment, where identity and presence play a role in establishing the context of the learning place. The students formed a strong bond with their avatars and indicated that they felt a sense of presence, as measured by a series of subjective scales, within the virtual learning environment. Similarly, Dickey (2003) found that the 3D desktop virtual world provided qualities of presence similar to those of an immersive virtual reality world.
=====2.9.2.3.2 Second Life=====
Second Life (started 2003) consists of two worlds: the Second Life Teen Grid and the Second Life Adult Grid. The Teen Grid provides access to 13-17 year olds and educational instructors. Its functionality is the same as the Adult Grid’s, with the exception that all content has a PG rating. The Adult Grid is where the universities and colleges for students over 17 years of age are found. Other educational content in Second Life includes an extensive list of museums, galleries, simulations, business product development, role-playing spaces, employee and public business training courses, etc. As in Active Worlds, educators are able to rent or purchase land, allow open or closed access to the public, and build and develop on their land.
One major difference between Second Life and Active Worlds is that the former has an in-world economy with built-in functional support enabling the trading of virtual products and services using ‘Linden dollars’, backed by content copyright and duplication controls and augmented by a provider-managed exchange where real dollars can be exchanged for Linden dollars (and vice versa). This fundamental difference provides an incentive for content developers and service providers to actively support and expand the world with content, and therefore gives access to a large body of pre-constructed content and to a world-wide industry of content developers at extremely reasonable rates (compared to real-world 3D developers providing similar content outside of Second Life) (Joseph, 2007). The building and scripting tools are easier to master than traditional 3D rendering tools, are delivered free as part of every user’s world browser, and are sufficiently powerful that just about anything imaginable can be constructed (Schmidt et al., 2007).
Second Life’s standard interface, as seen below (Figure 42), offers more extensive functionality than that of Active Worlds. Some of the more common features seen in the figure are built-in world, content and people search facilities (left), a mini map (top right), an inventory library (bottom right), a local chat channel (with a standard range of 15, 30 or 60 meters from the text source), group chat channels (world-wide range, up to 25 groups per avatar), customisable streaming media players (for sound, video and web page content), an in-world or external HTML web browser (linking both in-world and outside-world content), private or public multi-player voice facilities, etc.
[[image:Second_Life_042.jpg]]
Figure 42. Online Virtual Social World Second Life (Circa 2008)
Another difference from Active Worlds is avatar control: Second Life avatars can use a roaming camera (whereas Active Worlds only provides first- and third-person views). The roaming camera enables users to control their view of the world with the mouse without needing to move their avatar. Once mastered, this functionality offers a powerful, easy and fast way to navigate objects (the camera can even pass through objects such as walls).
Due to these and other technological advances over Active Worlds, Second Life has developed a large education community over the last couple of years. For instance, SIMTeach (June, 2008), the Second Life Education Wiki, identifies over 200 educational institutions in Second Life, of which 138 listed are universities, colleges and schools. The Second Life Education (SLED) list server has over 5,000 world-wide members. The New Media Consortium (NMC, a group that hosts education islands) has over 100 universities on their land, and the Second Life Teen Grid has over 90 educational projects (Linden & Linden, 2008). Figure 44 p88 provides some examples of the training and learning activities in Second Life, representing a mixture of educational institutions, corporations and government agencies.
The content of Second Life is entirely user created. The availability of content developers, and of potential students already experienced in using the environment, depends on the take-up and expected future growth of the environment. Figure 43 presents the user base and economic statistics for the first quarter of 2008 as provided by Second Life’s proprietor Linden Lab (2008a). As of November 2008 Second Life had 16,318,063 registered users (1,344,215 of whom had logged on within the previous 60 days). A break-down of Second Life’s demographics as at November 2008 can be seen in Appendix I: Second Life Demographics.
[[image:Second_Life_User_and_Econ_Stats_Q12008_043.jpg]]
Figure 43. Second Life User & Economic Statistics for Q1 2008
[[image:Second_Life_Training_and_Learning_044.jpg]]
Figure 44. Second Life Training and Learning
'''Second Life Research'''
Educators are using Second Life for both formal and informal purposes. Some educational institutions have set up entire virtual campuses modelling their real-world campus, while others are building purpose-built virtual education structures. The relative youth of Second Life means that there is considerable variation in the maturity of educational efforts across the virtual world, and few peer-reviewed studies have yet been published. Many educators are still experimenting, while others, with the active support of their institutions, are using the environment for partial or entire subject delivery. Here we will look at some of the research undertaken in Second Life at the time of writing, most of it published since 2006; given the technological advances that have occurred in Second Life from 2007 onwards, we will concentrate on this later research.
Martinez, Martinez, & Warkentin (2007) researched the delivery of a lecture to geographically distributed third-year university students in Second Life. The lecture was delivered in a conventional lecture-room setting using a traditional chalk-and-talk style with lecture slides and the chat channel for instruction; no voice was used.[21] In the lecturer’s experience, using text-only delivery the time to deliver the content was double that of a face-to-face lecture, which the students confirmed in their survey. In the survey some students admitted they felt distracted by the novelty of the environment and were overly concerned with ancillary aspects such as their avatar’s appearance. Others admitted to being distracted by concurrent activities external to the environment occurring on their PCs, such as multitasking with other programs (e.g. MSN messaging) during the lecture. Others experienced technical difficulties and could not get back into the lecture after being accidentally logged out. In spite of these shortcomings, when asked to rate the lecture experience on a scale of 1-10 the average student response was 8.5. The study noted that some of these distractions and difficulties could be put down to first-time user experience. The lecturer also felt that the lecture could easily have been pre-recorded and delivered online, and that active learning techniques could have improved its delivery in Second Life (Arreguin, 2007).
Joseph (2007) notes that a consequence of using Second Life (or virtual worlds in general) for teaching is that sessions generally take longer than traditional methods, but believes that this is not an issue per se, as time to complete the task should come second to the effectiveness of the experience. Joseph also believes (from experience) that the avatar projected on the screen, and the sense of presence experienced by the participants, is more effective for learning than a live video feed.
Kofi, Svihla, Gawel, and Bransford (2007) researched the potential for virtual worlds to provide efficiency and innovation in adaptive learning. In their study, students were presented with a maze to navigate that simulated the problem-solving skills required for learning in a real-life scenario. Kofi et al. found that Second Life provided enough functionality and support for learners to apply new concepts to the problems presented, as long as they were given key indicators of possible outcomes. They also found that 3D learning environments required the same amount of instruction as equivalent real-world learning, and that simply building a model did not, of itself, provide sufficient information for the learner to learn in this instance; learners also needed to be continuously prompted and guided in order to reach the end learning objective.
In another example, Second Life was used to support the learning objectives of 13 third-year college students, aged between 19 and 26 and geographically distributed around the world, on a course in Digital Entertainment and Society (Gonzalez, 2007). Both lectures and assignment work were conducted within Second Life. The lectures consisted of a video presentation and an in-world field excursion. Assignment work required some in-world building and an exercise using Linden dollars, with a student presentation on completion. No students had used the environment before, but an acclimation exercise was sufficient to give them the skills required to undertake course work in Second Life. At the end of the course students were given a survey, with results presented below (Table 1).
{|
! Elements that Second Life Added:
! Agree
! Disagree
|-
| Enjoyment
| 100%
| 0%
|-
| Technical difficulties
| 100%
| 0%
|-
| Interaction with tutor
| 62%
| 38%
|-
| Interaction with classmates
| 62%
| 38%
|}
Table 1. Survey Results for Digital Entertainment and Society Second Life Subject
The technical difficulties result was explained largely by the network latency experienced by the students. Each student used their own computer with an average connection speed of 512 Kbs – not especially fast, nor ideal for use with Second Life. No mention was made in the study of whether the student computers met Linden Lab’s system requirements (2008c). As Second Life is a streaming virtual world, where content is downloaded on demand from Linden Lab’s servers in the USA to the local computer, connection speed can be an important factor in technical performance. Other major technical factors include the computer’s graphics card and the amount of onboard RAM. The Second Life browser does offer many settings for optimising performance on low-end machines, but if the minimum system requirements are not met the user’s experience of the virtual world will be significantly degraded by dropouts, lag and poor graphics.
==2.10 Learning & Instructional Design Theory==
===2.10.1 Introduction===
Learning in any world (real or virtual) requires well thought out instructional design. Learning is a process of the mind regardless of whether your body is present in the virtual world or real world. Instructional components for learning regardless of medium include (DONCIO et al., 2008):
*Clear, concise, and appropriately structured content
*Activities that draw relationships between concepts, challenge learners' thinking and understanding, and reinforce information
*Evaluative measures that determine if knowledge assimilation and retention have occurred
In this research the focus was on the use of new technology in education as opposed to education applied to new technology; therefore this section only provides an overview of applicable theory required to assist in the instructional design, delivery and assessment of the subject material presented to the research participants in this study. Gagne’s Nine Events of Instruction and Bloom’s Taxonomy of the Cognitive Domain were selected to assist in this task.
===2.10.2 Behaviourism and Cognitivism===
There are two main traditional schools of thought in learning theory. These are Behaviourism and Cognitivism (DONCIO et al., 2008; Lewis, 2001).
*Behaviourists (Objectivists) view the mind as a ‘black box’: no account is taken of personal or past experience. The mind starts with a clean slate, and a stimulus produces a response. Learning is deemed to have occurred only when a change in behaviour is observed. Learning is discrete, measurable and quantifiable.
*Cognitivists (Constructivists) view the mind as a continuously evolving organism. Knowledge is constructed from past material and personal experience. Learning is unique to the individual, who relates new information to previously learnt knowledge.
The University of Washington, Seattle (2008) compares the two approaches and provides a discussion of each in terms of philosophy (Table 2, p93), learning outcomes, instructor role, student role, activities and assessment. The philosophies of these approaches are opposed and therefore produce different methods of instruction (Lewis, 2001; Nash, 2007).
Behaviourism was the first to be defined in learning theory while cognitivism developed later as a response to perceived limitations of behaviourism in understanding and adapting to new learning concepts (Lewis, 2001; Mergel, 1998).
While some constructivists argue the merits of constructivism as a distinct theory, viewing knowledge as something constructed by the learner through the process of learning, other writers view constructivist ideas as an evolution of the fundamental cognitivist school. The latter position is illustrated in Table 2, where the behaviourist and constructivist-enhanced-cognitivist philosophies are compared using a consistent comparative organisation of views (see Dabbagh, 2006; Mergel, 1998).
Constructivists draw a distinction between cognitive constructivism and social constructivism, in which the former emphasises exploration and discovery on the part of each learner while the latter emphasises the collaborative efforts of groups of learners as sources of learning; for our purposes, however, it is sufficient to distinguish the behaviourist and cognitivist approaches. Over the years many practical teaching methods have evolved with concepts that encompass both approaches.
[[image:TABLE_Instructional_Design_Behaviorism_Cognitivism_045.jpg]]
Table 2. Instructional Design: Comparative Summary Behaviorism and Cognitivism
(University of Washington, 2008)
===2.10.3 Gagne’s Nine Events of Instruction===
Gagne’s theory of instruction can be divided into three areas (Corry, 1996): a taxonomy of learning outcomes, conditions of learning and levels of instruction. There are considerable similarities between Gagne’s ‘taxonomy of learning outcomes’ and Bloom’s ‘taxonomy of the cognitive domain’, so these are discussed together in the next section of this thesis.
Gagne breaks ‘conditions of learning’ down into internal and external learning conditions. Internal conditions concern the previously learned capabilities of the learner, while external conditions are the instruction or stimuli presented to the learner. While Gagne’s theory takes an essentially cognitivist approach, it recognises both behaviourist and cognitivist influences on instructional learning. For our purposes, it is the ‘levels of instruction’ outlined by Gagne that are of particular interest, and these we explore in this section.
Gagne (1985) presents a systematic approach to instructional design termed the ‘nine events of instruction’, as presented below in Figure 45 (Clarke, 2000)[22]. These nine events have been specifically designed for the teaching of intellectual skills.
[[image:GAGNE_Nine_Steps_To_Instruction_046.gif]]
Figure 45. Robert Gagne's Nine Steps of Instruction (Clarke, 2000)
The nine instructional events with their corresponding cognitive processes can be described as follows (Clarke, 2000; Kearsley, 2008):
#'''Gaining Attention (Reception)''': Grab the attention of the participant by presenting a teaser in order to get the participant interested and motivate them to learn more about the topic that will be presented. This could be done using methods such as a movie, phrase, storytelling or a demonstration.
#'''Informing Learners of the Objective (Expectancy)''': Provide the participant with the objectives in order to assist them in organising their thoughts ready to receive the new information that will be presented.
#'''Stimulating Recall of Prior Learning (Retrieval)''': Provide the participant with any background that may assist them in building upon the new knowledge they are about to receive. This helps to establish a framework in their mind based upon previous knowledge.
#'''Presenting the Stimulus (Selective Perception)''': This is where the new learning begins. Information should be chunked and organised meaningfully in order to avoid memory overload and assist the learning of new knowledge: chunk the information into a sequence of learning events, breaking it down into constituent parts with a structure and purpose that span different areas of comprehension. The revised Bloom’s taxonomy (discussed in the next section) can be used to assist in forming the presented information.
#'''Providing Learning Guidance (Semantic Encoding)''': Assist the participant to reach a deeper level of understanding of the new knowledge so that the information can be encoded into long-term memory. During instruction, provide examples, non-examples, analogies, graphical representations, etc. to assist the semantic encoding process.
#'''Eliciting Performance (Responding)''': Letting the learner do something with the new knowledge or test their new knowledge to confirm they have a correct understanding of the information.
#'''Providing Feedback (Reinforcement)''': Analyse the learner’s understanding of the subject matter presented and provide immediate feedback and reinforcement to correct any misunderstood knowledge (e.g. questions and answers).
#'''Assessing Performance (Retrieval)''': Test that the new knowledge is understood and the learning objectives have been met. This could be in the form of a test or a demonstration by the learner to assess if they have mastered the information.
#'''Enhancing Retention and Transfer (Generalisation)''': Generalise the information so that the knowledge transfer can occur, inform them of similar problems or a similar situation so that the acquired knowledge can be put into a new context.
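The nine events above form an ordered design template rather than an algorithm, but they lend themselves to a simple checklist representation. The Python sketch below is purely illustrative (the function name and activity strings are hypothetical, not part of Gagne’s work or this study’s materials): it pairs each event, in order, with an instructor’s planned activity and flags any event left unplanned.

```python
# Gagne's nine instructional events with their corresponding
# cognitive processes, in their fixed order.
GAGNE_EVENTS = [
    ("Gaining Attention", "Reception"),
    ("Informing Learners of the Objective", "Expectancy"),
    ("Stimulating Recall of Prior Learning", "Retrieval"),
    ("Presenting the Stimulus", "Selective Perception"),
    ("Providing Learning Guidance", "Semantic Encoding"),
    ("Eliciting Performance", "Responding"),
    ("Providing Feedback", "Reinforcement"),
    ("Assessing Performance", "Retrieval"),
    ("Enhancing Retention and Transfer", "Generalisation"),
]

def draft_lesson_plan(activities):
    """Pair each of the nine events, in order, with a planned activity.

    `activities` maps an event name to the instructor's chosen activity;
    events with no planned activity are flagged as TODO.
    """
    plan = []
    for i, (event, process) in enumerate(GAGNE_EVENTS, start=1):
        activity = activities.get(event, "TODO: plan an activity")
        plan.append(f"{i}. {event} ({process}): {activity}")
    return plan

plan = draft_lesson_plan({"Gaining Attention": "Show a short teaser video"})
print(plan[0])  # 1. Gaining Attention (Reception): Show a short teaser video
```

The ordering is the point of the structure: Gagne’s events are meant to be worked through in sequence, so a list of pairs, rather than an unordered mapping, is the natural representation.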
===2.10.4 Bloom’s Taxonomy===
The Taxonomy of Educational Objectives, also known as Bloom’s Taxonomy, is widely used[23] to assist in the preparation of learning objectives and the assessment of learning outcomes. A student’s learning outcomes are the results of their learning experience on a course and should be a direct consequence of the course objectives (Monash University, 2008). Hence applying Bloom’s taxonomy of educational objectives when forming course objectives provides a measure by which to assess students’ learning outcomes.
The original Bloom’s Taxonomy was developed by an American committee of educational psychologists chaired by Benjamin Bloom, which over a period of time presented three domains: cognitive (knowledge) (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956), affective (attitudes) (Krathwohl, Bloom, & Masia, 1964), and psychomotor (motor skills) (Dave, 1967, 1970; Harrow, 1972; Simpson, 1972). In forming educational course objectives, Bloom’s cognitive domain is applied to assess the knowledge and intellectual components of a curriculum.
Nearly 47 years later, Bloom’s cognitive domain was revised (Anderson et al., 2001; Krathwohl, 2002) by a committee of eight, two of whom had worked on the originally published work (committee member Krathwohl and editor Anderson). The revision was made as a result of many years of application and research and has since been accepted by many educators as a replacement for Bloom’s original work. The changes made are as follows (Figure 46) (Anderson Research Group, n.d.; Krathwohl, 2002):
*The names of six major categories were changed from noun to verb forms.
*Comprehension and synthesis were retitled to understand and create respectively, in order to better reflect the nature of the thinking defined in each category.
*Create was moved to the highest, that is, most complex, category.
*The revised Taxonomy is not a cumulative hierarchy.
*A taxon of remember was devised to replace that of Knowledge, and
*A two dimensional Cognitive Taxonomy Table was formed by sub dividing the original Knowledge taxon.
[[image:BLOOM_Changes_in_Cognitive_Domain_047.jpg]]
Figure 46. Changes in Bloom’s Cognitive Domain
====2.10.4.1 Revised Bloom’s Taxonomy of the Cognitive Domain====
A substantive difference is in the handling of “Knowledge”. As shown in Table 3, the revised cognitive domain was extended to include a knowledge dimension, so that it now consists of a two-dimensional table crossing the Knowledge Dimension with the Cognitive Process Dimension. This table provides the instructor with a tool for classifying learning objectives, where learning objectives are specific statements of the discrete learning outcomes or intended results to be achieved by the end of instruction. The instructor defines the learning objectives and classifies each into the appropriate cell of the 2D matrix of cognitive and knowledge dimensions; this assists in instructional design and assessment, and provides a tool for balancing the learning objectives across methods of instructional design.
[[image:BLOOM_TABLE_Revised_Taxonomy_048.jpg]]
Table 3. Revised Bloom’s Taxonomy Table
(Anderson et al., 2001, p. 28)
'''The Cognitive Process Dimension'''
The Cognitive Process Dimension comprises the column values of Table 3 above. This dimension describes the level of learning and comprehension required to complete a task, with each level differing in complexity on a scale from 1 to 6. The cognitive dimensions are defined as 1. Remembering, 2. Understanding, 3. Applying, 4. Analysing, 5. Evaluating and 6. Creating, each of which contains further sub-processes, with 19 specific cognitive processes in total. Table 4 provides an overview of each cognitive process with its defining verbs. Verbs are used to classify an objective. For example, an objective ‘to recall the six states of Australia’ would be classified under remembering: recall is the verb that places the learning objective into level “1. Remember” of the cognitive dimension.
[[image:Cognitive_Process_Dimension_Processes_049.jpg]]
Table 4. The Six Categories of The Cognitive Process Dimension And Related Cognitive Processes (Anderson et al., 2001, p. 31)
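The verb-based classification just described can be sketched as a small lookup. The following Python fragment is illustrative only: the verb lists are abridged from the 19 cognitive processes of Anderson et al. (2001), and the function name and example objective are hypothetical, not a tool from the literature.

```python
# Abridged verb lists for the six categories of the Cognitive
# Process Dimension (Anderson et al., 2001), keyed by level 1-6.
COGNITIVE_LEVELS = {
    1: ("Remember", {"recall", "recognise", "list", "identify"}),
    2: ("Understand", {"explain", "summarise", "classify", "compare"}),
    3: ("Apply", {"execute", "implement", "use"}),
    4: ("Analyse", {"differentiate", "organise", "attribute"}),
    5: ("Evaluate", {"check", "critique", "judge"}),
    6: ("Create", {"generate", "plan", "produce", "design"}),
}

def classify_objective(objective):
    """Return (level, name) for the category whose verb list matches
    the objective's leading verb, or None if no verb matches."""
    words = objective.lower().split()
    # Objectives are often phrased as "to <verb> ...".
    verb = words[1] if words[0] == "to" else words[0]
    for level, (name, verbs) in COGNITIVE_LEVELS.items():
        if verb in verbs:
            return level, name
    return None

print(classify_objective("to recall the six states of Australia"))
# (1, 'Remember')
```

As in the worked example above, the verb ‘recall’ places the objective in level 1 (Remember); an objective beginning ‘to design …’ would fall in level 6 (Create).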
Bloom’s original cognitive taxonomy was based solely upon the values contained in the cognitive dimension (with the exception of the differences previously discussed). Bloom believed that the cognitive process was a cumulative learning process towards a learning outcome: for example, in order to ‘analyse’ subject matter the student would, under the old taxonomy of the cognitive domain, need to have mastered knowledge/remember, comprehension/understand and application/apply, whereas the revised taxonomy does not assume this cumulative hierarchy. The early cognitive domain took a behaviourist approach to instruction, whereas the revised version holds that learning can take place at any level without mastering previous levels. This is a fundamental shift in the philosophical grounding of Bloom’s taxonomy of the cognitive domain, moving it away from the behaviourist approach to learning.
'''The Knowledge Dimension'''
The Knowledge Dimension is an additional dimension added to the taxonomy by the subdivision (and modification) of Bloom’s original knowledge category; it can be seen as the row values in Table 3 above. The knowledge dimension defines how knowledge is constructed, which can be Factual, Conceptual, Procedural or Metacognitive. Table 5 provides an overview of these knowledge types and their meanings.
The knowledge dimension separates the noun (or subject matter) from the stated learning objective. For example, continuing with the objective discussed above, ‘to recall '''the six states of Australia'''’ would be factual knowledge, where the bolded words make up the noun construct. This noun is factual because the learner either knows the states or does not; knowing them is the basic element required in order to solve the problem.
[[image:Major_Types_and_Subtypes_Knowledge_Dimension_050.jpg]]
Table 5. The Major Types And Subtypes Of Knowledge Dimension (Anderson et al., 2001, p. 31)
The knowledge dimension has been added because it provides further insight into the type of knowledge a student is required to master. In the original work knowledge was simply the first level in a cumulative hierarchy; the revised knowledge dimension gives the instructor a greater understanding by defining knowledge as a separate dimension. For example, for the objective ‘to recall the six states of Australia’ the student needs to Remember Factual Knowledge.
Like the cognitive dimension, the knowledge dimension is not a cumulative hierarchy: learning can start anywhere within it.
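Because neither dimension is cumulative, each objective is simply located in one cell of the two-dimensional table. Below is a minimal sketch, assuming hand-classified objectives (the objective strings and helper code are hypothetical), of how the Taxonomy Table can be used to tally coverage and check the balance of a course’s objectives:

```python
from collections import Counter

# Each objective is hand-classified as (cognitive process, knowledge type),
# as an instructor would do when filling in the Taxonomy Table.
objectives = {
    "to recall the six states of Australia": ("Remember", "Factual"),
    "to explain how state borders were drawn": ("Understand", "Conceptual"),
    "to plan a route visiting every state capital": ("Create", "Procedural"),
}

# cell -> number of objectives placed in that cell of the 2D table
table = Counter(objectives.values())
print(table[("Remember", "Factual")])  # 1

# A simple balance check: flag knowledge types with no objective at all.
knowledge_types = ["Factual", "Conceptual", "Procedural", "Metacognitive"]
covered = {k for _, k in objectives.values()}
missing = [k for k in knowledge_types if k not in covered]
print(missing)  # ['Metacognitive']
```

This mirrors the balancing use described above: once objectives are placed in cells, gaps along either dimension become immediately visible.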
'''Using the Revised Bloom’s Cognitive Domain to Assist in Instructional Design'''
To assist in formulating instructional design, Anderson et al. (2001) provide in their book, for the cognitive dimension, sample objectives, corresponding assessments and assessment formats (chapter 5), and, for the knowledge dimension, specific details, elements, generalisations, structures and models, etc. (chapter 4). This assists in the formulation of specific tasks and in defining the level of knowledge required of the student. It also helps ensure that objectives, and the testing of those objectives, span the required range of cognitive and/or knowledge categories, and that the student is fairly assessed in areas directly related to the objectives.
====2.10.4.2 Bloom’s Taxonomy of the Cognitive Domain Applied to a Digital Environment====
'''Bloom’s Digital Taxonomy of the Cognitive Domain'''
Churches (2008) has extended the (revised) Bloom’s cognitive domain for digital learning by taking the cognitive process dimension and including verbs for emerging technology. As can be seen below (Figure 47), the words highlighted in blue are the digital emerging technology verbs, categorised using the (revised) Bloom’s cognitive levels as the basis for interpreting complexity. For example, bookmarking (a remembering process) is simpler than programming (a creating process).
[[image:BLOOM_Revised_As_Digital_Taxonomy_051.jpg]]
Figure 47. Bloom's Digital Taxonomy
Churches further added within his classification system a rubric (scoring criteria) for these technologies, similar to the sub-classification system used in Bloom’s cognitive domain. For example, Table 6 displays the rubric for bookmarking, broken down from simplest to most complex.
[[image:BLOOM_Bookmarking_Rubric_For_Digital_Taxonomy_052.jpg]]
Table 6. Bookmarking Rubric for Bloom’s Digital Taxonomy
'''Bloom’s Taxonomy of the Cognitive Domain applied to Games'''
Wang & Tzeng (2007) proposed using the (revised) Bloom’s taxonomy of the cognitive domain as a method for understanding the application of knowledge in digital games. They believed that players learn in various ways within computer games, and recognised how little work (if any) had been done in analysing such e-learning platforms in a structured taxonomic manner, or in structuring the implementation and understanding of the cognitive processes involved. They proposed using Bloom’s taxonomy of the cognitive domain as a method by which to assess cognitive processes in a computer game.
[[image:BLOOM_Taxonomy_For_Games_053.jpg]]
Figure 48. Bloom’s Taxonomy for Games
The research used a game called Food Force, a problem-solving and mission-oriented game. Figure 48 summarises the conclusion of their research. As can be seen in Figure 48, players exhibited both personal and social feedback across Bloom’s cognitive levels. They found that players experienced cognitive processes individually across all categories of Bloom’s cognitive model, and displayed social interaction for the higher-level Bloom’s categories of Analyse, Evaluate and Create.
==2.11 Summary==
The acceptance of the latest crop of virtual worlds, such as World of Warcraft, Second Life, Entropia Universe, There, Eve, America’s Army and others, by the internet-using public as an integral part of their lifestyle is possibly the most significant paradigm shift to occur in the last 10 years. Statistics on user volumes and retention rates show consumption numbers in the tens of millions of users, spread evenly across ages from youth to middle age, with an approximately even gender balance (at least in the social worlds) (KZERO Research, 2007; Woodcock, 2008; Yee, 2006). The growth rates of these worlds collectively have been, and are projected (by industry analysts) to continue, rising dramatically for the foreseeable future.
With the current convergence of disparate technologies represented by these systems, the general public now have affordable single platform multi-media collaborative environments with sufficient realism to create virtual immersive spaces where presence is achieved at a level sufficient for them to lead virtual existences and establish social networks that rival their real world existence.
The linking of these spaces with the affordable (often free) tools that enable the public to create new 3D spaces and content for them over the last eight years has resulted in a world-wide content developer base with substantial skills, and a highly competitive market for purchasers of those skills at often very low rates.
With the combined market pressures of minimising education delivery costs, improving education outcomes, and reaching as wide a market as possible, it is understandable that educators have shown a sustained interest over many years in the possibilities of virtual environments for education delivery. So, with the advent of the latest generation of creativity-focused social worlds like Second Life over the last few years, it is not surprising that uptake by universities and educators (numbering in the hundreds of institutions) has been as substantial as it is.
A brief retrospective of the work in simulators, virtual reality and 3D games, shows that the potential of these environments extends beyond the virtual ‘chalk-and-talk’ to enabling education delivery strategies for even campus based students that cannot economically be delivered using reality bound means.
With traditional real world learning environments there is an extensive body of tested knowledge that can provide clear guidance as to workable frameworks for the design of course work. The extent to which and how these methods can or should be applied to the virtual world learning space remains an open question.
</div>
[[Category:Featured Article]]
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 2: Virtual Worlds - Concepts, History, and Use in Education (Literature Review)=
==2.1 Introduction==
Gartner (2007) predicts that as many as 80% of active internet users will have a ‘Second Life’ in a virtual world by the end of 2011. Depending on your definition of ‘virtual world’ this may seem a little ambitious. Certainly, to the extent that virtual worlds are seen to include massively multi-user online environments supporting collaborative exchange of information in shared virtual space, the prediction might prove reasonably safe. To the extent that this definition is constrained to massively multi-player online games, the prediction may prove a little “braver”.
Today’s virtual worlds represent the convergence of multiple technology streams, with the latest examples of the genre representing the merger of internet, telecommunications, instant messaging, virtual reality, 2D & 3D graphics, a variety of 3D modelling technologies, spatial sound, distributed databases, spatial indexing, mapping, streaming data transmission, physics, scripting languages, object-oriented software, agent theory, artificial intelligence, networking, economic modelling, online trading systems, game theory and many, many more technologies.
While the developers of many virtual worlds are content within the game space, some virtual world developers, such as Linden Research (developers of Second Life) have ambitions to be the web platform of the future (Bulkley, 2007). To this end a number of the commercial developers of virtual worlds have joined forces with a number of major corporate consumers, systems integrators and US government bodies to explore common standards for inter-operability of virtual world platforms which is a necessary first step in moving the technologies from the isolated proprietary place they now inhabit to a world-wide shared web platform (Terdiman, 2007).
This chapter explores virtual worlds, reviews the literature considering alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education. The chapter concludes with an examination of traditional education taxonomy and relates that to the virtual world context as a basis for structuring an approach to exploring education affordances offered by two approaches to education in virtual worlds.
==2.2 Virtual Worlds==
===2.2.1 What is a Virtual World?===
====2.2.1.1 In Search of a Definition====
“Virtual worlds are places where the imaginary meets the real”. (Bartle, 2003, p. 1)
Virtual, as defined in the Oxford Dictionary (1989) with respect to the computing context is: “… not physically existing as such but made by software to appear to do so from the point of view of the program or the user….” and defined in the virtual reality context to be “… a notional image or environment generated by computer software, with which a user can interact realistically as by using a helmet containing a screen, gloves fitted with sensors, etc.” (1997).
The term world is defined in the Oxford Dictionary (1989) as “the ‘realm’ within which one moves or lives”.
In simple terms, therefore, a ‘virtual world’ can be defined as a computer-software-generated realm in which a user moves, exists or lives in a manner that appears real to the user.
A common definition for the term ‘virtual world’ is passionately debated in the literature (see Combs, 2004; Jennings, 2007; Reynolds, 2008; Wilson, 2007). It is a term used to describe many types of software environments, from a simple MUD (Multi User Dungeon, also referred to as Multi User Dimension or Domain) (Bartle, 2003; Keegan, 1997; Slator et al., 2007) to a sophisticated fully immersive 3D virtual reality environment used in gaming, physical training simulators or social interaction spaces (MetaMersion; Patel, Bailenson, Jung, Diankov, & Bajcsy, 2006; Van Dam, Forsberg, Laidlaw, LaViola, & Simpson, 2000). The term virtual world can be used to describe a single user walk-through simulated environment (Dalgarno, 2004; Youngblut, 1998) or an environment such as a massively multiplayer online role playing game (MMORPG) like World of Warcraft (Bainbridge, 2007). The term virtual world is also used interchangeably with other terms such as virtual environment, synthetic world, mirror world, metaverse, virtual universe, artificial world etc[2] (Grøstad, 2007).
Bartle (2003, p. 1) provides the following definition:
<blockquote>
“Virtual worlds are implemented by a computer (or network of computers) that simulate an environment. Some -but not all- of the entities in this environment act under the direct control of individual people. Because several such people can affect the same environment simultaneously, the world is said to be shared or multi-user. The environment continues to exist and develop internally (at least to some degree) even when there are no people interacting with it; this means it is persistent.”
</blockquote>
Therefore, using Bartle’s definition in conjunction with the Oxford Dictionary definition provided above a virtual world can be defined as:
<blockquote>A shared software environment (or realm) in which a person, represented as a projected entity (such as a digitally projected image, text identity or other computational representational object), moves, exists or lives in a manner that appears real to the person; which the person is capable of affecting, and being affected by, in a manner that simultaneously affects the experiences of other entities within the environment; and which generally remains persistent once the user has left the world.
</blockquote>
The key components of this definition are:
#A shared environment in which a real-world participant shares a computationally generated artificial space with other real world participants and/or other computationally generated entities.
#The nature of the real-world participant’s projection into the computationally generated virtual space.
#The characteristics of the space, which establish a sense of realism to the participant.
#The manner and extent to which the real world participant is able to affect the shared space.
#The nature and form of persistence that the artificial space retains.
Throughout this section we will examine the current state of these components: the ideas and literature contributing to the current expression of these concepts in the form of currently available virtual worlds. The realisation of virtual worlds in software has been (and continues to be) a rapidly evolving field, continually consolidating influences from fiction, mechanical and electrical engineering, computer science, gaming theory, telecommunications, social science, commerce, religion and sociology. It is a field where advances are made as much by amateur invention as by formal science, and one in which the academic literature frequently lags the leading edge of those advances by a significant degree.
===2.2.2 Recognising a Virtual World by its Features===
While there is not as yet a single common set of universally accepted attributes, the literature offers a variety of feature based definitions that attempt to provide a basis for classifying whether a given application or environment is, or is not, a virtual world. Across these competing views there are some features that are most frequently repeated.
Coming from the perspective of virtual worlds as gaming platforms, Bartle (2003, pp. 3-4) proposes that a virtual world should adhere to the following conventions:
*'''Physics''': The world contains automated rules for the players that effect change in the world.
*'''Character''': The player is a part of the in-world experience, represented by a character with which they strongly identify.
*'''Interactions''': All interactions with the world are channelled through the character.
*'''Real-time''': Interactions in the world take place in real-time.
*'''Shared''': The world is shared in common by other characters.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Bartle tends to use the term character for what this thesis refers to as an avatar, and considers that the player (identified as ‘the intelligence’ in this thesis) must strongly identify with that character. In the context of role playing games, where the player assumes an identity not their own, this aspect of the feature list recognises the effectiveness of the immersion and sense of presence the player experiences (concepts we will be exploring later), but outside of this space, where the player and the ‘character’ may be one and the same, this feature is less of a distinguishing criterion.
His use of the term Physics in the context of an application genre that may include 3D environments is perhaps a little confusing. In these spaces Physics most commonly refers to the physics engine that manages the simulation of an avatar and object dynamics in the space (such as gravity, acceleration, force, momentum and limb movement, etc). As used by Bartle, the term includes the ‘business rules’ and behaviours of the system – the rules governing all interaction, not just those simulating physical movement.
The nature of the shared space and interactive channel imply that the actions of one player affect the experience of another.
Edward Castronova (2001, pp. 5-6) proposes that a virtual world should have the following features:
*'''Interactivity''': The world exists on one computer and can be accessed via a network (or the internet) by many simultaneous users. The actions of each user influence other users in the world.
*'''Physicality''': Users access the world by a computer, which provides a first person view of the world; the world is generally ruled by natural laws much like the real world, with scarcity of resources.
*'''Persistence''': The world is continuous and evolves to some degree regardless of whether or not the player is present in the world. While not present, the player’s state in the game remains unchanged.
Castronova’s feature requirements are essentially a subset of Bartle’s, although with the possible omission of the expectation that interaction is necessarily real time.
Sun Microsystems Inc (2008, p. 3) proposed the following common features of open virtual worlds (ie multi-user virtual worlds open to public access over the internet):
*Shared space, allowing multiple users to participate simultaneously.
*Users interact with one another and the environment.
*Persistence.
*Immediacy of the interactions.
*Similarities to the real world rules.
We might, perhaps, reject Sun’s expectation of any need to assimilate ‘real world rules’, as this would exclude many fantasy role playing games from being classed as virtual worlds, but aside from this aspect Sun’s list is essentially consistent with the views of Bartle and Castronova.
These three sources are essentially consistent with the body of the literature; making allowance for additional attributes and some latitude in interpretation, we can establish a minimum feature list that would be generally accepted:
*The environment is shared;
*Interactions are in real-time;
*A person participates in the world through some form of representation with which they identify and are identified and that facilitates interaction and recognition (such as a character or avatar);
*Interactivity in the world is channelled through the avatar;
*Changes induced by a participant influence the experience of the space for other participants;
*Rules govern the world and interactions are shared and commonly applied; and
*The world is persistent.
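The minimum feature list above can be read as a simple conjunctive checklist: an environment qualifies as a virtual world only if every feature holds. The sketch below illustrates that reading; the field names paraphrase the criteria and the example classifications are illustrative assumptions, not drawn from any cited author.

```python
# Minimal sketch of the feature checklist as a classifier.
# Field names paraphrase the criteria listed in the text.
from dataclasses import dataclass, fields

@dataclass
class VirtualWorldFeatures:
    shared_environment: bool
    real_time_interaction: bool
    avatar_representation: bool
    interaction_via_avatar: bool
    changes_affect_others: bool
    common_rules: bool
    persistent: bool

def is_virtual_world(f: VirtualWorldFeatures) -> bool:
    # Under this minimum list, every feature must hold.
    return all(getattr(f, field.name) for field in fields(f))

# Hypothetical classifications for illustration only.
mmorpg = VirtualWorldFeatures(True, True, True, True, True, True, True)
single_player_walkthrough = VirtualWorldFeatures(
    shared_environment=False, real_time_interaction=True,
    avatar_representation=True, interaction_via_avatar=True,
    changes_affect_others=False, common_rules=True, persistent=False)

print(is_virtual_world(mmorpg))                     # True
print(is_virtual_world(single_player_walkthrough))  # False
```

On this reading, a single-user walk-through simulation fails the checklist because it is neither shared nor persistent, matching the debate in the literature over whether such environments count as virtual worlds.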
==2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World==
While Bartle (2003) refers to a participant’s projection into a virtual world as a “Character”, the more widely accepted name today for a real world participant’s projection into a virtual world is an Avatar. This is the term this thesis will be adopting in this research.
The word avatar derives from avatara, a Sanskrit word meaning “descent of a deity” or incarnation, utilised by the Vaishnavism religious tradition of Hinduism. The Hindu concept of an avatar is thought to originate as early as the second century B.C.E. (Sheth, 2002). One of the most recognised Hindu deities is Vishnu (Figure 1). In Hinduism, Vishnu is said to have a standard list of ten avataras (collectively known as Dasavatara), one of whom is said to be Buddha (Siddhārtha Gautama), the founder of Buddhism (Sheth, 2002).
[[image:Vishnu_Hindu_Avatar_001.jpg]]
Figure 1. Hindu Avatara
Left: Visnu (or Vishnu) Hindu deity the protector and preserver of the universe
Right: Ten avatars of Visnu (Dasavatara)
(Vivekananda Centre, 2008)
In computing terms, little has changed from the original Sanskrit meaning of avatar. As with the Hindu avatara, the virtual world participant can be thought of as “descending”, or being “projected”, from reality to become a computational representation in a virtual world. In virtual worlds, an avatar is generally (although not exclusively) a graphical representation of the user’s persona (Deuchar & Nodder, 2003), although it can also be a representation of a system or a function in some applications (Sheth, 2003) or a simple name in the form of a text string (in some text based MUDs), and is evolving to include virtualisations of other senses (such as aural and tactile) (S.-Y. Lee, Kim, Ahn, Lim, & Kim, 2005). The graphical representation of an avatar is thought to originate from a networked multi-user virtual world game called Habitat in 1984 (Bye, 2008; Morningstar & Farmer, 1990). Early research suggests that the use of digital avatars in virtual worlds reduces users’ inhibitions and dissolves, or reconstructs, social status among users (Dede, 1995; Dickey, 2003; Rheingold, 1993).
The projected form is not necessarily a recognisable representation of the real world human form. In his or her projected form, for example, the avatar might be represented as an image of a human, an animal, an animated mechanical object, a simple name, or any form appropriate to the virtual world and within the technical capabilities of that world’s object management systems. For example, in Eve (a space based virtual world) all avatars are space ships, whereas in Second Life (a social based virtual world) an avatar can take any form (Figure 2), but regardless of appearance the avatar’s name remains the same.
[[image:SecondLife_Digital_Avatars_002.jpg]]
Figure 2. Digital Avatars of Second Life (Levine, 2007)
In terms of today’s virtual worlds, and for the purposes of this research, an avatar should be thought of as a combination of a representation, an agent and an intelligence:
#The ''representation'' may be visual, aural, tactile or any other sense conveying the presence of the avatar to other avatars or agents in a virtual world.
#The ''agent'' is the library of capabilities of the avatar in a virtual world.
#The ''intelligence'' (or actor) provides the tactical and strategic control of the avatar, which could be artificial or natural (eg human).
In a virtual world the decisions of the intelligence are communicated to, and realised by, the agent. The consequence of the agent realising (enacting/implementing) the intelligence’s commands may result in a change in the state of both the agent and the representation, eg, in a 3D Graphical virtual world, a command to walk issued by the intelligence might result in the agent changing position and entering a movement or walking state and triggering the representation to display a walking animation (enter a walking animation state).
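The three-part avatar model and the walk example above can be sketched as code. This is a minimal illustration of the intelligence → agent → representation flow described in the text; the class names, state strings and method names are assumptions for demonstration, not an API from any actual virtual world platform.

```python
# Illustrative sketch of the three-part avatar model:
# representation + agent + intelligence. Names are hypothetical.

class Representation:
    """Conveys the avatar's presence to others (here, an animation state)."""
    def __init__(self):
        self.animation = "idle"

class Agent:
    """The library of capabilities the avatar has in-world."""
    def __init__(self, representation):
        self.representation = representation
        self.position = (0, 0)
        self.state = "standing"

    def walk(self, dx, dy):
        # Realising the command changes the agent's position and state,
        # and triggers the matching animation on the representation.
        self.position = (self.position[0] + dx, self.position[1] + dy)
        self.state = "walking"
        self.representation.animation = "walking"

class Intelligence:
    """The actor (human or artificial) providing tactical control."""
    def __init__(self, agent):
        self.agent = agent

    def decide_to_walk(self, dx, dy):
        self.agent.walk(dx, dy)  # decision communicated to the agent

rep = Representation()
agent = Agent(rep)
player = Intelligence(agent)
player.decide_to_walk(1, 0)
print(agent.position, agent.state, rep.animation)  # (1, 0) walking walking
```

Separating the three roles this way also makes the point that the intelligence can be swapped (human player or AI) without changing the agent or representation.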
==2.4 A Taxonomy of Virtual Worlds==
===2.4.1 Introduction===
As might be expected, the literature contains extensive discussion of the appropriate taxa to be applied in classifying virtual worlds, and an equal measure of disagreement among authors as to the appropriate criteria to be applied. In spite of the range of discussion, most attempts are incomplete and therefore capable of classifying only a portion of the genre in a usable form. To be fair, this space is evolving rapidly; possibly as fast as it is classified, a new entrant appears that changes the paradigm, and old entrants are updated to include new capabilities.
===2.4.2 A Taxon for Virtual Worlds===
Outside of the education and virtual reality streams, possibly the largest single family of virtual worlds comprises those developed for games. While not actually claiming to propose a taxon, Bartle (2003, pp. 38-61), whose pedigree is essentially from the gaming stream, proposes a set of attributes that can be used to classify Virtual (game) Worlds. Not surprisingly, the attributes are most relevant to multi-user game focussed virtual worlds, but provide a workable superset of the current thought on the matter and with some adjustment can be extended to the more general examples of virtual worlds. He suggests that a virtual world can be categorised according to the following taxa:
#'''Appearance''': To a ‘newbie’ (Bartle’s term for a new user of a virtual world application) the distinction is whether the virtual world is a ‘text based’ MUD, ASCII, graphical 2D or graphical 3D etc. To an ‘oldbie’ (as described by Bartle) this is only an interface issue and therefore not as important as the other listed categories.
#'''Genre''': Is the world fantasy, cyberpunk, horror, social etc. The plot or the settings of the virtual world. This taxon is most helpful with purpose focussed virtual worlds. In the non-gaming or semi-gaming space occupied by some generalised social worlds, the virtual world is as much a platform on which other ‘sub-worlds’ can be based, and thus the genre of the virtual world can be all other genres. Examples of this might include PLATO and Second Life.
#'''Codebase''': Although not as important for the user, as it is hidden from them, this is an important aspect for the designer of a virtual world. The codebase defines the technical makeup of the world - reusable content and controls, scripting language, database structure etc. This researcher suggests that the codebase is not a single taxon, but perhaps should be separated into multiple taxa. In its place one might propose the content management, asset management, game engine, environment application programming interface, AI, and scripting function library within the system as more relevant technical matters.
#'''Age''': How long the virtual world lasts is an important measure of the success of the virtual world. Generally, the longer a player (or user) can be kept interested, the longer the virtual world survives, which in turn attracts new users and adds to the player base of the virtual world.
#'''Player base''': How large is the player (or user) base of the virtual world? This measure varies depending upon what is counted: for example, the number of registered users, the number of avatars (a user can have more than one character in a virtual world, though in general not for simultaneous use), simultaneous users logged in, hours played per user, access over a period of time, number of active subscriptions, etc. In some worlds the meaningful measure of player base is in fact the number of owner occupied ‘acres’ of virtual land (as opposed to general users of the virtual world). The player base measures the current success of the virtual world, its popularity so to speak, which in turn lengthens the age of the virtual world. Given the number of ways a player base can be structured and measured, a single measure is open to both misinterpretation and reporting manipulation, and some measures (like subscribed users, where some subscriptions are paid and others free) may be completely erroneous when comparing one virtual world to the next.
#'''Degree to which they can be changed''': Virtual worlds vary in the degree to which a user can change or add to the content of the virtual world. Virtual worlds such as World of Warcraft (and most game based virtual environments) allow no change by the player, with all content created by the developers of the virtual world. Other virtual worlds such as Second Life, Active Worlds, TruePlay and PLATO rely on content created by the community. In the case of Second Life (for example) the entire virtual world is made from user created content, with users provided with building tools, import and export capabilities, out-of-world interfaces and communications capabilities, an extensive library of API functions and a scripting language. The degree to which a virtual world’s content can be changed by the user adds to the technical codebase complexity and to the user’s (and, in multi-user virtual worlds, other users’) experience of and within the virtual world.
#'''Degree of persistence''': Bartle defines persistence as the degree to which a world’s state remains intact if you shut down and restart the virtual world. He classifies persistence into ‘discrete’ or ‘continuous’ groups. At the extreme, a discrete virtual world would regenerate, described as a ‘Ground Hog’ world (named after the movie), where all content and the location of the player are reset to the start of play. In a continuous virtual world the content and locations are retained through a restart.<BR />Persistence also relates to what happens to the world when a user logs off: does the virtual world continue to evolve without the individual player, and if so, can the player’s state be affected while offline? A virtual world generally displays some level of persistence, and the term is generally used to distinguish whether a ‘virtual world’ is really a ‘world’ or in fact just a simple ‘Ground Hog’ environment (see Gehorsam, 2003). The ultimate level of persistence is that akin to the real world, which constantly evolves and changes regardless of our existence within it.
With some modification and generalisation most of the taxa can be applied in the general case of gaming and non-gaming virtual worlds. To be applied outside of the narrow RPG (Role Playing Game) grouping, the classification system would benefit from some subdivision of elements.
We have already noted codebase as one such category. Codebase is such a wide group that it could be applied to every functional capability of the virtual world not covered by another taxon, and is thus of limited help in establishing a consistent framework for classification. For example, Castronova’s (2001) taxonomy recognises a grouping under marketplaces (implying commercial functionality), while both Kish (2007) and Cavazza (2007) recognise groupings covering paraverses (although they use different terms). In Bartle’s taxa these might both be covered as distinguishing characteristics under codebase, yet one relates to the ability to conduct real-world commercial transactions in the space, while the other addresses the merging of real-world content with virtual world content.
Persistence as framed by Bartle mixes multiple discrete concepts: host state persistence, user state persistence, environmental evolution, and scenario persistence. The last is generally typical of games (such as quest-driven environments, where on restarting a ‘quest’ the user can rely on the sequence of events repeating the sequence that occurred previously – effectively a ground-hog space within a larger persistent environment), and absolutely essential for simulators and learning systems, where a user taking a course should be able to rely on the lesson replaying in a consistent and predictable way each time (unless variation is an intended part of the training, as in a military battlefield virtual world). In order to classify virtual worlds, recognising these attributes independently of each other would be more helpful than identifying the world simply as persistent or not persistent. Nor are the sub-features linearly related: one form of persistence does not imply the inclusion of another (Purbrick & Greenhalgh, 2002).
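Treating the four persistence sub-features as independent attributes, rather than a single persistent/non-persistent label, can be sketched as a simple profile. The flag names follow the text; the example worlds below are illustrative assumptions, not classifications from the literature.

```python
# Sketch: the four persistence sub-features as independent flags.
from dataclasses import dataclass

@dataclass
class PersistenceProfile:
    host_state: bool               # world state survives a server restart
    user_state: bool               # a player's state survives logging off
    environmental_evolution: bool  # world evolves while the user is away
    scenario_persistence: bool     # scenarios replay identically ("ground-hog")

# A hypothetical quest-driven MMORPG: a persistent, evolving world
# containing repeatable quest scenarios.
quest_game = PersistenceProfile(True, True, True, True)

# A hypothetical training simulator: lessons must replay predictably,
# but the environment need not evolve between sessions.
simulator = PersistenceProfile(host_state=True, user_state=True,
                               environmental_evolution=False,
                               scenario_persistence=True)
```

Because the flags vary independently, two worlds that would both be labelled ‘persistent’ under a single-attribute scheme can still differ in every sub-feature, which is the classification problem the text identifies.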
===2.4.3 Applied Taxonomies===
While Bartle proposes a reasonably extensive set of attributes (taxa) for classification, some authors have proposed simpler classification regimes, although all seem as yet to avoid claiming an actual taxonomy.
Kish (2007) recognised that with the appearance of the weakly defined ‘Web 2’ technologies, virtual worlds could be seen to encompass a wider range of social networking and world-imagining spaces. Kish’s classification groups virtual environments into the broad categories (Figure 3):
#'''MMORPGs''': Massively Multiplayer Online Role Playing Games. A category which includes text and graphical gaming environments with the common theme of role playing and containing internally a hierarchical, level based player grading system to determine expertise and implied seniority, and generally plot or quest driven and goal oriented as their linking characteristic. Typical examples might include World of Warcraft, Entropia Universe, Everquest, MUDs, etc.
#'''Metaverses''': Imagined public fantasy spaces, emphasising social interaction, creativity and lacking a single plot or purpose for participation. Generally exhibiting a devolved structure without a single levelling system or clear environment imposed hierarchic seniority system[3]. Typical examples might include Habitat, Second Life, Active Worlds, Furcadia, etc
#'''Paraverses''': Spaces that intersect with the real world, incorporating content from the real world and thus could be described as virtual extensions of the real world. This group potentially includes many of the Web 2 spaces that contain sufficient functionality to create in the minds of their users a ‘real’ virtual community as strongly present to the participant as their real world existence.
#'''Intraverses''': Spaces that are otherwise Metaverses or MMOLE’s but private or closed to the broader public. Virtual reality environments could be seen generally to fall into this category as well as private/corporate implementations of public virtual world spaces. Typical examples might include Qwaq, Sun System’s Wonderland, IBM’s Metaverse, etc.
#'''MMOLEs''': Massively Multi-user Online Learning Environments. Possibly the oldest class of virtual worlds, as it includes systems such as PLATO, and typified by educational environments supporting user social interaction. Primarily purpose (though not goal) driven – such as learning, training, idea exchange, simulation, etc. This space includes the dedicated training / teaching environments of PLATO and planning / simulation management systems of SIMNET, Blackboard, Boston College’s Media Grid, etc.
[[image:Kish_Virtual_Geography_003.jpg]]
Figure 3. Virtual Geography (Kish, 2007)
Cavazza (2007) proposes that a virtual world should be open (public) and contain taxa supporting strong and generalised capabilities in each of the dimensions (Figure 4):
#Social networking
#Gaming
#Entertainment
#Business
[[image:Cavazza_Virtual_Universes_Landscape_004.jpg]]
Figure 4. Virtual Universes Landscape (Cavazza, 2007)
Consequently most of the virtual worlds identified by other authors are excluded from Cavazza’s definition of virtual worlds, but included under the broad category of ‘Virtual Universe’. To illustrate this idea Cavazza has classified a wide range of existing virtual environments:
#Social
#*2.5 & 3D Chats
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Virtual Worlds
#Game
#*MOG
#*Sports
#*MMORPG
#*Avatar Centric
#*Social Platforms
#*Branded Universe
#*Adult Games
#*Virtual Worlds
#Entertainment
#*Virtual Sex
#*Virtual City Guides
#*2.5 & 3D Chats
#*Avatar Centric
#*Branded Universe
#*Virtual World Generators
#*Virtual Worlds
#Business
#*Serious Games
#*Virtual Marketplaces
#*Adult Games
#*Virtual World Generators
#*Virtual Worlds
Cavazza’s definition and classification system is extensive, and possibly the most comprehensive to date. While Kish’s classification tends to focus on functionality, Cavazza’s emphasises purpose. Nevertheless, there is significant crossover in their ideas. For example, both recognise the difference between games and social networking, and both accommodate the paraverses in a special category (Cavazza includes them in ‘Virtual City Guides’ among other groups). Cavazza’s analysis, however, lacks the accommodation of the education, training and simulation virtual spaces present in Kish’s categorisation, although it might be argued that these are covered in multiple categories including ‘Virtual World Generators’ (e.g. PLATO, VastPark) and Serious Games (training simulators).
==2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality==
Virtual Reality environments are generally a combination of user interface hardware (such as headsets and data gloves) and software. The availability of the (often costly or purpose built) user interface hardware has meant that the majority of these environments are either single user or very small scale multi-user environments (Jones & Hicks, 2004; Miller & Thorpe, 1995). A direct consequence of this is that Virtual Reality environments have tended to ignore the dimensions of user interaction, game play and collaboration in favour of the technology of immersion. This fact, possibly more than any other, has predisposed some authors to exclude virtual reality spaces from the domain of virtual worlds (Bartle, 2003; Yee, 2006).
While Bartle’s virtual world definition contributes part of the definition adopted for virtual worlds in this research, the researcher departs from Bartle’s fuller conception of virtual worlds as expanded in that work. Bartle holds that a virtual world has a meaning divergent from that of virtual reality, arguing that “Virtual reality is primarily concerned with the mechanism by which human beings interact with computer simulations… [rather than] the nature of the simulations themselves” (2003, p. 3). To this extent Bartle’s definition specifically excludes Virtual Reality spaces from the definition of virtual worlds.
This researcher adopts a view consistent with some other writers in the field: writing virtual reality spaces out of the definition of a virtual world places the emphasis narrowly on the social and gaming dimensions of these worlds, and away from the immersive experience. It thereby excludes the vast body of research that predates, or was conducted in parallel with, the development of gaming virtual worlds (Cosby, 1999; Heilig, 1955; Pimentel & Teixeira, 1994; Rheingold, 1992; Schroeder, 1997; Steuer, 1992; Sutherland, 1965; Walker, 1990; Woolley, 1994), and constrains the consideration of these environments in the education context to their collaborative and scripting capabilities.
Other authors have adopted definitions of the virtual world concept wider than that posited by Bartle, although in most cases still excluding some portion of the body of work that has contributed to the space. Dickey (2005, p. 439) implies an exclusion of 2D and non-visual environments in providing: “Three-dimensional virtual worlds are a networked desktop virtual reality in which users move and interact in simulated 3D spaces.” Similarly, McLellan (2004) presents 10 classifications of virtual reality, a single-user virtual world being classified as ‘through the window’ whereas a multi-user virtual world would be classified as ‘cyberspace’. Mazuryk and Gervautz (1996) make no distinction as to the number of users in the virtual world but define a virtual world to be a ‘desktop VR (virtual reality)’ or a ‘Window on World (WoW)’ system. Biocca and Delaney (1995) define a virtual world to be a ‘window system’: a computer generated three-dimensional virtual world viewed either on a computer screen or with the assistance of a head mounted display.
This researcher’s view is that all of these definitions are correct, but incomplete and that a definition that allows the participation of all of these examples is the most useful and appropriate in the education context. To appreciate the reasoning behind this argument we must look at some of the history of the development of the technologies and concepts that have contributed to the current family of virtual worlds and the problems and purposes these stepping-stones intended to resolve or achieve.
Authors adopting Bartle’s view have generally also adopted the view that virtual reality is essentially a hardware interfacing technology and hence that the environments managed in this space are of no consequence. The misconception that virtual reality is a collection of hardware (data gloves, head mounted displays, etc) neglects the very meaning of virtual reality, which seeks to evoke a feeling of immersion and presence within the virtual space. In the virtual reality research stream, using external hardware devices to enter a virtual world is only one method by which immersion and presence is achieved (Briggs, 1996; Steuer, 1992). No external device will ensure a user’s experience of immersion if the world they enter is an unconvincing generator of an alternative reality for the participant. Furthermore, if virtual reality is to be excluded from the scope of the definition of virtual worlds, then the existence of VR plug-and-play devices such as stereoscopic headsets, data gloves and haptic controls that are readily available for use with many mass market virtual worlds (worlds that otherwise would fall within Bartle’s definition), for example the Vuzix iWear headset, the Evolution Motion Glove for the PS1, the Wii Remote for the Nintendo Wii, and the MS Force Feedback controller for Flight Simulator, would seem to contradict the proposed disconnect between the study of virtual worlds and virtual reality. Lastly, the exclusion of virtual reality environments from the definition of virtual worlds ignores the fact that in the 3D virtual world space many of the technologies and concepts utilised were contributed by the virtual reality research stream (as will become clear from the history presented in the following sections).
In the education context, virtual reality technologies (as expressed, for example, in simulators) are a critical and essential contribution to the pantheon of virtual (training) worlds (Bailenson et al., 2007; Dede, 2004). In this researcher’s view, virtual reality environments are a subset of virtual worlds, and the two streams are increasingly converging, if they have not already converged in current virtual world examples such as America’s Army and Second Life, and in massively multiplayer training environments like SIMNET (Lang, Maclntyre, & Zugaza, 2008; Lenoir, 2003; Zyda, 2005).
==2.6 Dimensioning Virtual Worlds==
===2.6.1 The Degree of Virtuality===
The degree to which a world is ‘virtual’ can be viewed as a sliding scale between the physical and the virtual. Milgram and Kishino (1994) present a taxonomy for mixed reality visual displays called the ‘reality-virtuality continuum’ (Figure 5). On the left hand side of the scale is the ‘real environment’, equivalent to the real or tangible world, while on the extreme right is the ‘virtual environment’, equivalent to an artificially generated world. The space between these two extremes is classified as ‘mixed reality’ (MR), made up of a combination of both real and virtual content.[4]
[[image:Reality_Virtuality_Continuum_005.jpg]]
Figure 5. Reality-Virtuality Continuum: Representation Scale for Visual Display
(Milgram & Kishino, 1994)
Figure 6 illustrates an example of the use of the reality-virtuality continuum taken from the MagicBook Project (Billinghurst, Kato, & Poupyrev, 2001). On the left of the figure is a real book (i.e. the real world environment); in the middle is the same book viewed through an Augmented Reality (AR) display, where figures appear like pop-up characters on top of the book (i.e. mixed or augmented reality); while on the right is the same book viewed within a virtual environment, where the “reader” becomes one of the characters within the book.
[[image:The_Magic_Project_006.jpg]]
Figure 6. The MagicBook Project: An Example Of The Full Reality-Virtuality Continuum
While the MagicBook project was conceived around the integration of physical (tangible) real world objects with digitally generated virtual world objects, when the real world objects are themselves digital or intangible (such as course materials consisting of photographic images, text, or other digital content) the merging of the ‘Real World’ and the ‘Virtual World’ becomes less obvious. For example, real world authors Pamela Woodard and Wilbur Witt have published their works in the Second Life virtual world first, or simultaneously with publication in the real world (Bell, 2006). The Second Life virtual world can integrate conventional HTML web page content directly into the virtual environment (Release Candidate, 2008). Content developers, and particularly trainers and presenters in Second Life, routinely import textures and slides and stream sound and video from outside of the virtual world into the virtual space.
In the context of Milgram and Kishino’s reality-virtuality continuum, this research focuses on the right hand end of the scale, i.e. using a desktop display of a virtual world in which all content is delivered virtually. In contrast to the MagicBook project, this research considers (in the education context) the affordances of two virtualisation strategies: a direct reproduction of real world delivery into the virtual (in part, by importing materials not generated in the virtual world into the virtual world), and a transformation of the real world material into virtual material (in part, by recasting those materials into virtually generated form).
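Milgram and Kishino's continuum can be illustrated as a simple numeric scale. The following is a minimal sketch: the function, the scores and the example environments (loosely modelled on the MagicBook case) are illustrative assumptions, not part of Milgram and Kishino's formal taxonomy.

```python
# Hypothetical sketch: placing environments on the reality-virtuality
# continuum as a score in [0.0, 1.0], where 0.0 is the real environment
# and 1.0 is a fully virtual environment.

def classify(virtuality: float) -> str:
    """Map a continuum position to Milgram & Kishino's coarse labels."""
    if virtuality == 0.0:
        return "real environment"
    if virtuality == 1.0:
        return "virtual environment"
    # Everything strictly between the extremes is mixed reality (MR).
    return "mixed reality"

# Illustrative placements (assumed values, for demonstration only).
examples = {
    "physical book": 0.0,
    "book via AR display": 0.5,
    "desktop virtual world": 1.0,
}

for name, score in examples.items():
    print(f"{name}: {classify(score)}")
```

The point of the sketch is simply that MR is not a third category alongside the two extremes but the entire interior of the scale between them.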
===2.6.2 The Degree of Immersion and Presence===
====2.6.2.1 Introduction====
Virtual reality literature often separates a user’s experience of a virtual environment into physical and psychological components (Benford, Greenhalgh, Reynard, Brown, & Koleva, 1998; Biocca & Delaney, 1995; Sheridan, 1992; Slater, 1999; Slater & Wilbur, 1997; Steuer, 1992). The psychological components include interaction (or connectedness) and belief: the contribution of the participant, or their willingness to believe in the reality of what they would otherwise know to be unreal. The physical components are aided by the external mechanical and functional capabilities of the system.
In exploring the factors determining the effectiveness of Virtual Reality environments, Burdea and Coiffet (2003) determined that the aim of virtual reality is to achieve a trio of ‘Immersion, Interaction and Imagination’ (Figure 7), each of which holds equal significance for the user’s experience of virtual reality systems. A virtual reality system seeks to engage the user fully in the virtual space. They proposed that excluding any one of these features reduced the user to passive participation, and ultimately detracted from the perceived ‘reality’ of the experience.
[[image:Immersion_Interaction_Imagination_007.jpg]]
Figure 7. The Three I's of Virtual Reality
Steuer (1992) defined user involvement as a combination of human experience factors which in turn depend on the technology (Figure 8). Telepresence (or presence) is the human sensation of ‘being there’ in a virtual environment[5] and is seen as influenced in part by the technology, in terms of the vividness (richness, realism) and interactivity (responsiveness) of the environment.
[[image:Steuer_Variables_Influencing_Telepresence_008.jpg]]
Figure 8. Technological Variables Influencing Telepresence (Steuer, 1992)
Slater and Wilbur (1997; 1999) revisited these concepts in later work, defining a user’s experience in terms of immersion and presence. Immersion is seen as an objective measure of the ‘system immersion’ technology, such as field of view, quality of display, etc, while presence is seen as a subjective measure: the psychological sensation of ‘being there’. From here on we will use the terms immersion and presence as defined by Slater and Wilbur.
====2.6.2.2 Immersion====
Benford et al. (1998) propose classifications of artificiality and transportation for collaborative environments (Figure 9) that extend Milgram and Kishino’s reality-virtuality continuum. Artificiality (physical-synthetic) is equivalent to the reality-virtuality continuum. Transportation (local-remote) is the degree to which a participant becomes removed from their local space to operate in a remote space, which they define to be similar to the concept of immersion. For example, CVEs (Collaborative Virtual Environments[6]) are placed on a scale from partial to remote transportation. A fully immersive CVE would represent the ultimate level of transportation: a virtual reality system using devices such as HMDs, data gloves, and tactile and aural equipment that allow for no outside distraction, so that the participant operates completely within the virtual environment and is fully remote from their local environment[7]. A desktop CVE, by contrast, is only partially immersive, as one’s local surroundings form a part of the virtual environment, e.g. a field of view that allows for head turning away from the virtual space (Sheridan, 1992). In the context of Benford et al.’s transportation scale, this research is conducted using desktop CVEs and is therefore only partially immersive.
[[image:Artificiality_Transportation_as_SS_Metrics_009.jpg]]
Figure 9. Shared Space Technology According to Artificiality and Transportation
====2.6.2.3 Presence====
Research in online gaming virtual worlds has tended to focus on the human experience (presence) of virtual worlds rather than on the ‘system immersion’ aspects, while studies of virtual reality environments have tended to consider both. This is possibly a function of the standard interface for massively multiplayer game environments, which has traditionally been the desktop computer equipped with a mouse and keyboard. Although more advanced input devices (head mounted displays, 3D mice, etc) have been available to the mass market for many years, they are not yet widely utilised.
The degree of presence is often linked to the effectiveness of a virtual environment (Witmer & Singer, 1998), and due to its subjective nature it is possibly the most difficult aspect to comprehend and therefore to measure (Slater & Usoh, 1993). Hence, this area has been widely researched, with various explanations as to what constitutes presence in a virtual environment (Schuemie, Straaten, Krijn, & Mast, 2001). The sense of ‘being there’ in the environment is subjective: Slater and Usoh (1993; 1994) describe presence as similar to a person’s ‘willingness to suspend disbelief’, a concept derived from the British poet and literary critic Samuel Taylor Coleridge (1772-1834), who in his Biographia Literaria (1817) describes the phenomenon whereby a person becomes so engaged in a narrative that they are willing to believe an event is true, if even for only a brief moment. Although suspension of disbelief is today most often linked with media such as film and literature, virtual worlds (especially Role Playing Game (RPG) worlds) provide many of the same traits, in which the user can be thought of as an actor within the virtual world who forms a part of the storyline.
A number of presence classification strategies have been proposed by various authors. We will consider:
#Schroeder - focussing on the importance of social interaction
#Bartle – focussing on the degree of commitment in the environment
Schroeder (2006) presents presence in shared virtual environments (SVEs) within a three-dimensional model (Figure 10). Presence (x), copresence (y) and connected presence (z) can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. Connected presence can be thought of as the extent to which a relationship is mediated when presence and copresence exist. Mapping is done by comparison with a physical face-to-face relationship (0,0,0) and an entirely immersive environment such as a networked Cave (1,1,1). For example, in the face-to-face case (0,0,0) there is no presence (and thus no copresence), as no meeting is taking place in a virtual environment, whereas in the case of a networked Cave (1,1,1) the entire relationship (and environment) is virtual, and the affordances allow for high connected presence.
[[image:Presence_Copresence_Connected-Presence_010.jpg]]
Figure 10. Presence, Copresence, and Connected Presence
In different media for being there together
Of interest in Schroeder’s model is the comparison of desktop SVEs and online computer games. The example given in the model for a desktop SVE is Active Worlds, a massively multiplayer online (MMO) social virtual world, and the example provided in his paper for an online game is Quake, which at the time provided for up to 16 players sharing a common space. Both are virtual worlds, use text chat and sound, and use avatars to project the participant into the virtual world (although Quake takes a first person view exclusively). For the purpose of the analysis, the main differences were perceived to be the number of simultaneous players sharing the common virtual space and the imposition of clear game driven objectives in Quake, versus the absence of those same objectives in Active Worlds. Yet Active Worlds was seen as providing the higher level of connected presence. Why? The distinction was seen to lie in the concept of the ‘game’ rather than in the number of players, when compared to the other SVEs presented in the model. Active Worlds is a social world in which no plot is provided to measure the success or failure of an individual, unlike Quake, where the measure of success is clear and the entire activity and function of the environment is the relentless pursuit of that individual success. It was therefore deduced that a social world provides more connected presence than an individually focussed, plot driven gaming virtual world (at least as analysed by Schroeder).
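Schroeder's three-dimensional model lends itself to a coordinate sketch. In the fragment below, environments are represented as (presence, copresence, connected presence) triples; only the two end points (0,0,0) and (1,1,1) come from Schroeder's paper, while the intermediate coordinates and the distance helper are illustrative assumptions added here.

```python
from math import sqrt

# Sketch of Schroeder's (presence, copresence, connected presence) space.
# (0, 0, 0) is a physical face-to-face meeting; (1, 1, 1) is a fully
# immersive networked Cave. The intermediate coordinates are assumed
# values chosen to reflect the qualitative comparison in the text
# (Active Worlds scored higher on connected presence than Quake).
environments = {
    "face-to-face": (0.0, 0.0, 0.0),
    "networked Cave": (1.0, 1.0, 1.0),
    "desktop SVE (e.g. Active Worlds)": (0.5, 0.5, 0.6),  # assumed
    "online game (e.g. Quake)": (0.5, 0.5, 0.3),          # assumed
}

def distance_from_face_to_face(coords):
    """Euclidean distance from the physical face-to-face origin."""
    return sqrt(sum(c * c for c in coords))

# Order environments by how far they sit from the physical baseline.
for name, coords in sorted(environments.items(),
                           key=lambda kv: distance_from_face_to_face(kv[1])):
    print(f"{name}: {distance_from_face_to_face(coords):.2f}")
```

The sketch is only a way of visualising the model's geometry; Schroeder does not propose a distance metric, and the axes are conceptual rather than measured quantities.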
Schroeder’s observation of higher connected presence in social virtual worlds seems to fit with Heeter’s (1992; 2003) definition of social presence, in which she defines presence in terms of individual presence, social presence and environmental presence. The presence of an individual is increased when social relationships are formed, based upon the social component of perceptual stimuli. When an environment or situation is focused on the relationship (rather than on killing a monster, as in RPGs) a higher social presence will be achieved.[8]
Bartle (2003, p. 42) identifies a system of levels of immersion (which in this paper we have defined as presence[9]) based upon a linear scale of: Player (the real person), Avatar (the digital puppet), Character (the representation in the world, e.g. character name, role, etc) and Persona (your identity in the virtual world, where the player is the Character and is in the virtual world). Persona is similar to the concept of presence: if your character is killed ‘you feel like you have died’; there is no distinction between the character and the player, they are one, the Persona. Bartle believes that the avatar and character are just steps along the way to persona. Persona is reached when a person ‘stops playing the world and starts living in the virtual world’.
==2.7 Influences on Virtual Worlds from Art and Literature==
===2.7.1 Introduction===
The concept of a virtual world is by no means unique to computing. The thought of exploring an imaginary realm has captivated people’s imagination throughout time.
“If we define that a virtual world is a place described by words and/or projected through pictures, which creates a space in the imagination real enough that you can feel you are inside of it, then the painted caves of our ancestors, shadow puppetry, the 17th-century Lanterna Magica, a good book, play or movie are all gateways to virtual worlds. Humanity’s most powerful new tool, the digital computer, was also destined to become a purveyor of virtual worlds, but with a new twist: The computer enables the virtual world to be both inhabited and co-created by people participating from different physical locations.”(Damer, 2007, p. 2)
At least with respect to the massively multiplayer online virtual worlds/role playing games (MMOVWs, or MMORPGs), all of today’s examples can trace their paradigms to literature. Some, such as Eve, Entropia Universe and World of Warcraft, are amalgams of a body of works and ideas, while others, such as MUD1 (Sword of the Phoenix (Howard, 1932)) and Second Life (Snow Crash (Stephenson, 1992)), draw direct inspiration from specific literary works.
Consequently, to properly understand the ‘state of the art’ represented by today’s multi-user, connected virtual worlds and the gaming, social and business rules adopted to govern them, it is essential to consider the context from which they have been derived, and the art that has influenced their creators. While some operational paradigms in virtual worlds are technology constraints, functional capability constraints can be as much a condition of the imagined world being implemented as a real constraint of the technology of the day. To appreciate this fact one need only compare the camera controls of Project Entropia with those of Second Life (two environments of comparable age), or the commercial capabilities of these two environments with those of World of Warcraft. In each case the differences and apparent restrictions are a game design decision rather than a technology constraint.
===2.7.2 Virtual Worlds of the Arts===
James Pearson (2002) believes that from as early as 30,000 years ago, in the Chauvet Cave in France, shamans used cave art as a means to document their experiences of travel to the dream world. Packer and Jordan (2002) draw a similar parallel in their book on virtual reality, describing how the Cro-Magnon in 15,000 BC in the Lascaux caves of south-western France used cave art (Figure 11), with candles and the acrid aroma of animal fat, to create a magical theatre of the senses.
[[image:Cave_Art_BC_011.jpg]]
Figure 11. The caves of Lascaux: Cave Art 15,000 BC
The German composer Richard Wagner’s (1813-1883) (Figure 12) concept of Gesamtkunstwerk (total artwork) has also been cited as an early pioneering step towards the concepts of immersion and presence in virtual worlds (Grau, 1999; Klich, 2007; Packer & Jordan, 2002). Wagner believed that “Artistic Man can only fully content himself by uniting every branch of Art into the common Artwork”, a synergy that includes not only the performance but all that surrounds it, so that mankind “...forgets the confines of the auditorium, and lives and breathes now only in the artwork which seems to it as Life itself, and on the stage which seems the wide expanse of the whole World” (Wagner, 1849, pp. 184 & 186).
[[image:Wagner_Gesamtkunstwerk_012.jpg]]
Figure 12. Richard Wagner's Gesamtkunstwerk (Total Artwork)
===2.7.3 Virtual Worlds of Fiction and Fantasy===
There are numerous examples of virtual worlds that have been explored through fiction and fantasy. Each has contributed to the vision of virtual worlds becoming a reality (Bartle, 2003; Chesher, 1994).
In Lewis Carroll’s novel Alice's Adventures in Wonderland (1865), Alice falls down a rabbit hole to explore a fantasy world inhabited by peculiar, anthropomorphic creatures. Similarly, in Carroll’s follow-on novel, Through the Looking Glass (1871), Alice explores a world behind a mirror. Hattori (1991) saw Lewis Carroll’s novels as a paradigm for modern virtual reality systems (Figure 13), blending the physical space with fantasy in a rapidly changing environment. To this extent, Carroll’s works provide a perfect analogy for the design and development of virtual worlds (Rosenblum, 1995; West Virginia University, 2008). An explorative virtual world in this vein was realised as a children’s computer game called The Manhole (1988-2007), which was based upon Carroll’s Alice’s Adventures in Wonderland (Wikipedia, 2008a).
[[image:Alice_via_Caroll_and_Hattori_013.jpg]]
Figure 13. 'Through the Looking Glass' Carroll (1871) & 'The World of Virtual Reality' Hattori (1991)
Within the fantasy literary genre, a key influence has been the works of J. R. R. Tolkien, starting with The Hobbit (1937) and its sequel The Lord of the Rings (1954, 1955) (Figure 14): an adventure fantasy that takes place in an imaginary world called Middle-Earth, containing races such as Hobbits, Wizards, Elves, Orcs, Dwarves and Trolls. Tolkien’s style was so popular that the Oxford dictionary adopted the term ‘Tolkienesque’ for his literary approach[10].
[[image:JRR_Tolkein_Book_Covers_014.jpg]]
Figure 14. The Hobbit & The Lord of the Rings by J. R. R. Tolkien (1937, 1954, 1955)
With respect to today’s virtual worlds, Tolkien’s contribution has not been merely the construction of a raft of characters, racial groups and social concepts for role playing game inhabitants and interaction rules, but, most importantly, his deep backgrounding of the imagined worlds. He did not merely describe his characters within the context and flow of the story line; he extended beyond that which was needed to tell a story, into that which was needed to make us believe in the real existence of his virtual worlds. Tolkien provides the reader with immaculate detail and description to immerse them in the world of Middle-Earth. Both books contained land maps (Figure 14), and the final volume of The Lord of the Rings (released in 3 parts) contained appendices describing chronologies, histories, family trees, languages and translations, and a calendar and dating system. Being a professor at Leeds and Oxford Universities, he approached his work more like an academic anthropological study of an imagined world than a novelist (Macmillan, 2008).
In so doing Tolkien demonstrated a fundamental understanding of a core strategy in establishing convincing presence: the necessity for a consistent, credible back story underpinning the virtual world. It is an early example of the depth of design that many later virtual worlds would exhibit in order to create a convincing sense of presence for the participant (Bartle, 2003; Schmidt, Kinzer, & Greenbaum, 2007).
Two virtual worlds that have been translated from Tolkien’s literature are the online virtual world ‘Lord of the Rings Online’ (2007) and PLATO’s MUD virtual world ‘Mines of Moria’ (1974).
More recently, literature has turned to imagining realities in which computational virtual worlds are a fundamental component of the plot. It is from this group that many of the terms now used to describe aspects and elements of virtual worlds are derived or were popularised, such as ‘avatar’, ‘metaverse’, ‘cyberspace’, etc. Some examples of novels whose plots centre on computational virtual worlds are True Names (Vinge, 1981), Neuromancer (Gibson, 1984) and Snow Crash (Stephenson, 1992) (Figure 15).
[[image:Recent_VR_Literature_Covers_015.jpg]]
Figure 15. Recent Literature: True Names (Vinge, 1981), Neuromancer (Gibson, 1984), Snow Crash (Stephenson, 1992)
'''Vernor Vinge’s True Names''' is not as well known as other novels in this genre, but it was the first to present the concept of a person entering a computational virtual world and meeting other people there, in ‘the other plane’ (Kelly, 1995). It was also unique in bringing the concept of anonymity to the digital world: one’s digital persona (handle) being different from one’s real self, with a necessity to hide one’s real identity, and thus one’s true name (hence the title). Its ideas were translated into a computational virtual world in the form of ‘Habitat’, the first graphical social networking virtual world (Farmer, 1992).
'''William Gibson’s Neuromancer''', a true cyberpunk[11] novel, is possibly the most widely quoted in the virtual environment space (Chesher, 1994). In this novel Gibson coined the term cyberspace, with the concept of a viable parallel online world capable of critically impacting events and commerce in the real world.
'''Neal Stephenson's Snow Crash''' is where the term Metaverse was coined. The Metaverse is a planet-sized city with one continuous street 65,536 (2<sup>16</sup>) kilometres in length, up and down which millions of people (represented as avatars) travel daily in search of entertainment, trade or social interaction. Although similar, in one sense, to Neuromancer, it came from a different perspective in that people actually lived in the Metaverse, not as cyberpunks getting up to mischief but as everyday people living a mainstream real life in the virtual world. In this world real commerce was conducted and virtual artefacts were bought and sold with real world consequences, a vision since realised in the development of the virtual world Second Life.
Hollywood has also contributed to the fantasy of virtual worlds becoming reality. Films such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and The Matrix (Wachowski & Wachowski, 1999) (Figure 16), to name just a few, gave us the visuals of virtual worlds that the books could only describe, and in some cases explored the haptic interfaces now being realised (Chesher, 1994).
[[image:VW_Films_Tron_LawnmowerMan_Matrix_016.jpg]]
Figure 16. Hollywood Films
Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992), The Matrix (Wachowski & Wachowski, 1999)
At the time of their release, the novels and movies discussed above may have seemed futuristic and their concepts unobtainable, but with advances in networking, computational processing power and the understanding of the sociology of virtual environments, today we are much closer (if not already past some of them). A ‘jack-in’ device that stimulates our nervous system to travel into cyberspace (Neuromancer, Gibson, 1984) may still be a little way off (and may be too intrusive for some), and smelling odours or feeling textures within a virtual world may never be quite the same as the real life experience, but much of what once seemed unimaginable in these works has become reality. With technological advances and the rapid adoption of internet-enabled online virtual worlds, many of these concepts are less science fiction and more science fact than they once were.
==2.8 The History of Computational Virtual Worlds==
===2.8.1 Introduction===
In a lecture delivered in 1965, Ivan Sutherland made the first steps towards combining computer design, construction, navigation and habitation of software generated virtual worlds (Packer & Jordan, 2002). Here Sutherland laid down a vision for the development of virtual worlds, as paraphrased by Brooks (1999, p. 16):
<blockquote>
“Don’t think of that thing as a screen, think of it as a window, a window through which one looks into a virtual world. The challenge to computer graphics is to make that virtual world look real, sound real, move and respond to interaction in real-time and even feel real.”
</blockquote>
The new-born medium of the graphical, digital virtual world experienced a “Cambrian Explosion” of diversity in the 1980s and ‘90s, with offspring species of many genres: first-person shooters, fantasy role-playing games, simulators, shared board and game tables, and social virtual worlds. (Damer, 2007)
The massively multiplayer online virtual worlds of today, with their world-wide user bases, are essentially a consequence of the mass adoption of the internet, which commenced in the early 1990s. Since the internet first achieved general acceptance, these worlds have advanced substantially in technical capabilities, graphics and number of subscribers (Figure 17) (Woodcock, 2008). See Appendix B: MMOG Analysis, for a break-down of the MMOGs contained in this graph.
[[image:MMOVW_Growth_Rate_017.jpg]]
Figure 17. Massive Multiplayer Online Virtual World Growth Chart 98-2008
The virtual worlds of today (such as World of Warcraft, Entropia Universe, America’s Army, and Second Life, etc) represent a convergence of several disparate computational, technical and social origins and drivers. Current virtual worlds combine 3D visualisation, game theory, text messaging, animations, context and text sensitive gesturing, natural language processing, spatial voice & audio, artificial intelligence, agency theory, physics, connectedness, persistence, business strategy, sensory hardware and haptic interfaces, telecommunications, 2D image processing, video chroma-keying, social networking and many other influences to achieve their sense of immersion and presence. In this section we explore some of the milestones along these convergent paths.
As many of the influences that have contributed to today's virtual worlds are derived from research streams pursued concurrently over more than 50 years, we shall look at the history of virtual worlds in six streams:
#Hardware based user interfaces and virtual reality environments
#Early graphical computer games
#Text and Text+ based Virtual Worlds
#2.5 and 3D graphical multi-player virtual worlds, broken down into:
#: a. MMORPGs
#: b. Social Virtual Worlds
#Simulation and Training Worlds
It should be noted that, while we will be considering the history in these streams, some virtual worlds necessarily exist in more than one stream. The grouping is that of the researcher, based on an extensive assessment of the literature, rather than the view of any one author.
===2.8.2 Hardware Based User Interfaces and Virtual Reality Systems===
====2.8.2.1 Introduction====
These two areas are grouped together not because Virtual Reality (VR) systems are a hardware solution, but because work on virtual reality worlds has generally aimed for extremely high levels of both immersion and presence, and has therefore generally (although not always) been coupled with purpose-built hardware user interfaces designed to assist the sense of immersion, such as headsets and data gloves.
The importance of the progress in VR systems to virtual worlds is that it has contributed to or assisted much of the fundamental graphical rendering technology, 3D animation studies and spatial awareness research, and conceptualised the immersive aspects of virtual worlds.
====2.8.2.2 Sensorama====
One of the earliest inventions in the genre of virtual world simulators was developed by the cinematographer Morton Heilig. Inspired by Fred Waller's work with Cinerama[12], Heilig presented a paper in 1955, 'The Cinema of the Future' (reprinted in Packer & Jordan, 2002). In an extension of Wagner's (1849) Gesamtkunstwerk (total artwork) concept (Holmberg, 2003), Heilig believed that the logical extension of cinema was to provide the audience a first person experience of film using all their senses – "Open your eyes, listen, smell, and feel—sense the world in all its magnificent colors, depth, sounds, odors, and textures—this is the cinema of the future!" (Packer & Jordan, 2002, p. 246).
[[image:Morton_Heilig_Sensorama_Simulator_018.jpg]]
Figure 18. Morton Heilig, Sensorama Simulator, U.S. Patent #3050870, 1962
Heilig developed and patented the Sensorama Simulator (Figure 18) in 1962. The Sensorama was a single person simulator that offered the viewer a multi-sensory, fully immersive theatre. The viewer could sit and watch a short three-dimensional stereoscopic movie that included stereo sound, an odour generator, force-feedback handlebars, chair motion and wind on the viewer's face (Rheingold, 1992). Heilig believed that the Sensorama Simulator could be the next generation of theatre, placed in hotels, lobbies or any small space that could fit his miniature theatre (Heilig, 1955, p. 345).
Heilig also recognised that the Sensorama Simulator offered training and learning potential for educational and industrial institutions (Rheingold, 1992, p. 58), but unfortunately the Sensorama Simulator never took off; it arrived at "a time when the business community couldn't figure out what to do with it" (Laurel, 1991, p. 52). This may have been different a decade later, when Pong kicked off the arcade game industry and when education, industry and government saw great potential in investing in virtual world technology, as they did with the Head Mounted Display (HMD).
====2.8.2.3 Head-Mounted Display====
In 1968 Ivan Sutherland presented the first computerised graphical HMD (Figure 19) (Sutherland, 1968)[13]. The HMD had a cathode ray tube (CRT) for each eye, presenting a simple three-dimensional wire-frame view of a room with motion tracking as the viewer moved their head. It became known as 'The Sword of Damocles', after the Greek legend of a man placed in a precarious position of luxury with a sword suspended above his head (Oxford Dictionary, 1989): similarly, the HMD had a computer suspended above the user's head, attached by a mechanical arm (Figure 19, right) (Carlson, 2003).
[[image:HUD_The_Sword_of_Damocles_019.jpg]]
Figure 19. Head Mounted Display first called The Sword of Damocles (Sutherland,1968)
The HMD was a significant milestone in the development of virtual reality technology, which has since been used in a variety of virtual world applications. It holds advantages over a traditional computer monitor, such as tracking of total head and body movement, uninterrupted viewing in fully immersive HMDs, and simultaneous viewing of real world and virtual world artefacts in 'see-through' HMDs, sometimes called augmented reality displays (Rolland & Hua, 2005).
Today's HMDs are more compact than Sutherland's 1960s prototype (Figure 20). The figure shows, on the left, a see-through HMD used for mixed reality environments similar to that designed by Sutherland and, on the right, an immersive HMD compatible with several online and gaming virtual worlds.
[[image:HUD_See_Through_and_Immersive_020.jpg]]
Figure 20. Today's Head Mounted Displays - Left: See-Through HMD - Right: Immersive HMD
===2.8.3 Early Graphical Computer Games===
Computer games have had a large influence on the evolution of virtual worlds, both in the development and in the use of the technology. The contributions of games include computational game theory, 2D and 3D graphics, social modelling, simulation, strategies for achieving presence, artificial intelligence, computational game physics and, possibly most significantly, the delivery of a massive consumer market to fund and drive the investment needed for innovation and technological improvement. By far the majority of today's online virtual worlds were conceived and/or delivered as games; they have subsequently evolved into general business or training platforms, which are sometimes referred to as Serious Games (Annetta, Murray, Laird, Bohr, & Park, 2006).
The early computer games can be traced to a few innovative applications (Figure 21):
*'''Tennis for Two''': In 1958 William Higinbotham developed the first electronic game simulator, using an oscilloscope display that presented a two-dimensional side view of a tennis court. It was a two player game in which each player could control the direction of the bouncing ball by turning a knob on a hand-held device. Originally developed by Higinbotham to occupy visitors to Brookhaven National Laboratory during open days, the game had queues of people waiting to play (Brookhaven National Laboratory, n.d.). Tennis for Two introduced the concepts of a shared multi-player electronic game experience, a rule-based environment managed by a machine, and an electronic space where the actions of one player in the shared space affected the experience of another. The attention the game attracted demonstrated the willingness of participants to accept the visual and sensory limitations of a machine-managed game environment and immerse themselves in the experience.
*'''Spacewar!''': The idea originated in 1961 with Steve Russell at the Massachusetts Institute of Technology (MIT), and by 1962 the game was released with assistance from his colleagues. Spacewar! was the first official release of a two-dimensional computer game.[14] It was a two player game in which each player piloted a spaceship, firing bullets at the other while avoiding being pulled into the sun at the centre of the screen. Developed originally to demonstrate the power of the new PDP-1 computer, the game was a good demonstration of both the graphics capabilities and the processing power of the machine (Computer History Museum, n.d.; Markowitz, 2000). Later, in 1969, Rick Blomme modified the game to run on PLATO, which made it the first game to be networked (Koster, 2002; Mulligan, 2002). While Tennis for Two was the first multiplayer electronic game, Spacewar! was the first computer-based multiplayer game. It thus contributed the same key concepts and ideas as Tennis for Two, only for the first time in a computer-managed environment.
*'''Maze War''': In 1973-1974 Steve Colley developed the first three-dimensional 'first person shooter' (FPS) game, Maze War, at NASA Ames Research Center. A player would navigate around a maze searching for other players to shoot. As seen below (top right), the player had a first person view (the eyeball in this picture is the other player). Placing the player 'in-world' as a part of the game is a significant concept of virtual world games. Maze War also provided other innovations now common to virtual worlds, such as instant messaging, levelling and non-player robot characters (Damer, 2007). The game, which started as a two player game, was eventually connected to ARPANET (the forerunner of our current internet network technology), allowing several users in remote locations to play and interact (Colley, n.d.; Damer, 2004). Maze War can therefore lay claim to being the progenitor of virtual worlds, but not an actual virtual world, because of its lack of persistence.
[[image:Early_Computer_Games_1958_To_1974_021.jpg]]
Figure 21. Early Computer Games 1958 - 1974
*'''DOOM (1993) (II, 1994)''': This 3D FPS game was influential on both a conceptual and a technical level (Friedl, 2002; Mulligan, 2000). In DOOM the concept of Maze War was re-implemented in a much more graphically rich 3D environment. Although only a single player game, its key innovation of relevance was the method used to manage the rendering of the 3D space, allowing multiple non-player characters to participate in the 3D environment with the player. The strategy adopted was essentially to divide the world into many small rooms surrounded on all sides by walls (essentially a cave system); by rendering only a single room at a time, the entire resources of the computer could be devoted to a known, confined rendering space, thus achieving the illusion of a highly detailed rendering with the limited computational resources available on the PCs of the day. Although higher quality 3D rendered games were available some seven years earlier on Amiga computers from 1986 (including some utilising real-time ray tracing technology), these relied on dedicated proprietary graphics hardware and did not provide a 3D space management paradigm that could be easily translated to the future demands of online 3D games. The DOOM model could, precisely because it was architected for the graphically and processor-challenged general home PCs of the day rather than for proprietary games machines such as the Amiga. The DOOM games engine was utilised in many subsequent games and later formed the basis for the model adopted for the online game Quake (Petrich, n.d.; Wikipedia Doom, 2008).
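The room-at-a-time strategy can be sketched in a few lines. This is a minimal illustration under assumed names (Room and render_frame are hypothetical, not part of any actual engine); the point is simply that, because only the occupied room is handed to the renderer each frame, rendering cost depends on the contents of one room rather than on the size of the whole world:

```python
class Room:
    """One confined, wall-bounded region of the world."""
    def __init__(self, name, detail_objects):
        self.name = name
        self.detail_objects = detail_objects  # walls, monsters, items...
        self.exits = {}                       # direction -> neighbouring Room

def render_frame(player_room):
    """Render only the player's current room; neighbours are ignored entirely."""
    return [f"draw {obj} in {player_room.name}" for obj in player_room.detail_objects]

# Two rooms joined by a door: the frame cost for a player in the hall
# is two draw calls, no matter how detailed the cave next door is.
hall = Room("hall", ["wall", "imp"])
cave = Room("cave", ["wall", "barrel", "demon"])
hall.exits["north"] = cave
cave.exits["south"] = hall

print(render_frame(hall))  # -> ['draw wall in hall', 'draw imp in hall']
```

The same confinement also simplifies multi-user management: non-player characters (or other players) only need to be simulated in detail when they share the player's room.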
Around the time of DOOM the games industry realised the importance of connecting people together for online gaming. Seeing the opportunity, developers started adding modem and LAN play, and later TCP/IP functionality, to their games, allowing both single and multiplayer connectivity. Early games allowed up to 4 players, but today's games can have up to 64 players in a single game session (Quake Wars[15]). Some of the better known brand names included:
*'''Quake''' (1996, a multiplayer extension of DOOM) saw over 80,000 people connected to 10,000+ simultaneous game sessions (Mulligan, 2000).
*'''Warcraft''' (1994) (II, 1995), which would eventually become the basis of the largest MMORPG today, World of Warcraft (2004), which now has over 11 million subscribed users (Blizzard Entertainment Inc, 2008).
===2.8.4 Text Based Virtual Worlds===
====2.8.4.1 Text Virtual Worlds: MUDs====
In 1978 the first MUD (Multi User Dungeon) outside of the PLATO system (discussed under Training and Simulators) was created by Roy Trubshaw, a Computer Science undergraduate at Essex University in England (shortly afterwards joined by Richard Bartle). A text based virtual world, coined a MUD by Bartle, it was based upon Robert E Howard's (1932) fictional tale 'The Phoenix on the Sword'. MUD1[16] was an adventure role playing game, with game levelling and chat rooms, which allowed up to 32 players to connect simultaneously over a remote connection (Figure 22) (Bartle, 2003).
[[image:Bartle_The_First_MUD_022.jpg]]
Figure 22. The First MUD: Roy Trubshaw and Richard Bartle (1978)
Early in the game's history Essex University, on whose computers the game was hosted, became a part of ARPANET (the forerunner of the internet), and soon after MUD was distributed through that network and being played at universities throughout the world. Some of these institutions were also open for public access. Although copyrighted, many variations of MUD1 were made and distributed freely, from what Bartle (2003) describes as either player inspiration or pure frustration with the 32 player limitation, which made it impossible to play when dial-in lines were fully allocated.
Keegan (1997) identifies two main classifications of MUDs developed during this time (Figure 23) - the Essex MUDs (Trubshaw and Bartle's) and Scepter of Goth (1978). Unfortunately Scepter died an early death: the game was sold and soon afterwards passed on to the creditors when the purchasing company ran out of money (Bartle, 2003). Most MUDs were therefore based upon the ideas and technical structure of Trubshaw and Bartle's MUD (Bartle, 2003; Keegan, 1997).
[[image:Basic_MUD_Tree_Structure_023.jpg]]
Figure 23. Basic Tree Structure for MUD classification
MUD1 introduced a number of concepts retained by most of today's virtual worlds. Among them are:
*The role and effectiveness of text based narrative and text communication that contributed to, rather than detracted from, the sense of presence.
*Persistence in game play.
*Shared game space and cooperative (team based) activity.
*Non-player artificial intelligences, called AIs (or non-player characters), as part of the experience.
*Region based environment management.
*Role-playing as a central game theme.
*Characters and avatars (albeit text based in the early MUDs).
*Game defined goals but player implemented plots.
Region based environment management is a computational aid that warrants particular attention. It was also used by the DOOM 3D graphics engine to manage multi-user environments, allowing the computer to render the shared space one discrete region at a time. In DOOM this region was a room; in MUD1 it was a cave; in more recent virtual worlds it may be as much as a 65,000 sqm area (Second Life). This strategy provides a method of scaling virtual worlds to many regions by distributing the region management across many discrete servers, but imposes practical limits on the number of players that can be present in any given region at an instant in time (Hu & Liao, 2004).
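The region based strategy can be sketched as follows. This is a minimal illustration with assumed names and parameters (REGION_SIZE, PLAYER_CAP and the hash-based server assignment are hypothetical simplifications, not the scheme of any particular virtual world): avatar coordinates are mapped to a discrete region, regions are distributed across servers, and a per-region occupancy cap models the practical player limit noted above:

```python
REGION_SIZE = 256   # metres per region side (assumed tile size)
PLAYER_CAP = 100    # assumed practical per-region player limit

def region_of(x, y):
    """Map world coordinates to a discrete region key."""
    return (int(x) // REGION_SIZE, int(y) // REGION_SIZE)

def server_for(region, server_count):
    """Distribute regions across servers (a simple hash assignment)."""
    return hash(region) % server_count

# Group avatars by the region their coordinates fall into.
avatars = {"alice": (10, 20), "bob": (300, 20), "carol": (40, 30)}
by_region = {}
for name, (x, y) in avatars.items():
    by_region.setdefault(region_of(x, y), []).append(name)

# alice and carol share region (0, 0); bob, at x=300, falls in (1, 0),
# which may be simulated by a different server entirely.
for region, occupants in sorted(by_region.items()):
    assert len(occupants) <= PLAYER_CAP  # the per-region cap is enforced here
    print(region, occupants, "-> server", server_for(region, 4))
```

Because each region is owned by exactly one server, the world scales out by adding servers, while the per-region cap reflects the limit on how many players one server can simulate in a single shared space at once.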
MUD1 had a significant impact on virtual world design and development, dominating the online game space until the mid 1990s; MUD1 is therefore often marked as the beginning of the first generation of online virtual worlds (Bartle, 2003). MUD1 can still be played online today at british-legends.com (CompuServe, 2007).
====2.8.4.2 ASCII Virtual Worlds====
In the early 1980s pseudo-graphical interfaces were added to some MUDs in the form of ASCII virtual worlds. ASCII (American Standard Code for Information Interchange) is the most widely adopted character encoding on western computer systems. ASCII virtual worlds provided a pseudo-graphical display, making use of shape symbols and character positioning escape sequences to create crude planar maps of the terrain (dungeon) environment. The maps enhanced the description of the room provided by the text.
ASCII pseudo-graphical virtual worlds provided the player with a view of the world improved over the simple text prompt and description of MUDs. An example of an ASCII game, Islands of Kesmai (IOK), can be seen below (Figure 24). Developed in 1982 and released in 1984, the game provided the player with a third-person, overhead view of the world. Walls were denoted by [], fire by **, and the players by letters (Bartle, 1990). IOK was CompuServe's (a US ISP) best selling game, with players paying up to $12.50 per hour to play (based upon connection time, not game played) and usually between 10-60 players online simultaneously (Bartle, 1990). Other ASCII games around this time were MegaWars I & MegaWars III (1983), NetHack (1987 (O'Donnell, 2003)), Sniper! and The Spy (Bartle, 1990).
[[image:RPG_Islands_Of_Kesmai_024.jpg]]
Figure 24. Islands of Kesmai ASCII Text Role Playing Game (1982-84)
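The IOK-style display described above can be sketched as a simple character-grid renderer. This is a hypothetical reconstruction for illustration only (the layout rules are assumed, not taken from the actual game): walls are drawn as [], fire as **, and players as letters, composed into an overhead planar map:

```python
WALL, FIRE, FLOOR = "[]", "**", "  "

def render_map(width, height, fires, players):
    """Return an overhead ASCII map; border cells are walls.

    fires is a set of (x, y) cells; players maps (x, y) -> a letter.
    Each cell is two characters wide so tiles line up in a text grid.
    """
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            if x in (0, width - 1) or y in (0, height - 1):
                row.append(WALL)
            elif (x, y) in fires:
                row.append(FIRE)
            elif (x, y) in players:
                row.append(players[(x, y)] + " ")
            else:
                row.append(FLOOR)
        rows.append("".join(row))
    return "\n".join(rows)

print(render_map(5, 4, fires={(2, 1)}, players={(1, 2): "A", (3, 2): "B"}))
# [][][][][]
# []  **  []
# []A   B []
# [][][][][]
```

Everything is plain text, so a display like this could be driven over a slow dial-up line with simple character positioning, which is precisely what made such pseudo-graphics practical in the early 1980s.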
By the mid to late 1980s home computing and online networking service providers opened the gates to a huge expansion of online virtual worlds. People paid for networking services by the hour, which gave these providers a huge incentive to get their subscribers hooked on virtual worlds. There was big money to be made, with 70% of the revenue of one provider (Genie) in the early 1990s coming from games. By 1993 a study showed that 10% of traffic on the NSFNET backbone (a precursor to the internet, consisting mainly of government and university networks) belonged to MUDs (Bartle, 2003).
===2.8.5 Graphical Virtual Worlds===
The text based MUDs evolved into two different streams: the 3D First Person Shooters, such as DOOM and Quake, which adopted the room-at-a-time view of the world for 3D rendering, and the 2D graphical online virtual worlds that appeared in the early 1990s. Early examples include NeverWinter Nights (1991-1997), Shadow of Yserbius (1992-1996) and Kingdom of Drakkar (1992-Current) (Figure 25).
[[image:Graphical_2D_Virtual_Worlds_025.jpg]]
Figure 25. Graphical 2D Virtual Worlds
Unlike Habitat and Worldsaway (discussed under Social Networking Virtual Worlds), which predated these games, appearing in the mid-1980s, the graphically enhanced text based games were fantasy role playing games -- basically MUDs with graphics. Although 2D, some of these games were displayed isometrically at an angle, which gave the player an illusion of a three-dimensional view; for this reason these games are sometimes referred to as 2½D worlds (Bartle, 2003). These games used more sophisticated graphics (than the pseudo-graphical solutions) to improve the sense of presence experienced by the players, while retaining the text based narrative.
By the mid 1990s, with nearly 10 million internet hosts (Figure 26) (Slater III, 2002; Zakon, 2006) and price wars between providers, the internet opened the doors to millions, bringing hordes of inexpert computer users wanting to play games (Bartle, 2003). Game design had improved along with the graphical elements of virtual worlds, while graphics rendering capabilities on standard PCs and the emergence of common graphics file standards made the development of virtual worlds possible, practical and more economical.
[[image:InternetParticipatingHosts_Count_1990_to_1998_026.jpg]]
Figure 26. The Internet No. of Participating Hosts Oct. ‘90 - Apr. ‘98
====2.8.5.1 MMORPGs====
By the mid 1990s the first 3D virtual world came online: Meridian 59 (1996-2000 & 2002-Current), although technically it used a pseudo-3D graphics engine (Axon, 2008; Bartle, 2003), providing a first person view in which the player could view all angles of the environment (Figure 27). It marked the beginning of a new era of virtual worlds, with a massive 25,000 people signing up for the beta release (Axon, 2008). Unfortunately it met with limited commercial success (Bartle, 2003; Friedl, 2002) and was shut down in 2000, but it was resurrected in 2002, with the updated version online today at meridian59.neardeathstudios.com.
[[image:Meridian_59_First_3D_Online_Virtual_World_027.jpg ]]
Figure 27. Meridian 59 First 3D Online Virtual World (1996)
The turning point for online virtual worlds was Ultima Online (1997-Current). Ultima had already met with success through the Ultima computer game series. With its online launch it gained 50,000 subscribers within 3 months and was the first online virtual world to crack the 100,000 subscriber threshold within 12 months of release (which it did in under 6 months) (Bartle, 2003; Woodcock, 2008). This added a new dimension to the term multiplayer, in what has now come to be known as a Massively Multiplayer Online Role Playing Game, or MMORPG. Subscriptions peaked at 250,000 in 2003, with 75,000 being reported in December 2007 (Woodcock, 2008).
Ultima Online, a 2½D graphical virtual world, has remained visually much the same (Figure 28), although the client that runs the game (the same concept as a web browser) had a makeover in 2007 with Kingdom Reborn (right). The game has received regular expansions to its world, providing new challenges and adventures for its players. In 2001 the client was upgraded to 3D (Wikipedia Ultima, 2008), but Electronic Arts has since announced that it will be de-supporting the 3D client, continuing only to support the 2D client going forward (Electronic Arts, 2007).
[[image:Ultima_Online_028.jpg]]
Figure 28. Ultima Online (1997-Current)
Other MMORPGs that started around the mid to late 1990s, and which can still be played online today, are Furcadia (1996, longest running), The Realm (1996, second longest, 15 days behind Furcadia), Lineage (1998), EverQuest (1999) and Asheron's Call (1999).
In the more recent MMORPGs of today, not much has changed in game design from the original RPGs, but technically they have improved and provide much better graphics for the player (Figure 29). They have also increased substantially in popularity, with the largest subscription based MMORPG, World of Warcraft, recently climbing to over 11 million players (Blizzard Entertainment Inc, 2008). These players do not all play in one virtual world, however; they are separated into different realms - the same game, but with different people. This contrasts with the social virtual worlds like Second Life, where all the users share one virtual world. In the next section we discuss social online virtual worlds, which, although a MMORPG can exist within them (as mentioned earlier), have a model of a virtual world very different from that of the dedicated MMORPGs.
[[image:MMOZRG_Eve_and_WOW_029.jpg]]
Figure 29. MMORPGs: Eve & World of Warcraft
====2.8.5.2 Social Virtual Worlds====
The first attempt at a commercial large scale multi-user game was made by George Lucas's Lucasfilm Games. Habitat, developed by Chip Morningstar and Randall Farmer, began development in 1985 (McLellan, 2004; Ray, 2008; Slator et al., 2007). Habitat was built to support thousands of simultaneous users, to run on the Commodore 64 home computer, and to be distributed via the Quantum Link network service provider (later known as AOL). Inspired by the science fiction novel 'True Names' (Vinge, 1981), the world contained a fully-fledged economy where citizens of the world could own a virtual business, build a house, fall in love, get married and even establish their own self-governing laws (Morningstar & Farmer, 1990). Habitat, a 2D graphical world, looked similar to a cartoon (Figure 30, left), with the avatar (digital self) taking a third person view of the world. The storyline was based upon life rather than the fictional storylines of the MUDs, which placed greater emphasis on the social aspect of the world. Lucasfilm's Habitat was first released as a pilot in 1986, then in 1988 as Club Caribe in North America, which reportedly sustained a population of 15,000 participants by 1990 (Morningstar & Farmer, 1990). In 1990 it was released in Japan as Fujitsu Habitat and, after extensive modifications, it was released again in 1995 as WorldsAway (Figure 30, right) (Damer, 2007) and again as Dreamscape in 2008.
[[image:VW_Habitat_and_Worldsaway_030.jpg]]
Figure 30. Habitat (86) First Graphical Virtual World Precursor to Worldsaway (95)
Habitat introduced some key concepts in virtual worlds:
*The term ‘Avatar’ into the general virtual world community;
*The idea of focussing on social networking as a key form of game play;
*An economy where people could trade both in world currency and artefacts; and
*Most importantly, the concept of living in a virtual world and leading an alternate life that was not dictated by the rules of a game (as it is in the dedicated MMORPG environments).
More recent social networking virtual worlds include Active Worlds (1995, 1997-current)[17], Second Life (2003-current) and There (2003-current) (Figure 31) – all of which have achieved a significant volume of educational interest as platforms for delivery of learning. The generalised nature of the social networking sites means that they tend to be more diverse in the range of facilities provided and the purposes to which they can be applied than the role playing game systems. They have generally provided participants with some form of content creation tools including the importing and/or exporting of non-virtual world artefacts. In the next section we discuss further the aspect of education in virtual worlds.
[[image:VW_SecondLife_and_There_031.jpg]]
Figure 31. Social Virtual Worlds: Second Life & There
===2.8.6 Simulation and Learning Systems===
====2.8.6.1 PLATO====
PLATO (Programmed Logic for Automated Teaching Operations) was a system designed for computer based education at the University of Illinois, started in the early 1960s. Originally developed as a classroom course system (Figure 32), improvements in mainframe technology saw it supporting up to a thousand simultaneous online users by 1972, making it the first public online community, featuring electronic course delivery, online chat, bulletin boards, 512 x 512 resolution monitors and 1200 baud connection speeds (Unger, 1979; Woolley, 1994). With over 15,000 hours of instructional development, PLATO was possibly the largest ever investment in educational technology (Garson, 2000).
[[image:PLATO_Lab_Image032.jpg]]
Figure 32. University of Illinois PLATO Lab & Terminal (1961-2006)
By the mid 1970s games had made their way onto the university mainframes with great success. Between 1978 and May 1985 about 20% of the time spent on PLATO was game usage (Woolley, 1994). Games appeared such as Spacewar! (1969, the game discussed earlier), Empire (1973, a multi-user space shooter based upon Star Trek), DND (1974, a MUD[18] based upon the game Dungeons and Dragons), Mines of Moria (1974, a MUD with 248 mazes based upon Tolkien's Lord of the Rings), SPASIM (1974, a 32-player multi-user FPS spaceship game)[19], Airfight (1974-75, a 3D flight simulator and precursor to Microsoft's Flight Simulator), Oubliette (1977, a first person 3D MUD) and Avatar (1977-79, a first person 3D MUD) (Bartle, 2003; Lowood, 2008; Pellett; Wikipedia, 2008b; Woolley, 1994). See below (Figure 33) for some examples of MUDs held on PLATO. Many of the games on PLATO were recreated for commercial use as arcade or personal computer games (Goldberg, 2002; Mulligan, 2002; Woolley, 1994).
[[image:PLATO_Popular_MUD_Games_Developed_For_PLATO_033.jpg]]
Figure 33. PLATO: Some Popular MUD Games Developed for use on PLATO (1974-1979)
By 1985, after going commercial, PLATO had established a system spanning over 100 campuses worldwide (Garson, 2000). Known as the 'ultimate electronic information and communication utility', offering over 200,000 hours of courseware (Figure 34) with local dial-up at 300 or 1200 baud, access to both social and educational contacts was among the many advances that made PLATO an attractive system for the academic community at large (Small & Small, 1984). Over time, with improvements in technology and the cost of maintaining old technology, the final PLATO system was turned off in 2006 (Wikipedia, 2008b).
[[image:PLATO_Online_Course_Count_1984_034.jpg]]
Figure 34. PLATO: Over 200,000 hours of online courseware by 1984
A web site for the preservation of PLATO has been established at cyber1.org (VCampus Corporation, 2008), which holds many of PLATO's games and courseware for public download.
====2.8.6.2 SIMNET====
Military virtual world simulators started with a project called SIMNET (SIMulator NETworking). SIMNET was a DARPA project that enabled the first large scale real-time networked battlefield simulator. Development and implementation occurred on several levels between 1983 and 1990 (Cosby, 1999; Miller & Thorpe, 1995).
Prior to SIMNET, military simulators consisted of immersive virtual reality training devices such as cockpit simulators. Cockpit simulators offered a replicated environment of the 'real thing': for example, an aeroplane cabin would be built in its entirety, with motion and sensory feedback, using pre-programmed software to produce repetitive simulations that provided an individual with mastery skills such as low-to-ground dog-fighting or missile avoidance (Miller & Thorpe, 1995). SIMNET provided a cheaper alternative to the cockpit simulators for certain types of training, and further offered training in 'collective skills', which Miller and Thorpe (1995) define as cohesive team operations skills, as distinguished from the individual mastery skills taught in cockpit simulators.
SIMNET, a multiuser virtual world (Figure 35), consisted of real battlegrounds with manned vehicles (tanks and helicopters), command posts, semi-automated forces in which a single operator could control many vehicles in the simulation, and the ability to record simulations from any viewpoint (known as the flying carpet) so that they could be replayed, statistically analysed and reported upon. At the conclusion of the program there were 250 simulators operating in nine locations (4 of which were in Europe), providing real-time battle engagements directly under the control of the participants (Lenoir, 2003; Miller & Thorpe, 1995).
[[image:SIMNET_Battlefield_Simulator_035.jpg]]
Figure 35. SIMNET: Battlefield Simulator at Fort Knox USA (1983-1990)
SIMNET had a substantial impact on military training after being recognised as the key success factor in winning the 3 day 'Battle of 73 Easting' in the Gulf War (1991), which led to several projects based upon the SIMNET technology (Figure 36) (Foley & Gifford, 2002), with the US government committing $2,549 million in 1997 to modelling and simulation projects (Lenoir, 2003).
[[image:US_Military_Networked_Simlator_Projects_1938_To_2001_036.jpg]]
Figure 36. Timeline of US Military Network Modelling and Simulator Projects (1983-2001)
In 1997 a project named Synthetic Theater of War (SToW) commenced: a program to construct an environment combining various simulators into one large-scale distributed battle simulator capable of involving thousands of participants (Budge, Strini, Dehncke, & Hunt, 1998; Tiernan, 1996). This project has since become Joint Semi-Automated Forces (JSAF) (Hardy et al., 2001), which now enables more than 100,000 simultaneous simulations at a time (US Joint Forces Command, 2008). The Australian military has also adopted the JSAF platform to build its own Course Of Action Simulation (COA-Sim) for joint military operations training, exercises and planning (Carless, 2006; Gabrisch & Burgess, 2005).
====2.8.6.3 Military Use of Commercial Games Engines & The America’s Army====
In 1996, General Krulak of the US Marines tasked the Marine Combat Development Command to explore and approve the use of commercial games engines for military training purposes. One outcome of this effort was the collaboratively developed Marine Doom, based on id Software’s shareware Doom engine and Doom level editor. The simulation could be configured for special missions (such as hostage rescue) immediately prior to engagement and used to rehearse the planned mission (Lenoir, 2003).
In July of 2002 the US Military released a milestone in multi-user training game simulators in the form of America’s Army: Operations (Lenoir, 2003; Zyda, 2005). Based on Epic Games’ ‘Unreal’ games engine, the game created a virtual world that reproduced aspects of a career in the US Army, from ‘boot-camp’ commencement and weapons and tactical training through to various operations scenarios. Although originally developed and released as a recruitment tool, the game was also claimed to have been utilised to improve training outcomes by army instructors at Fort Benning (Zyda, 2005).
Now, with 26 subsequent releases (as of 2008) and available for the PC, cell phone and Xbox, the game has more than 9 million registered users exploring entry level to advanced training, and operations in small units (Figure 37). Beyond a focus on realism that extends to accurate tree placement in training courses at the simulated training camps, the game adds a further dimension of presence for participants through the active involvement of current and former real-world soldiers as players in the game (designated with a star icon in player profiles), interacting with non-military participants (Department of the Army, 2008).
[[image:Americas_Army_037.jpg]]
Figure 37 America's Army (2002)
From a training perspective, anecdotal evidence from army trainers regarding the game is that sessions in training scenarios such as the firing range or obstacle courses improve subsequent results in the real-life versions of these activities (Zyda, 2005). The US Army, possibly one of the largest investors in virtual world game technology, recently announced plans to spend $50 million USD over the next 5 years to create 70 gaming systems in 53 locations around the world for combat training (Robson, 2008).
==2.9 Virtual Worlds for Education==
===2.9.1 Architecture Considerations===
====2.9.1.1 Introduction====
To properly appreciate the discussion of the literature examining educational directions in virtual worlds, the researcher provides a brief overview of the key architectural differences to assist the reader. This material is based on the researcher’s examination of a variety of game environments and virtual worlds, and on discussions with experienced and knowledgeable users of these environments, rather than sourced from the work of other authors. As such the discussion is interpretive rather than authoritative.
Some of these environments have existed for only a few years, and have not yet enjoyed a comparative analysis undertaken by the academic community. As such, this discussion might not normally reside in the literature review, but it is felt that the placement of this discussion in this sub-section will assist the reader in better appreciating the issues explored in the literature discussion throughout the remainder of the section.
====2.9.1.2 Considerations of Operational Design====
While all of today’s major virtual worlds include capabilities for user interaction, sharing of the environment, persistence, avatars, business rules, and streamed audio and text, there are substantial differences in the technologies used to deliver the virtual experience. While some of these differences may create only marginal differences in the world experience of the casual user, from the perspective of the educator and content creator the differences are substantial.
The major offerings can be viewed under the following groups (note: in each category the researcher has selected only a few example worlds, in most cases other options also exist):
#Proprietary closed engine (e.g. World of Warcraft, Everquest)
#Client resident closed content and world model with open engine (e.g. Shareware Doom)
#Streamed (or semi streamed) closed content and world model with closed engine (Entropia Universe)
#Open client resident content and world model with closed engine (Flight Simulator X, America’s Army, Unreal games, Quake, Doom)
#Open streamed content and world model (HiPiHi, TruePlay, Active Worlds)
#Open streamed content and world model with out-of-world interfaces (Second Life V1, VastPark)
#Open streamed content and world model with out-of-world interfaces and open client (Second Life V1.2)
#Open streamed content and world model with out-of-world interfaces, open client and open server (DeepSim)
'''Architectural Components and Implications in Education'''
Below are some of the architectural components and their implications for the structure of a virtual education environment.
{| border="1"
|'''Architectural Components'''
|'''Implications in Education'''
|-
|Closed Proprietary System
|A closed proprietary system cannot generally be altered. These systems are generally not appropriate for education purposes unless the existing virtual world itself is built for the purpose of the training (such as a purpose built simulator). Closed systems can still be used in education for group interaction and discussions, but not for lectures or anything requiring more than text or audio (assuming the system supports group audio communications).
|-
|Closed or Open Environment
|Whether the content and world model is closed or open determines whether the textures, objects and artefacts of the world can be modified or created by users. This ability is essential if the world is to be utilised in education as anything more than a 3D discussion forum.
|-
|World Content
|Whether the content and world model is client resident or streamed goes to the complexity of distributing course content, and the dynamics available in delivery. If the content is streamed, it can be changed in real time, but will usually require a high speed internet connection. Systems supporting streamed content generally also include the tools for developing some, if not all, of the streamable content. If the content is client resident, a slower internet connection will generally suffice, but the content must be centrally published, distributed to client systems and installed locally prior to use. It cannot be changed in real time, and content production will not generally be supported directly in the virtual world tool set, often requiring advanced 3D modelling skills in dedicated 3D modelling environments.
|-
|World Interfaces
|The existence of out-of-world interfaces goes to whether content from other sources, such as internet web pages, audio or video, can be streamed into the world and integrated with the world content and model. Systems providing this capability with streamable open content offer the greatest potential for inexpensive production of course material and distribution of that material to students.
|-
|Client / Server Engine
|Whether the client or server engine is open or closed goes to whether the hosting software itself can be modified. Generally this should not be necessary for education if the capabilities of the engines driving the world are otherwise sufficient. Where the content and world are otherwise closed but the engines are open, the existing content and world could be replaced by interfacing the games engine to a new world with new content.
|}
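The streamed versus client resident distinction drawn in the table above can be illustrated with a small sketch. The following Python code is a minimal illustration by the researcher (the class and method names are invented for this sketch, not any platform's API) of why streamed content can be changed in real time while client resident content requires a publish-and-install cycle:

```python
# Hedged sketch of the two content-distribution models; names illustrative.
class StreamedWorld:
    """Content lives on the server and is drawn down live by each client."""
    def __init__(self):
        self.server_content = {"lecture": "v1 slides"}

    def publish(self, name, data):
        # The educator edits in place; every client sees it immediately.
        self.server_content[name] = data

    def client_view(self, name):
        return self.server_content[name]   # fetched live on each access


class ClientResidentWorld:
    """Content is packaged and installed on each client before use."""
    def __init__(self):
        self.installed = {"lecture": "v1 slides"}   # the local copy
        self.pending_release = {}

    def publish(self, name, data):
        # Changes only queue up for the next distributed release ...
        self.pending_release[name] = data

    def install_update(self):
        # ... and reach the client only when the package is installed.
        self.installed.update(self.pending_release)
        self.pending_release.clear()

    def client_view(self, name):
        return self.installed[name]


streamed, resident = StreamedWorld(), ClientResidentWorld()
for world in (streamed, resident):
    world.publish("lecture", "v2 slides")

# The streamed client sees the change at once; the resident client
# still shows the old material until the new package is installed.
print(streamed.client_view("lecture"), "|", resident.client_view("lecture"))
resident.install_update()
print(resident.client_view("lecture"))
```

The sketch makes concrete the table's point that streamed delivery trades bandwidth for dynamism, while client resident delivery trades dynamism for lighter connection requirements.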
====2.9.1.3 Options for Content Modification====
The ability to modify the content of a virtual world is essential if the educator is to deliver course content in-world beyond an interactive discussion or monologue.
There are essentially three ways content can be modified by end-users in current virtual world environments (as opposed to systems providers or publishers) depending on the operational design of the environment:
#'''Level Editor''' (eg: Doom, Half Life, America’s Army, Flight Simulator). Applicable to client resident worlds (i.e. systems where the world is stored on each client computer and distributed as a separately published download). A level editor is a content editing tool that allows an entire simulation to be created, including the world model, textures, characters, behaviours, etc. Level editors usually support the importation of textures, animations, etc into the ‘level’, and then distribution of the entire level to a central server for redistribution to clients.
#'''Client Content Editing Tool''' with import/export (eg: Second Life, Vast Park, etc). For environments where building and content creation is part of the ‘game play’ the client will have a content editor provided. These environments provide a simplified model for constructing shapes and objects (e.g. Second Life’s prims) and some means for importing complex objects such as organic shapes, textures, animations, sound, etc.
#'''Out-of-world interface''' (e.g. Second Life, Active Worlds). Potentially available in both client resident and server resident (streamed) worlds. An out-of-world interface allows some aspect of the user experience while in world to be drawn directly and live from an off-world location such as a web page, internet resident database or streaming SoundCast server.
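The out-of-world interface option above can be sketched in miniature. The following Python sketch is the researcher's illustration only (all names are hypothetical, and the local HTTP server merely stands in for an off-world web source): an in-world display object resolves its content live from an external location, so editing the external source updates the in-world display with no republishing step.

```python
# Hedged sketch of an out-of-world interface; not any vendor's real API.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OffWorldSource(BaseHTTPRequestHandler):
    """Stands in for a web server hosting course material outside the world."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Lecture slide 1: Introduction")

    def log_message(self, *args):
        pass   # silence per-request logging for the sketch

# Start the "off-world" source on an ephemeral local port.
server = HTTPServer(("127.0.0.1", 0), OffWorldSource)
threading.Thread(target=server.serve_forever, daemon=True).start()

class InWorldDisplay:
    """A virtual-world object whose visible text is drawn live from a URL."""
    def __init__(self, url):
        self.url = url

    def refresh(self):
        # Because the content lives off-world, changing the web source
        # changes what students see with no in-world rebuild.
        with urllib.request.urlopen(self.url) as resp:
            return resp.read().decode()

board = InWorldDisplay(f"http://127.0.0.1:{server.server_port}/slide")
content = board.refresh()
print(content)
server.shutdown()
```

The same pattern underlies drawing lecture slides, database records or media streams into a world from ordinary internet infrastructure.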
====2.9.1.4 Implications of differential content capabilities====
Virtual worlds consist of components (objects) and functions that are managed by the virtual world (or game) engine and together constitute the capabilities of the world. Not all worlds have the same object management capabilities built into their engines. For the purposes of this discussion, the range of capabilities will be considered to be:
#'''Terrain''' – the land form or map of the virtual space. Essentially all virtual worlds offer some form of terrain map (although the terrain map may not be ground, but rather simply a 3D space).
#'''Avatars''' – Discussed extensively already, the avatar is the user’s projection into the virtual world and may or may not be customisable.
#'''Structural objects''' – Including buildings, furniture, ornaments, statues, models, etc. These are the virtual world equivalent of objects in the real world. They may or may not be animatable and scriptable. If they are scriptable they may be able to become autonomous agents, depending on the capabilities of the scripting engine.
#'''Textures''' – The visual covering of any object, terrain, or even avatars. The ability to display and upload/import textures is (generally) essential to the ability to display lecture materials like slides, etc (but note the existence of streams as a potential alternative).
#'''Animations''' – An avatar or a non-player character appears to walk, sit, stand, change facial expressions, etc because of the animation it is playing at the time. Without animations an object might move from one point to another, but it will not change its apparent state. The ability to modify animations is advantageous for creating a sense of realism, but not generally essential for the ability to deliver a lecture or every type of simulation. All virtual worlds examined offered some range of built-in animations. Some allow animations to be imported or modified, or strung together to create more complex animations.
#'''Scripts''' – Scripting is the capability to programme the objects and behaviours in the world. In worlds modified by level editors, a programming language is generally provided as part of the level editing environment and ‘compiled into’ the level before it is published and distributed. In user modifiable worlds where scripting is supported (like Second Life), the scripting editor and compiler are provided as part of the client application and scripts are dynamically modifiable. In some architectures the scripts are stored in the objects and distributed with them (and therefore if an object is moved between worlds/simulators, the script and behaviours move with it), whereas in others the scripts are centrally stored and controlled for the world/level and not available outside of the world, level or simulator (as appropriate). Scripts govern the behaviour (movement, animations, actions, sounds, appearance, world responses, inter-object communication, etc) of objects. The capability and simplicity of the scripting engine’s language design is critical to the options available to educators in building a simulation.
#'''Streams''' – Streams include any media that is streamable, such as audio, video, web-page content, etc. The availability of streams is an extension of (or possibly an alternative to) the ability to import textures. From an educational standpoint it represents the ability to deliver video or sound presentations, or to draw lecture materials directly from the internet. Depending on the world engine, stream content may be dynamically published (drawn down to the client as required, as in Second Life) or packaged into the client resident world (as in America’s Army).
#'''Non-player Characters''' (also called Bots, AIs or MOBs – mobile objects) – These are essentially characters that look like avatars but are completely controlled and managed by the engine. They interact with players/avatars in a semi-intelligent manner. Their availability and capability vary significantly across worlds. In Half-Life and America’s Army, the AI capability is available within the engine and has considerable ‘intelligence’, and in some cases the ability to learn and modify behaviour. In other worlds (such as Second Life) they are not directly supported by the virtual world engine at all. The existence of non-player characters can directly affect the type of learning simulation that an educator can build, as they can provide user feedback and a feeling of presence within the environment (if implemented to provide a realistic experience for the user).
#'''Text Communication''' - Text chat (including instant messages, group communication chat, etc) is the standard communication strategy in all worlds. It is always instant and dynamic (in that it does not have to be pre-packaged into the world). It is a functional capability rather than an object, and may or may not be logged or copied depending on the client capabilities.
#'''Multi-way Voice Communication''' – Most virtual worlds do not support voice directly, although this function has been increasingly offered over the last twelve months. Multi-way voice communication enables a group of players to converse as if they were in a conference call, without the necessity to type all communication in text. It differs from streams in that every client can be a sound source to every other client, whereas streams are a one-way communication from a point source to many destination receivers. Clearly the availability of voice communication affects both the type of student and the form of discussion that can be undertaken in a learning situation.
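The event-driven object-script model described in the Scripts item above can be illustrated with a brief sketch. The following Python code is the researcher's illustration only (the events and handler names are hypothetical; this is not LSL or any engine's actual API). It shows how behaviour attached to an object is triggered by events the engine delivers, and, because the handlers live on the object, would travel with the object if it moved between simulators:

```python
# Hedged sketch of an event-driven object script; all names illustrative.
class ScriptedObject:
    """A world object whose behaviour is defined by attached event handlers."""
    def __init__(self, name):
        self.name = name
        self.handlers = {}   # event name -> handler function
        self.log = []        # record of world-visible behaviour

    def on(self, event, handler):
        """Attach a handler, as a script's event block would."""
        self.handlers[event] = handler
        return self

    def dispatch(self, event, *args):
        """The world engine delivers an event to the object's script."""
        if event in self.handlers:
            self.handlers[event](self, *args)


# A door whose behaviour (animation plus a chat response) lives in its
# script rather than in the world engine itself.
door = ScriptedObject("lecture-hall door")
door.on("touch", lambda obj, avatar:
        obj.log.append(f"{obj.name} plays 'open' animation for {avatar}"))
door.on("collision", lambda obj, avatar:
        obj.log.append(f"{obj.name} says: please use the handle, {avatar}"))

door.dispatch("touch", "Student01")
door.dispatch("collision", "Student02")
print(door.log)
```

The simplicity of attaching behaviour this way is essentially what makes script-enabled worlds attractive for educators building interactive simulations.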
In selecting the platform for delivering an educational experience, the extent to which the educator requires any or all of these capabilities within a virtual world will probably influence the decision. Some of these capabilities have only recently become generally available, and others are still in only rudimentary forms. In the literature review that follows, the approaches and content adopted, and the outcomes achieved have necessarily been constrained by capabilities of the technology options available at the time and the architectural constraints of the virtual world used.
===2.9.2 Education Applications in Virtual Worlds===
====2.9.2.1 Introduction====
During the 1970s, 1980s and early 1990s, perhaps the most significant multi-user online environment for education was the PLATO system. From the mid-1990s onwards, the influence of this system waned as it was progressively superseded in user interface capabilities by the emerging 3D online games, social networking systems and custom built virtual worlds for specific subject matter applications.
Today the use of public online virtual worlds is gaining popularity with educators, with a recent special purpose committee of educators (The New Media Consortium & EDUCAUSE, 2007) identifying that virtual worlds will have a significant impact on the future of teaching, learning and creative expression within higher education. In the next section we will discuss some of the research findings on virtual worlds being used for educational purposes.
====2.9.2.2 Education Uses in Virtual Worlds====
Early work in education using text based MUDs showed that they supported constructive knowledge-building communities, offering affordances of coordinated presence with evidence of interactive learning and collaboration across time and space (Dickey, 2003).
The period from the late 1990s until today has been typified by educators experimenting with the potential for mass market games engines (and more recently virtual worlds) to be re-tasked as education environments (Annetta et al., 2006; Beedle & Wright, 2007; Gikas & Van Eck, 2004). In some cases, such as America’s Army, the ‘game’ environment was built with the specific goal of recruitment and training in mind (Zyda, 2005); in others, as with Microsoft’s Flight Simulator, a game evolved over time with the assistance of subject matter experts into an accurate simulation tool for the game’s audience (Lenoir, 2003). In still other cases a games engine (the operating system of a game) has been adapted to create a purpose built learning tool: educators and students at MIT utilised the Neverwinter Nights tools to create a historical game based on a battle in the Revolutionary War, and MIT’s Games-to-Teach Project produced playable prototypes of four games, including Biohazard, developed jointly by MIT and the Entertainment Technology Center at Carnegie Mellon University, which trained emergency workers to deal with a cataclysmic attack (King, 2003).
The early 3D virtual worlds, with their simplistic graphics bearing little resemblance to the real world, provided students with advantages over traditional learning methods whilst fostering collaboration in multiuser virtual worlds. An extensive study of virtual reality technology in education was performed by Youngblut (1998), who examined 35 different research studies in education from 1993-1998 that varied in technology use, subject discipline and age group. Below is an example of VARI House and Virtual Physics, both of which were custom built (Figure 38): VARI House a single user virtual world and Virtual Physics a multiuser virtual world. Although the studies were mainly research based (as opposed to application in course work), the research showed for both single and multi user environments that virtual world technology in many cases surpassed traditional learning methods in areas such as subject matter understanding, memory retention, student collaboration and constructive learning methods. Some obvious disadvantages were technology constraints, cost, development effort and usability (Youngblut, 1998), which for the most part could be attributed to the infancy of the technology, the formative years of computer based learning and the lack of general use of computers by students, which had yet to permeate society as a whole.
[[image:Education_In_Virtual_Worlds_in_1950_to_60_038.jpg]]
Figure 38. Education in Virtual World Mid 1990s
====2.9.2.3 Online Education Uses in Virtual Worlds====
As identified in the architecture considerations section, virtual worlds that are to be used in educational settings must enable content modification if learning is to consist of anything more advanced than an interactive conversation. For the purposes of this research, the researcher has chosen to focus on virtual worlds that support the dynamic delivery or streaming of content (and where the building tools are provided as part of the environment), rather than those worlds where a separate level editor is required and a client resident world model must be installed on the client computer prior to use. The literature surveyed in this sub-section will therefore focus on the work done in two such environments – Active Worlds and Second Life.
=====2.9.2.3.1 Active Worlds=====
Online virtual worlds gave educators access to environments without the cost and complexity of developing their own custom software. One of the first online virtual worlds that made research and development in education feasible (given its architectural qualities) was Active Worlds (1995, 1997). Officially known as Active Worlds Universe because it consists of many worlds, Active Worlds provided educators with the opportunity to rent or buy their own world, allowing restricted access to invited guests, building tools and content management capabilities. Below is a screenshot of Active Worlds (Figure 39). As can be seen, the current client consists of four sections: left – communication and navigation options; right – integrated web browser; bottom – chat window; and middle – 3D environment. This type of client is generally called a “browser” by the environment developers.
[[image:Active_Worlds_Universe_039.jpg]]
Figure 39. Early Online Social Virtual World: Active Worlds Universe
'''Active Worlds Research'''
During the late 1990s to the early 2000s several educational institutions set up a presence in Active Worlds for various projects, from research to actively using Active Worlds as an online learning environment (see Smith, 1999 for a list of virtual learning projects, most of which were in Active Worlds). The early research into online virtual world based education using Active Worlds showed promise.
Dickey (1999, 2003, 2005) undertook research into the viability of Active Worlds being used by geographically distant learners for both formal (a university business computing skills course) and informal courses (an Active Worlds building course). Her research studies showed that the 3D virtual world offered advantages in fostering constructive learning, student and teacher collaboration, visual representation of course context and course content, and student engagement and participation. Some of the disadvantages identified were essentially environment specific and included the lack of support for collaborative activities like a whiteboard or collaborative interactive writing spaces, the chat tool’s single posting word limit, a single shared chat channel providing no separation of teacher/student discussion and no ability for turn taking, and kinetics (animation) constraints such as no hand raising for alerting the attention of the instructor.[20]
Dickey also identified a number of opportunities specifically enabled by a 3D environment. While some of the previously identified advantages (such as collaboration and student management and participation) might be duplicated in other forms of online education tools, the 3D modelling of the course itself (the visual representation of course context and course content) was an advantage specific to the 3D environment.
Course context modelling as provided in Dickey’s research (1999) was a 3D representation that illustrated the structure of the course by the use of individual buildings and plazas (Figure 40). Each building was a topic in the subject, providing resources to aid learning and a meeting place where students could collaborate on group projects around that topic.
[[image:Visual_Course_Structure_in_Virtual_Buildings_040.jpg]]
Figure 40. Visual Representation of Course Structure by the use of Individual Buildings
Course content modelling as provided in Dickey’s research (1999) was a 3D representation that the student had to build in order to understand the concept of the subject material (Figure 41).
[[image:Visual_Represnetation_of_Course_Content_041.jpg]]
Figure 41. Visual Representation of Course Content
These alternative methods provide a good example of the power and adaptability of a 3D modelling environment applied to education. The course context model provided the student a method by which they could visualise the learning objectives and progression of the course. The student had to visit each building within a specific time frame and complete the contained content. The 3D modelling of course content gave the learner multiple viewpoints of the actual subject material, providing interactive learning that was believed to enhance the student’s understanding of the subject topic.
Clark & Maher (2006) looked at the role of place and identity in a 3D virtual learning environment using Active Worlds through analysis of chat logs and the physical locality of avatars within group discussions. They found that a sense of place can be achieved in a 3D virtual learning environment, where identity and presence play a role in establishing the context of the learning place. The students formed a strong bond with their avatars and indicated that they felt a sense of presence, as measured by a series of subjective scales, within the virtual learning environment. Similarly, Dickey (2003) found that the 3D virtual desktop world provided qualities of presence similar to those of an immersive virtual reality world.
=====2.9.2.3.2 Second Life=====
Second Life (started 2003) consists of two worlds: the Second Life Teen Grid and the Second Life Adult Grid. The teen grid provides access to 13-17 year olds and educational instructors. The functionality of the teen grid is the same as the adult grid, with the exception that all content has a PG rating. The Adult Grid is where the universities and colleges for students over 17 years of age are found. Other educational content in Second Life includes an extensive list of museums, galleries, simulations, business product development, role-playing spaces, employee and public business training courses, etc. Similar to Active Worlds, educators are able to rent or purchase land, allow open or closed access to the public, and build and develop on their land.
One major difference between Second Life and Active Worlds is that the former has an in-world economy with built-in functional support enabling the trading of virtual products and services using ‘Linden dollars’, backed by content copyright and duplication controls and augmented by a provider managed exchange where real dollars can be exchanged for Linden dollars (and vice versa). This fundamental difference provides an incentive for content developers and service providers to actively support and expand the world with content, and therefore enables access to a large body of pre-constructed content, or to an entire world-wide industry of content developers at extremely reasonable rates (compared to real world 3D developers providing similar content outside of Second Life) (Joseph, 2007). The building and scripting tools are easier to master than traditional 3D rendering tools, are delivered free as part of every user’s world browser, and are sufficiently powerful that just about anything imaginable can be constructed (Schmidt et al., 2007).
Second Life’s standard interface, as seen below (Figure 42), offers extensive functionality over that of Active Worlds. Some of the more common features, as seen in the figure, are built-in world, content and people search facilities (left); a mini map (top right); an inventory library (bottom right); a local chat channel (with standard ranges of 15, 30 or 60 meters from the text source) and group chat channels (worldwide range, for up to 25 groups per avatar); customisable streaming media players (for sound, video and web page content); an in world or external web HTML browser (linking both in world and outside world content); and private or public multi-player voice facilities.
[[image:Second_Life_042.jpg]]
Figure 42. Online Virtual Social World Second Life (Circa 2008)
Another difference from Active Worlds is avatar control: Second Life avatars can use a roaming camera (whereas Active Worlds only provides first and third person views). The roaming camera enables users to control the camera’s movement around the world with the mouse, without the need to move their avatar. Once mastered, this functionality offers users a powerful tool that provides an easy and fast way to navigate objects (the camera can even pass through objects such as walls).
Due to these and other technological advances over Active Worlds, Second Life has developed a large education community over the last couple of years. For instance, SIMTeach (June, 2008), the Second Life Education Wiki, identifies over 200 educational institutions in Second Life, of which 138 listed are universities, colleges and schools. The Second Life Education (SLED) list server has over 5,000 world-wide members. The New Media Consortium (NMC, a group that hosts education islands) has over 100 universities on its land, and the Second Life Teen Grid has over 90 educational projects (Linden & Linden, 2008). Figure 44 p88 provides some examples of the training and learning activities in Second Life, representing a mixture of educational institutions, corporations and government agencies.
The content of Second Life is entirely user created. The availability of content developers and potential students already experienced in using the environment is dependent on the take-up and expected future growth of the environment. Figure 43 shows the user base and economic statistics for the first quarter of 2008 as provided by Second Life’s proprietor Linden Lab (2008a). As of November 2008 Second Life had 16,318,063 registered users (1,344,215 of whom had logged on within the previous 60 days). A break-down of Second Life’s demographics as at November 2008 can be seen in Appendix I: Second Life Demographics.
[[image:Second_Life_User_and_Econ_Stats_Q12008_043.jpg]]
Figure 43. Second Life User & Economic Statistics for Q1 2008
[[image:Second_Life_Training_and_Learning_044.jpg]]
Figure 44. Second Life Training and Learning
'''Second Life Research'''
Educators are using Second Life for both formal and informal purposes. Some educational institutions have set up entire virtual campuses modelling their real world campus, while others are building purpose built virtual education structures. The relative youth of Second Life means that there is considerable variation in the maturity of educational efforts across the virtual world, and limited peer reviewed studies yet published. Many educators are still experimenting, while others, with the active support of their institutions, are actively using the environment for partial or entire subject delivery. Here we will look at some of the research undertaken in Second Life at the time of writing, most of which has been published since 2006; given the technological advances that have occurred in Second Life from 2007 onwards, we will concentrate specifically on the later research.
Martinez, Martinez, & Warkentin (2007) researched the delivery of a lecture to geographically distributed third year university students in Second Life. The lecture was delivered in a conventional lecture room setting using a traditional chalk-and-talk style with lecture slides and the chat channel for instruction; no voice was used.[21] According to the lecturer’s experience, text-only delivery doubled the time needed to deliver the content compared with a face to face lecture, an impression the students confirmed in their survey. In the survey some students admitted they felt distracted by the novelty of the environment and were overly concerned with ancillary aspects such as their avatar’s appearance. Others admitted to being distracted by concurrent activities external to the environment, such as multi-tasking with other programs (e.g. MSN messaging) on their PCs during the lecture. Others experienced technical difficulties and could not get back into the lecture after they were accidentally logged out. In spite of these shortcomings, when asked to rate the lecture experience on a scale of 1-10 the average student response was 8.5. The study noted that some of these distractions and difficulties could be put down to first-time user experience. The lecturer also felt that this lecture could easily have been pre-recorded and delivered online, and that active learning techniques could have improved its delivery in Second Life (Arreguin, 2007).
Joseph (2007) notes that a consequence of using Second Life (or virtual worlds in general) for teaching is that sessions generally take longer than traditional methods, but believes that this is not an issue per se, as time to complete the task should come second to the effectiveness of the experience. Joseph also believes (from experience) that the avatar projected on the screen, and the sense of presence experienced by the participants, is more effective for learning than a live video feed.
Kofi, Svihla, Gawel, and Bransford (2007) researched the potential of virtual worlds to provide efficiency and innovation for adaptive learning. In their study, students were presented with a maze to navigate that simulated the problem solving skills required for learning in a comparable real life scenario. Kofi et al. found that Second Life provided enough functionality and support for the learner to apply new concepts in order to solve presented problems, as long as they were provided with key indicators of possible outcomes. They also found that the use of 3D learning environments required the same amount of instruction as equivalent real world learning, and that simply building a model did not, of itself, provide sufficient information for the learner to learn in this instance; learners also needed to be continuously prompted and guided in order to reach the end learning objective.
In another example, Second Life was used to support the learning objectives of 13 third year college students, aged between 19 and 26, on a Digital Entertainment and Society course where the students were geographically distributed around the world (Gonzalez, 2007). Both lectures and assignment work were conducted within Second Life. The lectures consisted of a video presentation and an in-world field excursion. Assignment work required some in-world building and an exercise using Linden dollars, with a student presentation on completion. No students had used the environment before, but an acclimation exercise was sufficient to provide them with the skills required to undertake course work in Second Life. At the end of the course students were given a survey, with results presented below (Table 1).
{|
|Elements that Second Life Added:
|-
|
|Agree
|Disagree
|-
|Enjoyment
|100%
|0%
|-
|Technical difficulties
|100%
|0%
|-
|Interaction with tutor
|62%
|38%
|-
|Interaction with classmates
|62%
|38%
|}
Table 1. Survey Results for Digital Entertainment and Society Second Life Subject
The technical difficulties result was explained largely by network latency experienced by the students. Each student used their own computer with an average connection speed of 512 Kbps – not especially fast, nor ideal for use in the Second Life environment. No mention was made in the study as to whether the student computers met the Linden Lab system requirements (2008c). As Second Life is a streaming virtual world where content is downloaded on demand from Linden Lab’s servers in the USA to the local computer, connection speed can be an important factor in technical performance. Other major impacts from a technical perspective include the computer’s graphics card and the size of onboard RAM. The Second Life browser offers many settings for optimising performance on low-end machines, but if the minimum system requirements are not met the user’s experience of the virtual world will be reduced significantly, with dropouts, lag and poor graphics.
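The bandwidth constraint described above can be made concrete with some back-of-envelope arithmetic; the 30 MB asset payload below is a hypothetical figure chosen for illustration, not a number from the study:

```python
# Illustrative estimate of how long a payload of streamed virtual world
# assets takes to download over a slow link, such as the average 512 Kbps
# connection reported in the study. The 30 MB payload is hypothetical.

def download_seconds(payload_mb: float, link_kbps: float) -> float:
    """Seconds to transfer payload_mb megabytes over a link_kbps link."""
    bits = payload_mb * 8 * 1000 * 1000   # MB -> bits (decimal megabytes)
    return bits / (link_kbps * 1000)      # Kbps -> bits per second

# At 512 Kbps, 30 MB of textures and geometry takes roughly 8 minutes,
# which is consistent with the latency complaints reported by students.
print(round(download_seconds(30, 512)))  # -> 469 (seconds)
```

The same payload on a 10 Mbps link would stream in under half a minute, which illustrates why connection speed dominates the first-visit experience of an on-demand streaming world.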
==2.10 Learning & Instructional Design Theory==
===2.10.1 Introduction===
Learning in any world (real or virtual) requires well thought out instructional design. Learning is a process of the mind regardless of whether your body is present in the virtual world or real world. Instructional components for learning regardless of medium include (DONCIO et al., 2008):
*Clear, concise, and appropriately structured content
*Activities that draw relationships between concepts, challenge learners' thinking and understanding, and reinforce information
*Evaluative measures that determine if knowledge assimilation and retention have occurred
In this research the focus was on the use of new technology in education as opposed to education applied to new technology; therefore this section only provides an overview of applicable theory required to assist in the instructional design, delivery and assessment of the subject material presented to the research participants in this study. Gagne’s Nine Events of Instruction and Bloom’s Taxonomy of the Cognitive Domain were selected to assist in this task.
===2.10.2 Behaviourism and Cognitivism===
There are two main traditional schools of thought in learning theory. These are Behaviourism and Cognitivism (DONCIO et al., 2008; Lewis, 2001).
*Behaviourist (Objectivist): views the mind as a ‘black box’; no consideration is given to personal or past experience. The mind starts with a clean slate, where a stimulus produces a response. Only when a change in behaviour is observed has learning occurred. Learning is discrete, measurable and quantifiable.
*Cognitivist (Constructivist): views the mind as a continuously evolving organism. Knowledge is constructed from past material and personal experience. Learning is unique to the individual, who relates new information to previously learnt knowledge.
The University of Washington, Seattle (2008) compares the two approaches and provides a discussion of each in terms of philosophy (Table 2, p93), learning outcomes, instructor role, student role, activities and assessment. The philosophies of these approaches are opposed and therefore produce different methods of instruction (Lewis, 2001; Nash, 2007).
Behaviourism was the first to be defined in learning theory while cognitivism developed later as a response to perceived limitations of behaviourism in understanding and adapting to new learning concepts (Lewis, 2001; Mergel, 1998).
While some constructivists argue the merits of constructivism as a distinct theory, viewing knowledge as something constructed by a learner through the process of learning, other writers view constructivist ideas as an evolution of the fundamental cognitivist school. This position is illustrated in Table 2, where the behaviourist and constructivist-enhanced-cognitivist philosophies are compared using a consistent comparative organisation of views (see Dabbagh, 2006; Mergel, 1998).
Constructivists draw a distinction between cognitive constructivism and social constructivism, in which the former emphasises exploration and discovery on the part of each learner, while the latter emphasises the collaborative efforts of groups of learners as sources of learning; for our purposes, however, it is sufficient to distinguish the behaviourist and cognitive approaches. Over the years many practical teaching methods have evolved with concepts that encompass both approaches.
[[image:TABLE_Instructional_Design_Behaviorism_Cognitivism_045.jpg]]
Table 2. Instructional Design: Comparative Summary Behaviorism and Cognitivism
(University of Washington, 2008)
===2.10.3 Gagne’s Nine Events of Instruction===
Gagne’s theory of instruction can be divided into three areas (Corry, 1996): taxonomy of learning outcomes, conditions of learning, and levels of instruction. There are considerable similarities between Gagne’s ‘taxonomy of learning outcomes’ and Bloom’s ‘taxonomy of the cognitive domain’, so a discussion of these is provided in the next section of this thesis.
Gagne breaks ‘conditions of learning’ down into internal and external learning conditions. Internal learning conditions concern the previously learned capabilities of the learner, while external learning conditions are the instruction or stimuli presented to the learner. While Gagne’s theory takes an essentially cognitivist approach, it recognises both behaviourist and cognitivist influences on instructional learning. For our purposes, it is the ‘levels of instruction’ outlined by Gagne that are of particular interest, and these are explored in this section.
Gagne (1985) presents a systematic approach to instructional design termed the ‘nine levels of instruction’ as presented below in Figure 45 (Clarke, 2000)[22]. These nine levels have been specifically designed for the teaching of intellectual skills.
[[image:GAGNE_Nine_Steps_To_Instruction_046.gif]]
Figure 45. Robert Gagne's Nine Steps of Instruction (Clarke, 2000)
The nine instructional events with their corresponding cognitive processes can be described as follows (Clarke, 2000; Kearsley, 2008):
#'''Gaining Attention (Reception)''': Grab the attention of the participant by presenting a teaser in order to get the participant interested and motivate them to learn more about the topic that will be presented. This could be done using methods such as a movie, phrase, storytelling or a demonstration.
#'''Informing Learners of the Objective (Expectancy)''': Provide the participant with the objectives in order to assist them in organising their thoughts ready to receive the new information that will be presented.
#'''Stimulating Recall of Prior Learning (Retrieval)''': Provide the participant with any background that may assist them in building upon the new knowledge that they are about to receive. This helps to establish a framework in their mind based upon previous knowledge.
#'''Presenting the Stimulus (Selective Perception)''': This is where the new learning begins. Information should be chunked and organised meaningfully in order to avoid memory overload and assist in the learning of new knowledge: break it into a sequence of learning events and into constituent parts with a structure and purpose that spans different areas of comprehension. The revised Bloom’s taxonomy (discussed in the next section) can be used to assist in forming the presented information.
#'''Providing Learning Guidance (Semantic Encoding)''': Assist the participant to obtain a deeper level of understanding of the new knowledge so that information can be encoded into their long term memory. During instruction try to provide examples, non-examples, analogies, graphical representations etc. to assist the semantic encoding process.
#'''Eliciting Performance (Responding)''': Let the learner do something with the new knowledge, or test their new knowledge, to confirm they have a correct understanding of the information.
#'''Providing Feedback (Reinforcement)''': Analyse the learner’s understanding of the subject matter presented and provide feedback to correct any misunderstood knowledge. Provide immediate feedback and reinforcement of the new knowledge (e.g. questions and answers).
#'''Assessing Performance (Retrieval)''': Test that the new knowledge is understood and the learning objectives have been met. This could be in the form of a test or a demonstration by the learner to assess if they have mastered the information.
#'''Enhancing Retention and Transfer (Generalisation)''': Generalise the information so that knowledge transfer can occur; inform learners of similar problems or situations so that the acquired knowledge can be put into a new context.
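The nine events above form, in effect, an ordered checklist, which suggests a simple sketch for auditing a lesson plan against them. The helper function and the example plan are illustrative, not from Gagne or Clarke:

```python
# A minimal sketch (hypothetical, not from Gagne) encoding the nine
# instructional events as an ordered checklist a lesson planner can
# audit a draft plan against.

GAGNE_EVENTS = [
    ("Gaining attention", "reception"),
    ("Informing learners of the objective", "expectancy"),
    ("Stimulating recall of prior learning", "retrieval"),
    ("Presenting the stimulus", "selective perception"),
    ("Providing learning guidance", "semantic encoding"),
    ("Eliciting performance", "responding"),
    ("Providing feedback", "reinforcement"),
    ("Assessing performance", "retrieval"),
    ("Enhancing retention and transfer", "generalisation"),
]

def missing_events(planned):
    """Return the events a lesson plan has not yet addressed, in order."""
    return [name for name, _ in GAGNE_EVENTS if name not in planned]

plan = {"Gaining attention", "Presenting the stimulus", "Providing feedback"}
print(missing_events(plan)[0])  # -> "Informing learners of the objective"
```

Because the events are stored in order, the audit reports gaps in the same sequence Gagne prescribes for instruction.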
===2.10.4 Bloom’s Taxonomy===
The Taxonomy of Educational Objectives, also known as Bloom’s Taxonomy, is widely used[23] to assist in the preparation of learning objectives and the assessment of learning outcomes. The learning outcomes of a student are the results of their learning experience of a course, and should be a direct consequence of the course objectives (Monash University, 2008). Hence the application of Bloom’s taxonomy of educational objectives in forming course objectives provides a measure by which to assess students’ learning outcomes.
The original work of Bloom’s Taxonomy was developed by an American committee of educational psychologists chaired by Benjamin Bloom that presented over a period of time three domains: cognitive (knowledge) (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956), affective (attitudes) (Krathwohl, Bloom, & Masia, 1964), and psychomotor (motor skills) (Dave, 1967, 1970; Harrow, 1972; Simpson, 1972). In forming educational course objectives Bloom’s cognitive domain is applied to assess the knowledge and intellectual component of a curriculum.
Some 45 years later, Bloom’s cognitive domain was revised (Anderson et al., 2001; D R Krathwohl, 2002) by a committee of eight, two of whom had worked on the original published work (committee member Krathwohl and editor Anderson). The revision was made as a result of many years of application and research, and has since been accepted by many educators as a replacement for Bloom’s original work. The changes made are as follows (Figure 46) (Anderson Research Group, n.d.; D R Krathwohl, 2002):
*The names of six major categories were changed from noun to verb forms.
*Comprehension and synthesis were retitled to understand and create respectively, in order to better reflect the nature of the thinking defined in each category.
*Create was moved to the highest, that is, most complex, category.
*The revised Taxonomy is not a cumulative hierarchy.
*A taxon of remember was devised to replace that of Knowledge, and
*A two dimensional Cognitive Taxonomy Table was formed by sub dividing the original Knowledge taxon.
[[image:BLOOM_Changes_in_Cognitive_Domain_047.jpg]]
Figure 46. Changes in Bloom’s Cognitive Domain
====2.10.4.1 Revised Bloom’s Taxonomy of the Cognitive Domain====
A substantive difference is in the handling of ‘Knowledge’. As shown in Table 3, the revised Bloom’s cognitive domain was extended to include a dimension of knowledge, so that it now consists of a two dimensional table with the Knowledge Dimension and the Cognitive Process Dimension. This table provides the instructor with a tool for classifying learning objectives, where learning objectives are specific statements of the discrete learning outcomes or intended results hoped to be achieved by the end of instruction. The instructor defines the learning objectives and classifies each into the appropriate cell of the 2D matrix of cognitive and knowledge dimensions. This assists in instructional design and assessment, and provides a tool for balancing the learning objectives across methods of instructional design.
[[image:BLOOM_TABLE_Revised_Taxonomy_048.jpg]]
Table 3. Revised Bloom’s Taxonomy Table
(Anderson et al., 2001, p. 28)
'''The Cognitive Process Dimension'''
The Cognitive Process Dimension comprises the column values of Table 3 above. This dimension describes the level of learning and comprehension required to complete a task, where the levels differ in complexity on a scale from 1-6. The cognitive dimensions are defined as 1. Remembering, 2. Understanding, 3. Applying, 4. Analysing, 5. Evaluating and 6. Creating, each of which contains further sub-processes, with 19 specific cognitive processes in total. Table 4 provides an overview of each cognitive process with its defining verbs. Verbs are used to classify an objective. For example, the objective ‘to recall the 6 states of Australia’ would be classified under Remembering. Recall in this instance is the verb that classifies the learning objective into level ‘1. Remember’ of the cognitive dimension.
[[image:Cognitive_Process_Dimension_Processes_049.jpg]]
Table 4. The Six Categories of The Cognitive Process Dimension And Related Cognitive Processes (Anderson et al., 2001, p. 31)
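The verb-based classification just described can be sketched as a simple lookup. The verb sets below are a small illustrative subset, not Anderson et al.'s full list of 19 cognitive processes:

```python
# Sketch of verb-based classification into the cognitive process
# dimension. The verb sets are an illustrative subset only, not the
# complete taxonomy from Anderson et al. (2001).

COGNITIVE_LEVELS = {
    1: ("Remember",   {"recall", "recognise", "list"}),
    2: ("Understand", {"summarise", "classify", "explain"}),
    3: ("Apply",      {"execute", "implement", "use"}),
    4: ("Analyse",    {"differentiate", "organise", "attribute"}),
    5: ("Evaluate",   {"check", "critique", "judge"}),
    6: ("Create",     {"generate", "plan", "produce"}),
}

def classify(objective: str):
    """Return the cognitive level whose verb appears in the objective."""
    words = set(objective.lower().split())
    for level, (name, verbs) in COGNITIVE_LEVELS.items():
        if verbs & words:
            return f"{level}. {name}"
    return None  # objective uses no verb from the subset above

print(classify("to recall the 6 states of Australia"))  # -> 1. Remember
```

A real classifier would need stemming and instructor judgement (the same verb can signal different processes in context); the lookup only illustrates how the defining verbs anchor an objective to a level.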
Bloom’s original cognitive taxonomy was based solely upon the values contained in the cognitive dimension (with the exception of the differences previously discussed). Bloom believed that the cognitive process was a cumulative learning process leading to a learning outcome. For example, under the old taxonomy of the cognitive domain, in order to ‘analyse’ subject matter the student would first need to have mastered knowledge/remember, comprehension/understand and application/apply, whereas the revised taxonomy does not assume this cumulative hierarchy. The early Bloom’s cognitive domain took a behaviourist approach to instruction, whereas the revised cognitive domain holds that learning can take place at any level without mastering previous levels. This is a fundamental shift in the philosophical grounding of Bloom’s taxonomy of the cognitive domain, moving it away from the behaviourist approach to learning.
'''The Knowledge Dimension'''
The Knowledge Dimension is an additional dimension added to the taxonomy by the subdivision (and modification) of Bloom’s original Knowledge category, and appears as the row values in Table 3 above. The knowledge dimension defines how knowledge is constructed, which can be Factual, Conceptual, Procedural or Metacognitive. Table 5 provides an overview of the knowledge dimension types and their meanings.
The knowledge dimension separates the noun (or subject matter) from the stated learning objective. For example, continuing with the objective discussed above, in ‘to recall '''the 6 states of Australia'''’ the bolded words make up the noun construct. This noun is factual because the learner either knows the states or they don’t; to know is the basic element required in order to solve the problem.
[[image:Major_Types_and_Subtypes_Knowledge_Dimension_050.jpg]]
Table 5. The Major Types And Subtypes Of Knowledge Dimension (Anderson et al., 2001, p. 31)
The knowledge dimension has been added because it provides further insight into the type of knowledge a student is required to master. In the original work this was implicit, as Knowledge was the first level in a cumulative hierarchy, but the revised knowledge dimension provides the instructor with greater understanding by defining knowledge as a separate dimension. For example, for the objective ‘to recall the 6 states of Australia’ the student needs to Remember Factual Knowledge.
Like the cognitive dimension, the knowledge dimension is not a cumulative hierarchy; learning can start anywhere within it.
'''Using the Revised Bloom’s Cognitive Domain to Assist in Instructional Design'''
To assist in formulating instructional design, Anderson et al. (2001) provide in their book, for the cognitive dimension, sample objectives, corresponding assessments and assessment formats (chapter 5), and for the knowledge dimension, specific details, elements, generalisations, structures and models etc. (chapter 4). This assists in the formulation of specific tasks and in defining the level of knowledge required of the student. It also helps ensure that objectives, and the testing of those objectives, span the required range of cognitive and/or knowledge categories, and that the student is fairly assessed in areas directly related to the objectives.
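This balancing exercise amounts to tallying classified objectives into the 24 cells of the taxonomy table and looking for gaps. A minimal sketch follows; the objective classifications fed to it are hypothetical examples, and real classification requires instructor judgement:

```python
# Minimal sketch of balancing learning objectives across the 2D
# taxonomy table: tally classified objectives per cell and report
# which cells are empty. The example classifications are hypothetical.
from collections import Counter

COGNITIVE = ("Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create")
KNOWLEDGE = ("Factual", "Conceptual", "Procedural", "Metacognitive")

def coverage(objectives):
    """objectives: iterable of (cognitive, knowledge) classifications."""
    tally = Counter(objectives)
    empty = [(c, k) for c in COGNITIVE for k in KNOWLEDGE if tally[(c, k)] == 0]
    return tally, empty

tally, empty = coverage([
    ("Remember", "Factual"),      # e.g. 'recall the 6 states of Australia'
    ("Remember", "Factual"),
    ("Understand", "Conceptual"),
])
print(tally[("Remember", "Factual")], len(empty))  # -> 2 22
```

A course whose objectives cluster in a few cells (here, 22 of 24 cells are empty) is weighted toward low-level recall, which is exactly the imbalance the taxonomy table is designed to expose.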
====2.10.4.2 Bloom’s Taxonomy of the Cognitive Domain Applied to a Digital Environment====
'''Bloom’s Digital Taxonomy of the Cognitive Domain'''
Churches (2008) has extended the (revised) Bloom’s cognitive domain for digital learning by taking the cognitive process dimension and adding verbs for emerging technology. As can be seen below (Figure 47), the words highlighted in blue are the digital emerging technology verbs, categorised using the (revised) Bloom’s cognitive levels as the basis for interpreting complexity. For example, bookmarking (a remembering process) is simpler than programming (a creating process).
[[image:BLOOM_Revised_As_Digital_Taxonomy_051.jpg]]
Figure 47. Bloom's Digital Taxonomy
Churches further added within his classification system a rubric (scoring criteria) for these technologies, similar to the sub-classification system used in Bloom’s cognitive domain. For example, Table 6 displays the rubric for bookmarking, broken down from simplest to most complex.
[[image:BLOOM_Bookmarking_Rubric_For_Digital_Taxonomy_052.jpg]]
Table 6. Bookmarking Rubric for Bloom’s Digital Taxonomy
'''Bloom’s Taxonomy of the Cognitive Domain applied to Games'''
Wang & Tzeng (2007) proposed using the (revised) Bloom’s taxonomy of the cognitive domain as a method for understanding the application of knowledge in digital games. They believed that players learn in various ways within computer games, and recognised how little work (if any) had been done in analysing such e-learning platforms in a structured taxonomic manner, or in structuring the implementation and understanding of the cognitive processes involved. They proposed Bloom’s taxonomy of the cognitive domain as a method by which to assess cognitive processes in a computer game.
[[image:BLOOM_Taxonomy_For_Games_053.jpg]]
Figure 48. Bloom’s Taxonomy for Games
The research used a game called Food Force, a problem solving and mission-oriented game. Figure 48 summarises the conclusions of their research. As can be seen in Figure 48, players exhibited both personal and social feedback across Bloom’s cognitive levels. They found that players experienced individual cognitive processes across all categories of Bloom’s cognitive model, and displayed social interaction for the higher level categories of Analyse, Evaluate and Create.
==2.11 Summary==
The acceptance of the latest crop of virtual worlds such as World of Warcraft, Second Life, Entropia Universe, There, Eve, America’s Army and others by the internet-using public as an integral part of their lifestyle is possibly the most significant paradigm shift to occur in the last 10 years. Statistics on user volumes and retention rates show consumption numbers in the tens of millions of users, spread evenly across ages from youth to middle age, with an approximately even gender balance (at least in the social worlds) (KZERO Research, 2007; Woodcock, 2008; Yee, 2006). The growth rates of these worlds collectively have been, and are projected (by industry analysts) to continue to be, rising dramatically for the foreseeable future.
With the current convergence of disparate technologies represented by these systems, the general public now have affordable single platform multi-media collaborative environments with sufficient realism to create virtual immersive spaces where presence is achieved at a level sufficient for them to lead virtual existences and establish social networks that rival their real world existence.
The linking of these spaces with the affordable (often free) tools that enable the public to create new 3D spaces and content for them has, over the last eight years, produced a world-wide content developer base with substantial skills and a highly competitive market for purchasers of those skills, at often very low rates.
With the combined market pressures of minimising education delivery costs, improving education outcomes, and reaching as wide a market as possible, it is understandable that educators have shown a sustained interest over many years in the possibilities of virtual environments for education delivery. So with the advent over the last few years of the latest generation of creativity-focused social worlds like Second Life, it is not surprising that uptake by universities and educators (numbering in the hundreds of institutions) has been as substantial as it is.
A brief retrospective of the work in simulators, virtual reality and 3D games, shows that the potential of these environments extends beyond the virtual ‘chalk-and-talk’ to enabling education delivery strategies for even campus based students that cannot economically be delivered using reality bound means.
With traditional real world learning environments there is an extensive body of tested knowledge that can provide clear guidance as to workable frameworks for the design of course work. The extent to which and how these methods can or should be applied to the virtual world learning space remains an open question.
</div>
[[Category:Featured Article]]
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
d875646505c796db4cb09ab9a1b432278c0a2c28
Real Learning In Virtual Worlds
0
279
306
305
2018-10-29T11:40:33Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Real Learning in Virtual Worlds: An assessment of two approaches to content delivery with respect to learning outcomes=
Author: Dianne Bishop
A Minor Thesis
Submitted in partial fulfilment of the requirements for the
Degree of Master of Information Technology (Minor Thesis)
Faculty of IT
Monash University
December 2008
==Abstract==
This thesis comparatively explores two methods of delivering lecture based teaching material in the virtual world Second Life by comparing and contrasting tested outcomes of Bloom’s ‘remember’ and ‘understand’ cognitive processes, and analysing qualitative feedback on participants’ experiences.
The study provides an extensive literature review covering the history of research and invention in virtual worlds, commencing from their gestation in fictional writings to their realisation in the current genre of massively connected online virtual worlds; it then summarises the specific research into the application of virtual worlds in education, and outlines alternative models for measuring learning outcomes.
From this basis the thesis documents an experimental framework, a virtual world teaching laboratory and a learning management system built for the purpose of delivering lecture material in a controlled, experimental manner, and an experiment conducted to compare the outcomes of two alternative delivery systems. Using otherwise identical content, a “classic” 2D lecture and the same lecture augmented by 3D models and simulations were delivered to randomly selected participants, and their achievement scores for Bloom’s cognitive processes ‘remember’ and ‘understand’ were graded and analysed.
The research found that there is no significant difference between either ‘remember’ or ‘understand’ cognitive processes grades for the 2D and 3D groups, although there was a non-statistically significant advantage in remembering demonstrated by the 3D group at the extreme lower and upper deciles.
The thesis concludes by identifying a number of opportunities for further research.
==Table of Contents==
*[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|CHAPTER 1: Overview]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.1 Background to the Study]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.2 Research Questions]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.3 Overview of Study]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.4 Significance and Limitations]]
**[[Real Learning in Virtual Worlds - CHAPTER 1: Overview|1.5 Structure of Thesis]]
*[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|CHAPTER 2: Literature Review]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.1 Introduction]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.2 Virtual Worlds]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.2.1 What is a Virtual World?]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.2.2 Recognising a Virtual World by its Features]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.3 The Avatar–The Nature of a Participant’s Projection into a Virtual World]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4 A Taxonomy of Virtual Worlds]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4.2 A Taxon for Virtual Worlds]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.4.3 Applied Taxonomies]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.5 What’s in a Name? – Virtual Worlds versus Virtual Reality]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.6 Dimensioning Virtual Worlds]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.6.1 The Degree of Virtuality]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.6.2 The Degree of Immersion and Presence]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7 Influences on Virtual Worlds from Art and Literature]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7.2 Virtual Worlds of the Arts]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.7.3 Virtual Worlds of Fiction and Fantasy]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8 The History of Computational Virtual Worlds]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.2 Hardware Based User Interfaces and Virtual Reality Systems]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.3 Early Graphical Computer Games]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.4 Text Based Virtual Worlds]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.5 Graphical Virtual Worlds]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.8.6 Simulation and Learning Systems]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.9 Virtual Worlds for Education]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.9.1 Architecture Considerations]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.9.2 Education Applications in Virtual Worlds]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10 Learning & Instructional Design Theory]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.2 Behaviourism and Cognitivism]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.3 Gagne’s Nine Events of Instruction]]
***[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.10.4 Bloom’s Taxonomy]]
**[[Real Learning in Virtual Worlds - CHAPTER 2: Literature Review|2.11 Summary]]
*[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|CHAPTER 3: Research Design]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.1 Introduction]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.2 Problem Statement and Research Hypothesis]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.3 Research Rationale]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.4 Research Method]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.4.1 Theoretical Assumptions]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.4.2 Research Study]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.5 Research Population]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.6 The Virtual Learning Environment]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.7 Learning Task Design]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.7.1 Subject Matter]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.7.2 Instruction Delivery]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8 Instrumentation]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8.1 Pre and Post Quiz]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8.2 Survey: Learning Experience]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.8.3 Instrument Reliability]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9 Analysis Method]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.2 Data Processing]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.3 Software]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.4 Quantitative Analysis Methods]]
***[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.9.5 Qualitative Analysis Methods]]
**[[Real Learning in Virtual Worlds - CHAPTER 3: Research Design|3.10 Summary]]
*[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|CHAPTER 4: Results.]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.1 Introduction]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2 Quantitative Analysis Results: Achievement Scores]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.1 Overview of Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.2 Pre-Quiz Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.3 Post-Quiz Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.4 Hypotheses Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.2.5 Survey Results: Likert Scales]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3 Qualitative Analysis Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3.1 Introduction]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3.2 Analysis Approach]]
***[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.3.3 Themes of the Open Survey Questions]]
**[[Real Learning in Virtual Worlds - CHAPTER 4: Results.|4.4 Summary]]
*[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|CHAPTER 5: Discussion & Conclusion]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.1 Introduction]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2 Quantitative Analysis]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.1 The Results of the Hypothesis]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.2 The Results of the Pre-Quiz]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.3 The Results of the Post-Quiz]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.2.4 Likert Scale Analysis]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.3 Qualitative Analysis]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.3.1 Thematic Analysis Results]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.3.2 Qualitative Analysis of Thematic Results]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.4 Discussion of Results]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.5 Conclusion]]
**[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6 Opportunities for Further Research]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.1 Improving Instrument Reliability]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.2 Course versus Lecture]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.3 Introducing a Real and Robot Presenter to the Experience]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.4 Testing Other Bloom’s Cognitive Processes]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.5 Outcome Measurement Over Time]]
***[[Real Learning in Virtual Worlds - CHAPTER 5: Discussion & Conclusion|5.6.6 Comparison to Real-World Training]]
*[[VirtualWorldLearningReferences|References]]
*[[Real Learining in Virtual World - Selected Appendices|Selected Appendices]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix A: Terminology]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix B: MMOG Analysis]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix I: Second Life Demographics]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix J: Pre-Quiz Score Results]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix K: Post-Quiz Score Results]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix L: Instrument Reliability Results]]
**[[Real Learining in Virtual World - Selected Appendices|Appendix M: Qualitative Analysis: A Sample of Participants' Comments]]
==Full Appendices==
The full appendices to the original master's thesis on which the Real Learning in Virtual Worlds articles are based include items such as the pre- and post-quizzes, reproductions of building signage, and graphics-heavy pages. This material is best examined in downloadable form. The full appendices A through M are available here.
The content of the download is:
#'''Appendices.'''
*Appendix A: Terminology.
*Appendix B: MMOG Analysis.
*Appendix C: Welcome Room Information Content.
*Appendix D: Instruction: Slide Presentation.
*Appendix E: Pre-Presentation Slide Show.
*Appendix F: Pre-Quiz.
*Appendix G: Post Quiz.
*Appendix H: Survey.
*Appendix I: Second Life Demographics.
*Appendix J: Pre-Quiz Score Results.
*Appendix K: Post-Quiz Score Results.
*Appendix L: Instrument Reliability Results.
*Appendix M: Qualitative Analysis: A Sample of Participants' Comments.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
Real Learning in Virtual Worlds - CHAPTER 1: Overview
=CHAPTER 1: Overview=
==1.1 Background to the Study==
“Imagine waking up in the morning and teaching a class without changing out of your pyjamas. Imagine teleporting and flying to the library instead of inching along a highway. Imagine teaching a classroom of students who may have blue skin, purple wings, or the body of a raccoon. Peculiar as they sound, all of these things are now possible” (Harvard's Berkman Center for Internet and Society, 2007; Kribble, 2007).
With recent advances in public access virtual world technology it is now practical for educators to experiment economically with virtual world based learning methods. Technological limitations no longer impose a substantial compromise on the educator’s preferred teaching method.
Virtual worlds differ fundamentally from the online HTML/PDF-based learning environments progressively adopted for online education over the last 12 years, in the same way that a book differs from a lecture. At first glance, virtual worlds allow distance education delivery to move into a virtual representation of the real world lecture, and therefore offer the possibility of a ‘quasi-realistic’ distance education delivery model. At second glance, they tempt the educator willing to fund the cost with visions of highly interactive, immersive and engaging teaching vectors and learning management systems extending beyond the options available in real-world training.
Public access virtual worlds offer educators some potentially significant opportunities. These include:
*better approximation of the real-world education experience for distance learners, using low cost (often free) publicly available tools;
*reduction in the total cost of learning through the elimination of travel and reduced capital investment in bricks and mortar infrastructure;
*world-wide sharing of education content;
*standardisation of environment navigation and access methods;
*on demand, automated training session delivery with “24 hours by 365 days a year” availability;
*instant and automated assessment;
*instant planet-wide delivery at homogeneous cost; and
*the use of software simulations in place of physical models.
The virtual reality capabilities of virtual worlds offer immersive exposure to simulations of real-world experiences (like tsunamis or tornadoes) and events that could otherwise only be described and illustrated in conventional education. They enable exploration of events, places, micro and macro worlds, and theories that are either impossible to reproduce in physical environments or prohibitively costly to implement for individual courses. Lastly, role play based simulations enable the exploration of foreign locations, cultures and historic events in a manner not otherwise economically available in the physical realm.
As the use of public access online virtual worlds is relatively new to the mainstream education community, many research questions remain unanswered. Exploitation of this technology is still relatively immature compared with traditional online learning platforms, and much (although certainly not all) of the content has therefore been more experimental than practical for mainstream educational use until, perhaps, the last few years.
Until the last few years, virtual worlds have been either special purpose (like flight simulators) and exceptionally costly to construct, or insufficiently realistic, difficult to access, complex to use, constrained by limited communication vectors (such as missing audio or streaming media), or cumbersome and expensive to distribute and update. Only in the last few years have public access virtual world architectures and infrastructures reached a level of maturity at which convincing, workable and low cost solutions have substantially neutralised educators’ objections surrounding cost, realism, availability, standardisation, access, content distribution, and richness of sensory and communication vectors.
Possibly the greatest hurdle still faced by educators willing to experiment in these worlds is that much of the public continues to perceive public online virtual worlds as game technology. They are yet to be widely acknowledged by mainstream educators as a valid option for the delivery of higher educational course material (Jamison, 2007). Yet the potential for both quality gains and cost savings from the successful exploitation of virtual world training in higher education and industry is very high. There is, therefore, a great need for research in this area that provides insight into the affordances of this technology in education, and guidance on its cost-effective use.
While much work has been done to compare the relative “effectiveness” of virtual world versus real world training over many years, little or no structured research has been undertaken comparing the “effectiveness” of different approaches to education within a virtual world.
With a few notable exceptions, research has traditionally examined virtual world training in the context of social interaction or 3D object manipulation and simulation. As discussed in the literature review, this body of work has generally found virtual training to be as effective as or better than the real world equivalent (at least within the theoretical confines of the subject matter explored). Yet the realism available from the latest generation of virtual world technology now also provides the ability to simulate the real world teaching environment itself, not just the ability to build better simulations and 3D models of teachable content. The traditional teaching environment[1] can now be practically reproduced: class rooms or lecture theatres providing a central location for real students to learn in a virtual world. Provided that participants are not constrained by the technological requirements discussed in the literature review, the latest environments increasingly allow the reproduction of a real world learning environment, simulating almost verbatim the traditional real world “chalk and talk” lecture experience.
In designing topic delivery, educators in virtual worlds are now presented with a choice between simulating a real world lecture environment, delivering essentially the same presentation material they might deliver in a “chalk and talk” lecture in the real world, and delivering a purpose built simulation of the material itself – or some combination of these two extremes. The literature review references many studies whose focus has been on assessing the effectiveness of simulating the teaching material rather than simulating the real world teaching environment. In the former case the 3D software development effort is invested chiefly in the construction of the topic focussed material, while in the latter it is biased more heavily toward the teaching environment – such as “lecture rooms”.
Although costs are only superficially explored in this research, it is perhaps reasonable to propose that purpose built, topic centric simulators for each course or subject are necessarily a more expensive investment proposition than a single initial investment in lecture room simulators shared by many lecturers and across many topics. The closer the virtual world training delivery model comes to mirroring its real world equivalent, the more practical this latter option becomes and the closer the preparation costs match those of traditional real world learning methods, yet without the overhead of real world infrastructure and the physical transportation of students and teachers, greatly reducing the total cost of learning.
A casual survey by the researcher of the teaching infrastructure built by, or for, educators in at least one public access virtual world frequented by more than 200 educational institutions (SimTeach, 2008) reveals that the majority of teaching spaces have been built around exactly this traditional “chalk and talk” lecture model, with essentially conventional auditorium style lecture rooms. Prima facie this seems an under-utilisation of the environment. Surely, one might argue, if a 3D representation or simulation of an item can be built, the educator is almost duty-bound to exploit the capability. Of course, even in a virtual world with dedicated, fast-to-use 3D modelling and agent scripting tools, the construction of 3D objects and simulations requires considerably more investment than the simple 2D slide show with audio voice-over that constitutes the body of a “chalk and talk” lecture.
A central question therefore arises: on a platform capable of delivering 3D models and simulations, is its mere use as a virtual “chalk and talk” classroom built around 2D lecture slides a reasonable and acceptable use of the technology? This is the question this research sets out to explore.
==1.2 Research Questions==
This study assessed learning outcomes for two groups in the widely adopted public access virtual world of Second Life, in order to answer the research question: how effective is it to learn in a virtual world using a traditional 2D slide show method compared to a 3D interactive simulation? One group experienced a lecture on the topic ‘The Physics of Bridges’ as a 2D slide show presentation, and the other group experienced the same lecture as a 3D augmented presentation of the content contained in the slide show.
To carry out this study the following research hypothesis was formed:
Learning outcomes are not independent of the delivery method in a virtual world, in that varying the delivery method between a 2D and a 3D presentation results in a significant difference in participants’ post-quiz achievement scores in relation to Bloom’s cognitive processes of ‘remember’ and ‘understand’ applied to factual knowledge.
Second Life was chosen as the experimental platform because of its low cost of access (free), wide platform availability (PC/Linux/Mac), wide adoption (more than 16 million registered users (Linden Lab, 2008a)), large educator community (more than 200 educational institutions (SimTeach, 2008)), the maturity and capability of its tool set (3D, streaming and interactive audio, streaming media, web interfacing, HTML content support, etc.), instant content publication, and its environmental realism (real time content streaming, spatial audio, environmental and spatial lighting, 3D perspective, layering, animation, concurrent multitasking agents, realistic photo finished avatar mesh, etc.).
==1.3 Overview of Study==
This research study was conducted in the online virtual world of Second Life. Using an experimental design approach, a virtual learning campus was constructed to deliver a lecture on the topic of ‘The Physics of Bridges’ by two different methods: as a 2D slide show with audio (reproducing a real world lecture on the topic in the virtual space), and as the same 2D content and audio augmented with immersive 3D models. Both delivery methods used identical content, slides, audio and time allocation. The independent variable was the presence or absence of 3D bridges and simulations matching the 2D slides and audio.
The 2D and 3D lecture environment simulated real-world lecture theatres with seating for up to 18 people and a large front facing projection screen. The 3D lecture room contained an additional space with lecture screens on three walls in which 3D objects appeared and with which users could interact or examine. The 2D and 3D theatres were otherwise identical.
Participants were recruited from the in world population of Second Life by advertisement and self selection (i.e. without profiling or filtering) and without replacement (avatars could not repeat any test). Prior to the lecture they received a pre-quiz containing 8 questions to establish a prior-knowledge benchmark. After completion of the pre-quiz participants were randomly allocated to either a 2D or 3D lecture theatre. On completion of their lecture they were given a 20 question post-quiz to test the learning outcomes of the lecture and a survey to gain an understanding of their learning experience within the virtual world environment. A total of 111 participants took part in this entire research process. The 2D and 3D participants numbered 55 and 56 participants respectively.
The learning materials along with the pre and post quiz questions were constructed using Bloom’s cognitive processes of ‘remember’ and ‘understand’. The quiz questions were divided evenly across these processes, which provided the basis for analysis.
The analysis method adopted in this research was triangulation using mixed methods. The pre and post quiz questions provided the basis for quantitative analysis. The post survey open questions provided the basis for qualitative analysis. Both of these analyses were then triangulated in order to compare the learning outcomes and experiences of the two groups that took part in this research.
==1.4 Significance and Limitations==
Mirroring real world education, there are at least three barriers an educator must overcome in order to deliver virtual world training:
Hosting infrastructure (the software environment the hosts the virtual world mechanics)
Training infrastructure (the creation of training spaces in the virtual world such as lecture theatres, or presentation screens)
Training content (the actual training material presented).
Today’s public online virtual worlds provide the hosting infrastructure while enabling low cost construction or acquisition of the training infrastructure to enable educators the opportunity to experiment with virtual learning delivery efficiently. Prior to this, educators were faced with extensive time, cost and complexity to build custom applications that could deliver the infrastructure before any virtual learning could take place. What once required extensive support from heads of department now requires very little effort on the educator’s behalf to enter into the world of virtual learning.
With a public online virtual world such as Second Life, the cost to develop, publish and deliver a 2D slide show based instructional learning program, such as the one produced in this experiment, is comparable to that of a real world ‘face to face’ lecture. Yet given the opportunity of this technology to go beyond real world instructional methods the temptation to exploit the full modelling and simulation capabilities of the environment is strong.
The research aimed to inform the question as to whether it is ‘worth’ the extra cost and time to build something more complicated than a 2D slide presentation. For this research the cost was measured in time (hours). While the cost of the 2D lecture was identical to preparing and delivering the same in the real-world, the 3D augmented version was approximately 3 times the cost. There is therefore a significant incentive to determine under experimental conditions the difference in learning outcomes and experience of the participants when presented with two different forms of delivery methods.
To preserve the integrity of the concept of separation of costs of content from costs of infrastructure (both hosting and training), a re-usable general purpose campus and lecture space was first constructed. With all content and tests independent of the campus and lecture infrastructure and interchangeable, the environment that can support both multiple simultaneous courses and rapid 5 to 15 minute course change in each lecture room. While this was not critical to the study, it was judged essential to the integrity of the assumptions on which the research was based: that virtual world content could be treated independently of the training infrastructure if a shared protocol was adopted. Secondly, the content preparation technology expectations were intentional constrained to a standard SL and MS Office equipped PC. PowerPoint and MS Audio Recorder (or other audio recorder) and the Second Life client are all that is required at the minimum to prepare a course for delivery for the purpose of the research.
Despite the recent growth in publicly accessible on-line virtual worlds, little published work has been conducted in this specific area of research. Furthermore, at the time of writing none, if any, had been performed using experimental methods. There is a growing body of high grade and scientific work in other aspects of educational and social dimensions of virtual worlds, and a respectable body of earlier work in purpose built and text based 3D virtual worlds, particularly in the comparative aspects of virtual and real world presence. Possibly, it is only with the realism attained in the latest generation of full content streaming, mixed graphical, audio and text worlds that this research has become practical. Thus the researcher’s motivation is to add to a body of knowledge, which is, as yet, predominantly (if not totally) lacking in scientific rigour via an experiment conducted under controlled conditions.
There have been multiple studies that compare traditional face to face learning methods with distance education learning outcomes. Thomas Russell’s book ‘No Significant Difference Phenomenon’ (2001) documents a review of literature of accumulative studies that goes back as far 1928 with the research question: ‘Does taking a course via distance education lower a student's chances for success as compared to the same student taking the same course in a face-to-face format?’ In most cases Russell’s findings resulted in ‘no significant difference’ in learning outcomes. The common identifier by Russell being that no student is better or worse off when comparing distance learning delivery methods with that of traditional face to face learning methods.
Similarly Richard Clark’s (1983) article published in the early 80s ‘Reconsidering Research on Learning from Media’ claimed that when comparing learning effects of different media platforms, there is no signification difference in outcome. In this article, Clark dismissed any studies that did find differences by providing that any differences that may have been found were not due to the medium platform but rather to the instructional design in the study.
Clark’s article sparked a heated response from Robert Kozma who had opposing views on the matter. This lead to a public debate between the two researchers (R. E. Clark, 1994; Kozma, 1994) in academic journals. This debate continues today amongst educational researchers and is commonly termed ‘The Media Debate’ (EduTech Wiki (2009).
This researcher does not enter into the media debate nor does she enter into the debate over whether real face-to-face learning ‘is better’ or ‘worse’ than virtual world learning. Rather this research has taken the position of ‘Now we are here [in the virtual world] what do we do?’
Consistent with this position the research decided to recruit only from the in world population. Therefore the constraint related to this is that the tested population is more likely to be pre-disposed to the virtual environment for a range of purposes one of which might include education. In the context of this experiment, however, the researcher is not convinced that such a condition would have had any impact on the outcomes. The elimination of the novice user dimension removed mechanical unfamiliarity as a significant factor from the outcomes which was appropriate for a study comparing virtual world delivery methods as opposed to a study comparing virtual and real world learning methods, and has been a factor that complicated the interpretation of some virtual-world research results in prior studies.
==1.5 Structure of Thesis==
For common terms used in this thesis see Appendix A: Terminology.
Chapter Two Literature Review; examines virtual world technology and a brief overview of educational learning theory.
The Virtual world section discusses alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education of virtual worlds. The review of virtual worlds has been taken from an historic perspective discussing key influences that have lead to today’s massively multi-user virtual worlds. Discussion of virtual worlds concludes with a review of educational uses, affordances and a review of current research into online virtual worlds.
Chapter two concludes with a review of learning theory and instructional methods that provides basis of the learning methods and materials used to conduct this experiment.
Chapter Three Research Design; examines the research design along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods adopted in this research study.
Chapter Four Results: presents the quantitative and the qualitative results of the virtual world learning experiment conducted in Second Life between the two groups of participants who undertook the differing lecture delivery methods for a lecture on ‘The Physics of Bridges’.
Chapter Five Discussion & Conclusion; provides an analysis of the results of the experiment along with discussion of these results and opportunities for further research.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
32c50ce77d96d6139033f5943bbad674980ab982
362
308
2018-10-29T12:02:34Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=CHAPTER 1: Overview=
==1.1 Background to the Study==
“Imagine waking up in the morning and teaching a class without changing out of your pyjamas. Imagine teleporting and flying to the library instead of inching along a highway. Imagine teaching a classroom of students who may have blue skin, purple wings, or the body of a raccoon. Peculiar as they sound, all of these things are now possible” (Harvard's Berkman Center for Internet and Society, 2007; Kribble, 2007).
With recent advances in public access virtual world technology it is now practical for educators to experiment economically with virtual world based learning methods. Technological limitations no longer impose a substantial compromise on the educator’s preferred teaching method.
Virtual worlds differ fundamentally from the online HTML/PDF based learning environments progressively adopted for online education over the last 12 years, in the same way that a book differs from a lecture. At first glance, virtual worlds allow distance education delivery to move into a virtual representation of the real world lecture, and therefore offer the possibility of a ‘quasi-realistic’ distance education delivery model. At second glance, they tempt the educator who is willing to fund the cost with visions of highly interactive, immersive and engaging teaching vectors and learning management systems extending beyond the options available in real-world training.
Public access virtual worlds offer educators some potentially significant opportunities in the education space. These include the opportunity to better approximate the real-world education experience for distance learners using low cost (often free) publicly available tools, and a reduction in the total cost of learning through the elimination of travel, reduced capital investment in bricks & mortar infrastructure, world-wide sharing of education content, standardisation of environment navigation and access methods, on demand/automated training session delivery, “24 hours by 365 days a year” availability, instant & automated assessment, instant planet-wide delivery (at homogeneous cost) and the use of software simulations in place of physical models. The virtual reality capabilities of virtual worlds offer immersive exposure to simulations of real-world experiences (like tsunamis or tornadoes) and events that otherwise could only be described and illustrated in conventional education. They enable exploration of events, places, micro and macro worlds, and theories that are either impossible to explore in physical environments, or prohibitively costly to implement for individual courses. Lastly, role play based simulations enable the exploration of foreign locations, cultures and historic events in a manner not otherwise economically available in the physical realm.
As the use of public access online virtual worlds is relatively new to the mainstream education community, many research questions remain unanswered. Exploitation of this technology is still relatively immature compared with traditional online learning platforms, and much (although certainly not all) of the content has therefore been more experimental than directly useful for mainstream education, until perhaps the last few years.
Until the last few years, virtual worlds have been either special purpose (like flight simulators) and exceptionally costly to construct, or not sufficiently realistic, difficult to access, complex to use, constrained by limited communication vectors (such as missing audio or streaming media), or cumbersome and expensive to distribute and update. It has only been in the last few years that public access virtual world architectures and infrastructures have reached a level of maturity where convincing workable and low cost solutions have substantially neutralised objections of educators surrounding cost, realism, availability, standardisation, access, content distribution, and richness of sensory and communication vectors.
Possibly the greatest hurdle still faced by educators who are willing to experiment in these worlds is that much of the public continues to perceive public online virtual worlds as game technology. They are yet to be widely acknowledged by mainstream educators as a valid option for the delivery of higher educational course material (Jamison, 2007). Yet the potential for both quality gains and cost savings from the successful exploitation of virtual world training in higher education and industry is very high. There is, therefore, a great need for research in this area that provides insight into the affordances of this technology in education, and guidance on its cost-effective use.
While much work has been done to compare the relative “effectiveness” of virtual world versus real world training over many years, little or no structured research has been undertaken comparing the “effectiveness” of different approaches to education within a virtual world.
With a few notable exceptions, research has traditionally examined virtual world training in the context of social interaction or 3D object manipulation and simulation. As discussed in the literature review, this body of work has generally found virtual training to be as effective as or better than the real world equivalent (at least within the theoretical confines of the subject matter explored). Yet the realism available from the latest generation of virtual world technology now provides the ability to simulate the real world teaching environment itself, not just the ability to build better simulations and 3D models of teachable content. The traditional teaching environment[1] can now be practically reproduced – class rooms or lecture theatres providing a central location for real students to learn in a virtual world. Provided that participants are not constrained by technological requirements as discussed in the literature review, the latest environments increasingly allow the reproduction of a real world learning environment, simulating almost verbatim the traditional real world “chalk and talk” lecture experience.
In designing the topic delivery, educators in virtual worlds are now presented with a choice between simulating a real world lecture environment, delivering essentially the same presentation material they might deliver in a “chalk and talk” lecture in the real world, and delivering a purpose built simulation of the material itself – or some combination of these two extremes. The literature review references many studies where the focus has been on assessing the effectiveness of simulation of the teaching material rather than simulation of the real world teaching environment. In the former case the 3D software development effort is concentrated in the construction of the topic focussed material, while in the latter the 3D software development is more heavily biased toward the teaching environment – such as “lecture rooms”.
Although costs are only superficially explored in this research, it is perhaps reasonable to propose that purpose built, topic centric simulators for each course or subject are necessarily a more expensive investment proposition than a single initial investment in lecture room simulators shared by many lecturers and across many topics. The closer the virtual world training delivery model comes to mirroring its real world equivalent, the more practical this latter option becomes and the closer the preparation cost matches that of traditional real world learning methods, yet without the overhead of real world infrastructure or physical student and teacher transportation, greatly reducing the total cost of learning.
A casual survey by the researcher of the teaching infrastructure built by, or for, educators in at least one of these public access virtual worlds, one frequented by more than 200 educational institutions (SimTeach, 2008), reveals that the majority of teaching spaces have been built around exactly this traditional “chalk and talk” lecture model, with essentially conventional auditorium style lecture rooms. Prima facie this seems an under-utilisation of the environment. Surely, one might argue, if a 3D representation or simulation of an item can be built, the educator is almost duty-bound to exploit the capability. Of course, even in a virtual world with dedicated fast-to-use 3D modelling and agent scripting tools, constructing 3D objects and simulations requires considerably more investment than the simple 2D slide show with audio voice-over that constitutes the body of a “chalk and talk” lecture.
A central question arises, therefore: on a platform capable of delivering 3D models and simulations, is the mere use of it as a virtual “chalk and talk” class room consisting of 2D lecture slides a reasonable and acceptable use of this technology? This is the central question that this research sets out to explore.
==1.2 Research Questions==
This study assessed the learning outcomes of two groups in the widely adopted public access virtual world of Second Life. One group experienced a lecture on the topic ‘The Physics of Bridges’ as a 2D slide show presentation, and the other group experienced the same lecture as a 3D augmented lecture of the content contained in the slide show presentation. The study was designed to answer the research question: how effective is it to learn in a virtual world using a traditional 2D slide show method compared to a 3D interactive simulation?
To carry out this study the following research hypothesis was formed:
Learning outcomes are not independent of the delivery methods in a virtual world, in that varying the delivery method between 2D and a 3D presentation results in a significant difference in the post-quiz achievement scores of a participant in relation to Bloom’s cognitive process of factual knowledge of ‘remember’ and ‘understand’.
Second Life was chosen as the experimental platform for the research question because of its low cost of access (free), wide platform availability (PC/Linux/Mac), its wide adoption (16 million plus registered users (Linden Lab, 2008a)), huge educator community (200 plus educational institutions (SimTeach, 2008)), the maturity and capability of its tool set (3D, streaming and interactive audio, streaming media, web interfacing, HTML content support, etc.), its negligible content publication delay (content appears instantly) and its environmental realism (real time content streaming, spatial audio, environmental and spatial lighting, 3D perspective, layering, animation, concurrent multitasking agents, realistic photo finished avatar mesh, etc.).
==1.3 Overview of Study==
This research study was conducted in the online virtual world of Second Life. Using an experimental design approach, a virtual learning campus was constructed to deliver two different forms of lecture on the topic of ‘The Physics of Bridges’. The topic was presented as a lecture with a 2D slide show and audio (reproducing a real world lecture on the topic in the virtual space), and as the same 2D content and audio augmented with immersive 3D models. Both delivery methods used identical content, slides, audio and time allocation. The independent variable was the presence or absence of 3D bridges and simulations matching the 2D slides and audio.
The 2D and 3D lecture environments simulated real-world lecture theatres with seating for up to 18 people and a large front facing projection screen. The 3D lecture room contained an additional space with lecture screens on three walls in which 3D objects appeared and which users could examine and interact with. The 2D and 3D theatres were otherwise identical.
Participants were recruited from the in world population of Second Life by advertisement and self selection (i.e. without profiling or filtering) and without replacement (avatars could not repeat any test). Prior to the lecture they received a pre-quiz containing 8 questions to establish a prior-knowledge benchmark. After completing the pre-quiz, participants were randomly allocated to either the 2D or the 3D lecture theatre. On completion of their lecture they were given a 20 question post-quiz to test the learning outcomes of the lecture, and a survey to gain an understanding of their learning experience within the virtual world environment. A total of 111 participants took part in the entire research process; the 2D and 3D groups numbered 55 and 56 participants respectively.
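As an illustration, the allocation procedure described above (random assignment without replacement) can be sketched as follows. This is a hypothetical reconstruction, not the tooling actually used in the study; the function name, seed and avatar identifiers are invented for the example.

```python
import random

def allocate_participants(avatar_ids, seed=None):
    """Randomly split unique avatars between the 2D and 3D lecture groups.

    Duplicate sign-ups are dropped first, mirroring the study's rule that
    an avatar could not repeat any test (sampling without replacement).
    """
    rng = random.Random(seed)
    unique_ids = list(dict.fromkeys(avatar_ids))  # de-duplicate, keep order
    rng.shuffle(unique_ids)
    half = len(unique_ids) // 2
    return {"2D": unique_ids[:half], "3D": unique_ids[half:]}

# With 111 participants this yields the study's 55 / 56 split.
groups = allocate_participants([f"avatar_{i}" for i in range(111)], seed=42)
```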
The learning materials along with the pre and post quiz questions were constructed using Bloom’s cognitive processes of ‘remember’ and ‘understand’. The quiz questions were divided evenly across these processes, which provided the basis for analysis.
The analysis method adopted in this research was triangulation using mixed methods. The pre and post quiz questions provided the basis for quantitative analysis. The post survey open questions provided the basis for qualitative analysis. Both of these analyses were then triangulated in order to compare the learning outcomes and experiences of the two groups that took part in this research.
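For the quantitative leg of the triangulation, comparing the two groups’ post-quiz scores amounts to a two-sample comparison of means. The sketch below computes Welch’s t statistic in plain Python; it is an illustration only — the scores are invented, and this section does not state which specific test the thesis applied.

```python
from statistics import mean, variance

def welch_t(scores_a, scores_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = (variance(scores_a) / len(scores_a)
          + variance(scores_b) / len(scores_b)) ** 0.5
    return (mean(scores_a) - mean(scores_b)) / se

# Invented post-quiz scores out of 20, purely for illustration:
group_2d = [12, 14, 11, 15, 13, 12, 14]
group_3d = [15, 16, 14, 17, 15, 16, 14]
t = welch_t(group_3d, group_2d)  # positive when the 3D group's mean is higher
```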
==1.4 Significance and Limitations==
Mirroring real world education, there are at least three barriers an educator must overcome in order to deliver virtual world training:
* Hosting infrastructure (the software environment that hosts the virtual world mechanics)
* Training infrastructure (the creation of training spaces in the virtual world such as lecture theatres, or presentation screens)
* Training content (the actual training material presented).
Today’s public online virtual worlds provide the hosting infrastructure while enabling low cost construction or acquisition of the training infrastructure, giving educators the opportunity to experiment with virtual learning delivery efficiently. Previously, educators faced extensive time, cost and complexity building custom applications to deliver that infrastructure before any virtual learning could take place. What once required extensive support from heads of department now requires very little effort on the educator’s behalf to enter into the world of virtual learning.
With a public online virtual world such as Second Life, the cost to develop, publish and deliver a 2D slide show based instructional learning program, such as the one produced in this experiment, is comparable to that of a real world ‘face to face’ lecture. Yet given the opportunity of this technology to go beyond real world instructional methods the temptation to exploit the full modelling and simulation capabilities of the environment is strong.
The research aimed to inform the question as to whether it is ‘worth’ the extra cost and time to build something more complicated than a 2D slide presentation. For this research the cost was measured in time (hours). While the cost of the 2D lecture was identical to preparing and delivering the same in the real-world, the 3D augmented version was approximately 3 times the cost. There is therefore a significant incentive to determine under experimental conditions the difference in learning outcomes and experience of the participants when presented with two different forms of delivery methods.
To preserve the integrity of the concept of separating the costs of content from the costs of infrastructure (both hosting and training), a re-usable general purpose campus and lecture space was first constructed. With all content and tests independent of the campus and lecture infrastructure, and interchangeable, the environment can support both multiple simultaneous courses and a rapid 5 to 15 minute course change in each lecture room. While this was not critical to the study, it was judged essential to the integrity of the assumptions on which the research was based: that virtual world content could be treated independently of the training infrastructure if a shared protocol was adopted. Secondly, the content preparation technology expectations were intentionally constrained to a standard SL and MS Office equipped PC. At a minimum, PowerPoint, MS Audio Recorder (or another audio recorder) and the Second Life client are all that is required to prepare a course for delivery for the purposes of the research.
Despite the recent growth in publicly accessible on-line virtual worlds, little published work has been conducted in this specific area of research. Furthermore, at the time of writing few, if any, studies had been performed using experimental methods. There is a growing body of high grade scientific work in other aspects of the educational and social dimensions of virtual worlds, and a respectable body of earlier work in purpose built and text based 3D virtual worlds, particularly in the comparative aspects of virtual and real world presence. Possibly, it is only with the realism attained in the latest generation of full content streaming, mixed graphical, audio and text worlds that this research has become practical. Thus the researcher’s motivation is to add, via an experiment conducted under controlled conditions, to a body of knowledge which is, as yet, predominantly (if not totally) lacking in scientific rigour.
There have been multiple studies comparing traditional face to face learning methods with distance education learning outcomes. Thomas Russell’s book ‘The No Significant Difference Phenomenon’ (2001) documents a review of accumulated studies going back as far as 1928 with the research question: ‘Does taking a course via distance education lower a student's chances for success as compared to the same student taking the same course in a face-to-face format?’ In most cases Russell found ‘no significant difference’ in learning outcomes; his common finding was that no student is better or worse off when comparing distance learning delivery methods with traditional face to face learning methods.
Similarly, Richard Clark’s (1983) article ‘Reconsidering Research on Learning from Media’ claimed that when comparing the learning effects of different media platforms, there is no significant difference in outcome. In this article, Clark dismissed studies that did find differences by arguing that any differences found were due not to the medium but rather to the instructional design used in the study.
Clark’s article sparked a heated response from Robert Kozma, who held opposing views on the matter. This led to a public debate between the two researchers in academic journals (R. E. Clark, 1994; Kozma, 1994). The debate continues today amongst educational researchers and is commonly termed ‘The Media Debate’ (EduTech Wiki, 2009).
This researcher does not enter into the media debate nor does she enter into the debate over whether real face-to-face learning ‘is better’ or ‘worse’ than virtual world learning. Rather this research has taken the position of ‘Now we are here [in the virtual world] what do we do?’
Consistent with this position, the researcher decided to recruit only from the in world population. A resulting constraint is that the tested population is more likely to be pre-disposed to the virtual environment for a range of purposes, one of which might include education. In the context of this experiment, however, the researcher is not convinced that such a condition would have had any impact on the outcomes. Eliminating the novice user dimension removed mechanical unfamiliarity as a significant factor from the outcomes, which was appropriate for a study comparing virtual world delivery methods (as opposed to a study comparing virtual and real world learning methods); mechanical unfamiliarity has complicated the interpretation of results in some prior virtual world studies.
==1.5 Structure of Thesis==
For common terms used in this thesis see Appendix A: Terminology.
Chapter Two, Literature Review, examines virtual world technology and provides a brief overview of educational learning theory.
The virtual world section discusses alternative definitions, characteristics, history, key architectural features, research outcomes and applications in education of virtual worlds. The review of virtual worlds is taken from an historic perspective, discussing key influences that have led to today’s massively multi-user virtual worlds. Discussion of virtual worlds concludes with a review of educational uses and affordances, and a review of current research into online virtual worlds.
Chapter Two concludes with a review of the learning theory and instructional methods that provide the basis for the learning methods and materials used to conduct this experiment.
Chapter Three, Research Design, examines the research design along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods adopted in this research study.
Chapter Four, Results, presents the quantitative and the qualitative results of the virtual world learning experiment conducted in Second Life between the two groups of participants who undertook the differing lecture delivery methods for a lecture on ‘The Physics of Bridges’.
Chapter Five, Discussion & Conclusion, provides an analysis of the results of the experiment along with discussion of these results and opportunities for further research.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
Real Learning in Virtual Worlds - CHAPTER 3: Research Design
<div class="nonumtoc">
=CHAPTER 3: Research Design=
==3.1 Introduction==
This study measured learning outcomes through the achievement scores of a multiple-choice post-quiz at two cognitive levels of Bloom’s Factual Knowledge, Remember and Understand, for a lecture delivered by two different methods in the virtual world of Second Life.
This chapter will discuss the research design of this study along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods used in producing the results discussed in the next chapter of this thesis.
==3.2 Problem Statement and Research Hypothesis==
The problem of this study was to determine the difference in learning outcomes between two randomly selected groups that attended the same lecture in a 3D virtual world using differing methods of delivery. Group 1 received a 2D slide show with pre-recorded audio in a lecture room setting (emulating a classical lecture in a 3D virtual world space) and group 2 received the same lecture augmented with appropriate 3D objects in an appropriately modified virtual 3D theatre space. Both were delivered in the virtual world of Second Life. The research investigated whether a difference in the delivery method (the addition of interactive “life size” 3D models), where instructional design, timing, content and environmental setup are otherwise the same, produces different learning outcomes with respect to the two identified cognitive levels.
To carry out this study the following hypothesis was formed:
Learning outcomes are not independent of the delivery methods in a virtual world, in that varying the delivery method between 2D and a 3D presentation results in a significant difference in the post-quiz achievement scores of a participant in relation to Bloom’s cognitive process of factual knowledge of ‘remember’ and ‘understand’.
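One common way to test a hypothesis of non-independence like this is a chi-square test of independence on a contingency table of delivery method against quiz outcome. The sketch below is purely illustrative, it assumes Python with SciPy, and the counts are invented for demonstration; they are not the study’s data, and the thesis does not state here which statistical test was actually applied:

```python
# Illustrative chi-square test of independence between delivery
# method (2D vs 3D) and a binary post-quiz outcome (pass vs fail).
# The counts below are hypothetical, not drawn from the study.
from scipy.stats import chi2_contingency

#            pass  fail
observed = [[34,   16],   # 2D slide-show group
            [41,    9]]   # 3D augmented group

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")

# Reject the null hypothesis of independence at the 5% level
# only if p < 0.05.
if p < 0.05:
    print("Delivery method and outcome are not independent")
else:
    print("No significant dependence detected")
```

With a 2×2 table the test has one degree of freedom; the same approach extends to score bands or separate Remember/Understand outcomes by widening the table.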
==3.3 Research Rationale==
In spite of the extensive current efforts of many institutions and educators to establish a virtual presence and adapt delivery of courses to this newly emerged generation of mass market virtual worlds, little (if any) formal and structured analysis has been undertaken by researchers to assess the comparative cognitive affordances of learning delivery methods in these spaces.
An anecdotal assessment of delivery methods in university campuses and training rooms within Second Life showed a preponderance of virtualised traditional lecture rooms – complete with front facing chairs, projection screens and even lecterns. The implication is that a significant volume of current delivery in Second Life (at least) is merely virtualising traditional real world delivery. The question arises, however: in a space capable of delivering highly interactive collaborative learning and 3D simulations, potentially for lower input costs than would be required in the real world, is the traditional chalk and talk approach the most appropriate?
There was a significant incentive to distinguish the effectiveness of these two learning approaches. The comparative cost, expertise and effort required to take a set of pre-prepared real world slides with audio and present them in a virtualised classroom is essentially the same as that required for real world delivery (at least in the Second Life virtual world). Even with Second Life’s simplified and efficient 3D building editor and scripting language, a 2D slide show with audio narration can be imported or streamed into Second Life and presented for a fraction of the preparation time, sophistication of learning materials and skill set required to build and use an interactive 3D simulation. Distinguishing between these two learning approaches would help educators determine whether the extra cost (of construction) and time (in design and preparation) involved in developing 3D instructional learning materials is worth the effort because it produces better learning outcomes.
==3.4 Research Method==
===3.4.1 Theoretical Assumptions ===
Previous research into education within virtual worlds can be divided into two main areas: research that assesses the affordances of the environment as an educational tool (Dickey, 2003; Gonzalez, 2007; Martinez et al., 2007; Youngblut, 1998), and research that compares virtual world learning outcomes to those of real world learning methods (Kurt, Mike, Jamillah, & Thomas, 2004; Mania & Chalmers, 2001; Youngblut, 1998).
The former usually takes an interpretive research approach; the latter a positivist research approach. From a purist’s standpoint, these two approaches are at opposite ends of the scale in their theoretical assumptions. This, in turn, affects how the researcher approaches, conducts and analyses their research data.
In an interpretive research approach the researcher adopts an investigative approach to analyse and ‘understand’ the conceptual meaning of the social construct. This approach to research is one of total immersion, experiencing the research from an insider’s view, where the researcher plays a social actor within the social construct (Klein & Myers, 1999; A. Lee, 1991; Orlikowski & Baroudi, 1991).
A positivist analyst takes a very different approach from that of an interpretive analyst. Positivist research follows principles such as (A. Lee, 1991; Orlikowski & Baroudi, 1991):
*the researcher is independent of the research,
*the researcher is inquiry value-free,
*a linear cause-effect relationship exists and is verified and tested by deductive logic and analysis methods.
Without passing judgement on the merits of either approach, this research has generally taken a positivist research approach using a classical experimental design method (Neuman, 2006). A direct consequence of this decision was that a ‘laboratory’ first had to be created in the virtual world that could enable the delivery of the lectures under controlled experimental conditions. We will explore this laboratory in this chapter.
===3.4.2 Research Study===
A virtual learning campus was set up in the virtual world of Second Life where participants were randomly allocated into two groups to participate in either:
#2D Slide Show Lecture: Slides and audio in a class room setting
#3D Augmented Lecture: Slides and audio augmented by ‘life size’ 3D objects and simulations, in a class room setting
By ‘life size’ we mean that the 3D objects appeared larger than the participant in the 3D space and large enough for the participant’s avatar to walk on and around them. The lecture, on ‘The Physics of Bridges’, was presented using identical subject matter, audio and slide timings; the only differences were the presence of the 3D objects and the minimum necessary environmental changes to allow avatar interaction with them.
Before the lecture participants were given a pre-quiz and afterwards a post-quiz to test the learning outcomes of each group with respect to Bloom’s factual knowledge of ‘remember’ and ‘understand’, and a survey collecting qualitative data about the experience. Both groups received identical pre and post quizzes and surveys. The questions in the pre-quiz differed from those in the post quiz. A summary of the experiment design is provided below (Table 7):
{|border="1"
|'''Research Design Summary'''
|-
|'''Research Design'''
|'''Classical Experimental Design'''
|-
|'''Sampling'''
|Random without replacement (i.e. Avatars were prevented from taking either quiz more than once).
111+ selections.
|-
|'''Random Assignment'''
|Yes
|-
|'''Independent Variable'''
|Learning Delivery Method
Virtual 2D Slide Show Lecture vs. 3D Augmented Lecture
◦ Course Delivered: The Physics of Bridges
◦ Time 20 minutes for both
|-
|'''Groups'''
|2D Group: 2D Slide Show Lecture
3D Group: 3D Augmented Lecture
|-
|'''Dependent Variable'''
|Cognitive Learning Outcome
Post-test achievement scores measuring the lecture objectives of Bloom’s:
◦ Factual knowledge of Remember Cognitive process
◦ Factual knowledge of Understand Cognitive process
|-
|'''Instrument'''
|'''Pre-Test'''
Test current factual knowledge of topic before course delivery
'''Post-Test & Survey'''
Retest factual knowledge for ‘Remember’ & ‘Understand’ after course delivery
Survey of participant’s learning experience
|}
'''Table 7. Research Design Summary'''
==3.5 Research Population==
The population and frame for inclusion was the total residents of Second Life, consisting of 16,318,063 users (1,344,215 with logons in the previous 60 days), with demographics of 59% male and 41% female; the largest group, at 35%, was aged between 24 and 34 years, and the entire population was over 18 years of age. The largest share of Second Life residents, 39%, live in the United States of America. Appendix I: Second Life Demographic provides a more detailed breakdown of these statistics (Linden Lab, 2008b).
It was decided to use only current in world users (rather than recruiting new users to participate in world) to avoid the weaknesses of previous research studies discussed in Chapter 2, where participants were learning a new toolset rather than the learning material presented (Martinez et al., 2007; Youngblut, 1998).
==3.6 The Virtual Learning Environment==
The virtual world Second Life was chosen over other virtual world environments in light of the discussion provided in Chapter 2 concerning Architecture Considerations and the review of Educational Research in virtual worlds. Second Life currently provides many benefits over other virtual worlds for open access to learning due to the capabilities of its toolset, which simplifies the rapid import of 2D materials and the construction of 3D interactive environments. Second Life has powerful scripting and modelling tools, standard parts of its interface, that provide a vast range of approaches with which to create the virtual learning environment. Lastly, as noted in Chapter 2, the take-up of Second Life for education purposes by tertiary institutions worldwide numbers in the hundreds.
In the section that follows we will discuss the virtual world learning environment (the ‘laboratory’) that was built in Second Life in order to conduct this research experiment.
====3.6.1.1 Building the Virtual Learning Environment: Design Considerations====
There are two general approaches to the design layout of a virtual space (Corbit, 2002). One separates places within the space into discrete areas between which users move using portals (known as teleports in Second Life); the other is more representative of the real world, where users navigate to different places using such things as pathways between buildings or rooms within the virtual space. Both of these constructs offer advantages depending upon the circumstances. For example, portals offer a simpler method for the user to navigate the space easily and quickly, whereas if one wanted to help the user obtain a sense of placement, presence and collaboration within the virtual environment then the latter may be more appropriate (S. Clark & Maher, 2006), since the user is encouraged to explore the virtual space in order to form a relationship with the environment (Corbit, 2002).
This virtual learning environment was built largely around the first approach where a series of rooms were built and participants navigated the environment using teleports in order to complete the appropriate stage within the experiment, but with the rooms themselves emulating a real world environment with chairs for sitting, lecture rooms with projection screens and foyers, teller machines for delivering participant fees, etc.
The use of teleports not only offered simplicity of navigation but also enabled the control required over the steps in the process for the experimental design approach taken in this research. Teleports allowed the environment to be automated so that participants could operate it without intervention or assistance from the researcher, upholding the positivist research approach of remaining unbiased, inquiry value-free and independent of the experiment under study (Orlikowski & Baroudi, 1991). Furthermore, the use of distinct, purpose-specific rooms connected only by teleports was also indicated for the technical and security reasons that will be discussed later in the System Controls section below.
Further consideration was given to the construction of the rooms themselves, including the look and content of each room. Bellman and Landauer (2000) believe that a key question in the implementation and application of a virtual world is to decide what reality should be made virtual by incorporating “functional realism”. Functional realism is purpose built realism that maintains sufficient realism for the illusionary effects of presence and immersion but does not pursue the goal of absolute realism. Absolute realism in most instances, they believe, only distracts from the real objectives of the environment. For example, implementing window scenes in a university lecture room with passing cars, jets flying through the sky and construction on a neighbouring building may be a realistic scene in the real world, but in a virtual world it would only distract the students from their learning objectives. Applying functional realism not only provides focussed design but also enhances the virtual world by including only key components and excluding any adversities that may be disruptive in the real world. [24]
This virtual learning environment was built upon a real world setting, using a theatre theme, in rooms that were self-contained, with only the essential elements included in order to complete the learning task at hand.
====3.6.1.2 Virtual Learning Campus Overview====
The overall virtual learning campus consisted of a Welcome Room, a Pre-Quiz Room, 6 Lecture Room complexes (containing an arrival foyer, theatre, exit foyer and theatre control room), a Post-Quiz/Survey Room and a central Control Room; Figure 49 provides an overview of the process flow of the virtual learning campus.
The starting area for all visitors was the Welcome Room, where the participant could read about the research, the rules, authority, standards, etc. From this room a participant could take a teleport to the Pre-Quiz Room. On arrival, avatar identity keys were automatically recorded.
After completing the pre-quiz in the Pre-Quiz Room, participants were paid a minimum amount for attending, and could decide either to leave the research project or continue on to a lecture. On commencement and completion of quizzes, avatar identity keys were recorded.
There were 6 Lecture Rooms divided evenly into 2 types of lectures – a 2D audio-slide show presentation or a 3D augmented audio-slide show presentation. Each lecture theatre could hold up to 18 seated participants, and lectures were timed to commence every 10 minutes in pairs.
If participants continued on to the lecture, their completion of the pre-quiz was automatically verified and they were randomly allocated on teleportation to one of these lectures. Once the lecture was complete they could then teleport to the Post-Quiz/Survey Room to be tested on their learning outcome and surveyed on their experience, and finally they were paid for their participation in the research project.
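The verify-then-allocate step can be sketched as follows. This is a hypothetical reconstruction in Python, not the actual in-world scripts (which the thesis does not reproduce); the function and variable names are invented for illustration:

```python
import random

# Hypothetical sketch of the lecture teleport logic: verify that
# the avatar has completed the pre-quiz, then randomly allocate
# it to a 2D or 3D lecture theatre. Names are invented; the real
# system verified completion against an off-world survey database.
completed_pre_quiz = set()   # avatar keys recorded by the survey engine

def allocate_lecture(avatar_key):
    """Return the lecture group for an avatar, or None if the
    pre-quiz has not been completed."""
    if avatar_key not in completed_pre_quiz:
        return None          # gate closed: pre-quiz not done
    return random.choice(["2D", "3D"])

completed_pre_quiz.add("avatar-123")
print(allocate_lecture("avatar-123"))  # "2D" or "3D", chosen at random
print(allocate_lecture("avatar-456"))  # None: never took the pre-quiz
```

The unconditional coin-flip on each teleport mirrors the random assignment requirement of the classical experimental design summarised in Table 7.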
This entire process took approximately 30 minutes for the participant to complete.
The entire virtual campus took approximately one man-month to build [25], with the 3D presentation content taking approximately three times longer to build than the 2D presentation content (approximately 3 days for the 3D presentation and 1 day for the 2D presentation).
In the section that follows a detailed view of each room is provided along with the function of the room.
Figure 49. Environment: Virtual Learning Campus Flow Chart
====3.6.1.3 Welcome Room====
The Welcome Room provided the entry point into the virtual campus (Figure 50). Here participants were provided with information about the research and, if they decided to participate, what would be expected of them within the research experiment.
This room contained four large wall signs and four smaller floor signs in each corner.
The wall signs provided the following information (see Appendix C: Welcome Room Information Content for more details):
*The aim of this research;
*What can I expect?
*How long will it take?
*Payment?
The floor signs provided the participant with a web link to the research explanatory statement (see Appendix C: Welcome Room Information Content for more details) and a virtual note card providing them with the welcome room information that they could hold in their inventory to take away from the research location.
If the participant decided to take part in this research then they took a teleport (the gold rings partially visible in the image) from this room, which transported them to the Pre-Quiz Room.
Figure 50. Environment: Welcome Room
====3.6.1.4 Pre-Quiz Room====
The Pre-Quiz Room was a common area where all participants were given a Pre-Quiz to obtain their level of knowledge of the subject prior to the delivery of the lecture.
A participant would be teleported from the Welcome Room into the centre of this room and instructed by the large sign on the main wall to be seated in order to take the pre-quiz (Figure 51, Left). Once seated, a web-link would be provided to them to take the pre-quiz. This web-link was connected to a survey engine that operated over the internet and stored details in a database outside of the Second Life environment. The survey database recorded the participant’s answers to the pre-quiz along with other details such as the participant’s avatar key (the unique identifier of the Second Life user). The avatar’s key was used to verify that the participant had completed the pre-quiz prior to payment and teleportation into the next scheduled lecture.
Once the participant had completed the pre-quiz they could collect part payment for completing this stage of the research from an ATM along the back wall (Figure 51, Right) and then use a teleport, situated next to the ATMs, to transport them to the next scheduled lecture. Lectures were scheduled every 10 minutes for both the 2D and 3D presentations. A blue beam displayed on the teleport showed the participant that the next lecture was available, and timers beside the ATMs showed the time until the next lecture. On teleportation a participant was randomly allocated to either a 2D or 3D lecture.
Figure 51. Environment: Left Pre-Quiz Room, Right ATMs & Teleporters
====3.6.1.5 Lecture Theatre====
The participant would arrive in the foyer of the lecture theatre where they were instructed via floor signs to switch on their audio and video controls and to be seated inside the lecture theatre (Figure 52).
The slide presentation was delivered using streaming in world web-technology: PowerPoint slides were constructed, saved as HTML files and streamed into Second Life using an in world constructed HTML viewer. Audio streams were also recorded and synchronised to each of these slides throughout the presentation.
Figure 52. Environment: Lecture Theatre
Both the 2D and 3D theatres were set up essentially the same and delivered within the same time frame of approximately 20 minutes of instructional delivery. The only variable that changed was the presence or absence of 3D objects in the delivery method of the presentation.
In the 2D presentation a participant remained seated to watch and listen to the lecture throughout (Figure 53, Left). In the 3D presentation the participant would commence the session seated, but on commencement of the lecture a room would open up behind the front 2D presentation screen and the participant would be automatically transported in their chair and dropped into the 3D presentation space to view the slide show in a specially designed 3D viewing area (Figure 53, Right). Participants in the 3D presentation were then left standing in this space and were able to move around it if they wished. In the 2D mode the front facing projection screen displayed the slides, while in the 3D space the 2D slides were projected on the walls around the 3D viewing space, with the 3D objects created and removed automatically, in sync with the slides and audio, in the centre of (and around) the 3D viewing space.
Figure 53. Environment: Learning Delivery Method
Careful consideration was given to ensuring that both groups received the same instructional information. The only exception was that the pictures contained in the 2D slide presentation were translated into 3D form and either rotated and animated, or positioned for ‘walking on’ or exploration in front of the participant.
Once the lecture had completed the participants for both groups were instructed to move to the exit foyer and teleport to the next phase of the research project via teleports located in the exit foyer. The entrance to the exit foyer and the teleports therein were only switched on after the last slide had been delivered (Figure 54).
Figure 54. Environment: Lecture Room Teleporters
Each lecture theatre contained a hidden control room and a separate bank of teleports (restricted to the administration avatar) connecting it to the other lecture theatres and the central control room, allowing independent movement and invisible monitoring of the lecture rooms; it also contained the control system and communication devices for that lecture theatre.
====3.6.1.6 Post-Quiz Room====
The final phase for participants was to take a post-quiz and survey. This room operated in the same way as the Pre-Quiz Room.
The Post-Quiz Room was a common room where all participants were teleported into the middle of the room after their lecture. A participant would be instructed via the main sign on the wall to be seated in order to take the quiz and survey (Figure 55). Once they had completed the quiz and survey they were instructed to go to the back of the room to collect their final payment for research participation from an ATM. The survey engine recorded completion of the survey and only then allowed payment.
Figure 55. Environment: Post-Quiz Room
====3.6.1.7 Control Room====
At the centre of this system was a Control Room. The Control Room was responsible for managing the 28 public teleports as well as containing separate teleports for members of the administration team. At any time a member of the administration team could bypass the controls contained within the system and move to any room within the environment (Figure 56).
Figure 56. Environment: Control Room
====3.6.1.8 System Controls====
In the design considerations section it was mentioned that this environment was best set up using separated rooms with teleports to navigate the system. This decision allowed for increased security as well as allowing the teleports to operate as control gates.
Within Second Life you can use what is called roaming camera mode to navigate around without moving your avatar. A person can use this mode to view other locations within a definable distance and even operate controls like the sit command, creating a security risk that a participant could bypass steps within the research process. Having rooms located far from each other at random distances in 3D space and connected only by teleports prevented this from occurring. Even if a participant found a way of teleporting to a location that was out of sequence with the research process (eg they had visited before and created a landmark to teleport back, or had given this landmark to another avatar), the teleports, seats and ATMs all communicated with a central off-world web site (containing the survey engine) which verified the proper completion of each required step and acted as a gatekeeper to stop a person from breaching the system.
At every stage when an avatar used a teleport, a quiz seat or an ATM, the device connected to an external database that looked up the avatar’s key to ensure that the appropriate stage had been completed prior to allowing access. For example, a participant had to have completed their pre-quiz prior to entry into a lecture theatre; if they tried to breach this sequence the teleport reported an error message and would not allow them to teleport. As a further example, a participant was required to complete an entire lecture prior to completing the post-survey. The lecture room exit teleports were disabled until the lecture finished, after which a participant could take a teleport to the Post-Quiz Room; in doing so the participant was flagged as having completed the lecture, which enabled them to take the post-quiz and survey.
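In outline, the gatekeeping rule is a simple prerequisite check against the avatar’s recorded progress. The Python sketch below is a hypothetical illustration only: the stage names and the in-memory store are invented, whereas the real system queried an off-world web database from the in-world objects:

```python
# Illustrative gatekeeper: each stage may only be entered once its
# prerequisite stage has been recorded for that avatar key.
# Stage names and the in-memory store are invented for illustration;
# the real system looked these facts up in an external database.
PREREQUISITE = {
    "lecture": "pre_quiz",
    "post_quiz": "lecture",
    "final_payment": "post_quiz",
}

progress = {}  # avatar_key -> set of completed stages

def may_enter(avatar_key, stage):
    """True if the avatar has completed the prerequisite stage."""
    required = PREREQUISITE.get(stage)
    if required is None:
        return True  # e.g. the pre-quiz itself has no prerequisite
    return required in progress.get(avatar_key, set())

def record_completion(avatar_key, stage):
    progress.setdefault(avatar_key, set()).add(stage)

record_completion("avatar-123", "pre_quiz")
print(may_enter("avatar-123", "lecture"))    # True
print(may_enter("avatar-123", "post_quiz"))  # False: lecture not done
```

Because every teleport, seat and ATM consults the same record, an out-of-sequence landmark teleport fails the prerequisite check no matter which device the avatar reaches first.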
Other controls were built into the ATM machines so that a participant could only be paid once, and into the survey system so that a participant could only undertake the research once (they were allowed to attend again if they chose; they just could not take the quizzes or survey again).
This design of the virtual learning campus allowed for an automated system that could operate over 24 hours for multiple participants. It was also fault tolerant to possible SIM crashes, with the entire system able to automatically restart and recover correctly unattended.
Lastly, because it was driven entirely by a specially designed control language held in replaceable text files, the design made for an easily modifiable and manageable system requiring minimal scripting changes to introduce new rules. An entirely new lecture and testing set can be loaded into the system in less than 5 minutes (once the content has been written or built).
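The syntax of that control language is not documented here, but the idea of driving the system from replaceable text files rather than from script edits can be illustrated with a hypothetical key-value format (the file contents, keys and parser below are all invented for illustration):

```python
# Hypothetical illustration of configuration-driven control:
# lecture parameters live in a replaceable text file, so loading a
# new lecture set requires no scripting changes. The file format
# shown here is invented; the thesis does not document the syntax.
CONTROL_FILE = """
lecture_title = The Physics of Bridges
lecture_minutes = 20
schedule_interval_minutes = 10
theatre_capacity = 18
"""

def parse_control(text):
    """Parse simple 'key = value' lines into a settings dict."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

config = parse_control(CONTROL_FILE)
print(config["lecture_title"])      # The Physics of Bridges
print(config["theatre_capacity"])   # 18
```

Swapping in a new lecture then amounts to replacing the text file and reloading, which is consistent with the under-5-minutes turnaround described above.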
==3.7 Learning Task Design==
===3.7.1 Subject Matter===
The subject matter chosen was the Physics of Bridges. This topic was chosen both for its familiarity (everyone knows what a bridge is) and its obscurity (people generally know less than they might initially believe about the detail of how bridges work), and because the content could be easily adapted for both forms of delivery. The level of difficulty was aimed at approximately a year 12 high school student. The content was mainly sourced from academic and government information web-sites. Appendix D: Instruction: Slide Presentation contains the delivered presentation along with a reference list on its last page.
===3.7.2 Instruction Delivery===
A virtual learning system, no matter how good its delivery design, is only as good as the instructional design of the learning task. As discussed in Chapter 2, Learning and Instructional Design Theory, the instruction methods used to assist in the delivery and assessment of the course were Gagne’s Nine Events of Instruction and the revised Bloom’s Taxonomy cognitive domain.
This section provides details of how both the 2D and 3D materials were constructed, for the differences within these deliveries refer to section 3.6.1.5 Lecture Theatre in this chapter.
====3.7.2.1 Gagne====
The theme of this lecture was how the various bridge designs handled the key forces of tension and compression. A variety of bridge designs were explored with respect to these two forces.
Gagne’s 9 stages of instructional delivery were provided for as follows:
#Gaining Attention (Reception): This stage grabs the attention of the participant. A slide show containing a variety of ‘best of’ bridge structures, accompanied by music to motivate and excite the participant for the lecture to follow, was shown while participants arrived in the theatre prior to the commencement of the formal presentation (see Appendix E: Pre-Presentation Slide Show).
#Informing Learners of the Objective (Expectancy): This stage informs participants what new knowledge they can expect to learn. The 2nd slide contained the objectives of the presentation (see Appendix D: Instruction: Slide Presentation). These objectives were also written using the revised Bloom’s taxonomy.
#Stimulating Recall of Prior Learning (Retrieval): This stage tries to place the new information that will be delivered in the context of current knowledge so that participants can relate better to the newly presented information. Every slide that introduced a new bridge structure contained a picture of a real bridge so that the participant could relate real life experience to the new information being presented.
#Presenting the Stimulus (Selective Perception): This is where the learning (or new knowledge) was presented. Each bridge form was presented with an overview, its relationship to tension and compression, and the limitations of the bridge design. The information was chunked into a logical structure. Stages (4) and (5) are interrelated, together aiming to provide the participant with new knowledge in a logical and meaningful context.
#Providing Learning Guidance (Semantic Encoding): This stage presents the information in a deeper form, allowing the participant to encode the new information into long-term memory. Here the information was presented in different forms using both pictures (and, in the case of the 3D group, 3D models) and text. Furthermore, three different concepts (ie overview, tension and compression, and limitations) were provided for each bridge to enhance a participant’s breadth of knowledge of that bridge. The bridges were also presented from simplest to most complex so that participants could gradually understand the concept of a bridge structure and its relationship to tension and compression.
#Eliciting Performance (Responding): This stage of instructional delivery allows the participant to ‘do something’ with their new knowledge. Given we only had 20 minutes to deliver the material, this stage was not performed. If Bloom’s cognitive process of Apply had been tested, then inclusion of this stage would have been imperative. The researcher recognises that although time was a limitation of this study, this stage would ultimately have been interesting to include.
#Providing Feedback (Reinforcement): This stage of instructional delivery is usually performed with feedback from the lecturer to confirm that the participant understood the new knowledge presented. Again, due to time constraints and the type of research method used (experimental design), direct lecturer interaction was not an option, so in order to hold this experiment constant for all participants summary slides were used. These provided a form of feedback by presenting the information again but in a different form to that initially used in the main body of the presentation, forcing some degree of participant thought to process the summary information (and of course, the post-quiz served a similar purpose, but without the learning confirmation).
#Assessing Performance (Retrieval): In this research study this was the final stage of delivery, where participants were provided with the post-quiz to assess their learning outcome.
#Enhancing Retention and Transfer (Generalisation): The final stage of Gagne’s instructional delivery is to generalise and transfer the information delivered in light of new information that may be presented in future. This step was partly performed at stage (7), where the information was summarised. Transfer in normal (ie non-experimental) situations would allow the student to take away their new knowledge, ie the lecture materials. Although this is possible in Second Life, because the experiment had to be conducted under controlled conditions the lecture materials were not transferred to participants.
====3.7.2.2 Bloom’s====
The revised Bloom’s taxonomy (Anderson et al., 2001) provided the overall learning objectives of the course content (and therefore the new knowledge presented throughout the instruction) as well as the way in which participants were tested on this new knowledge. The two learning outcomes this research assessed were ‘Remember’ and ‘Understand’ of Factual Knowledge dimensions of the revised Bloom’s cognitive process as can be seen in Figure 57 below.
Figure 57. The Revised Bloom’s Taxonomy Table: Tested Process Dimensions
Bloom defines ‘remember’ of Factual Knowledge as knowledge that is presented to participants in the learning instruction: the basic elements of the subject matter. For example, the bridge types presented were Beam, Truss, Arch and Suspension. Recalling the names of these bridges is the cognitive process of ‘remember’ of factual information: when tested, participants either remember or they do not.
Bloom defines ‘understand’ of Factual Knowledge as a means of promoting retention of ‘remember’ by linking the new knowledge with a participant’s prior knowledge, so that the participant can do more than just remember: they can use this new knowledge in other forms, such as interpreting, comparing and explaining. Such knowledge is not necessarily presented to them in instruction; rather, it is assimilated from the whole of the information presented through instruction. For example, participants were tested on hybrid bridges but were never instructed on these forms of bridges in the lecture. The participant should have been able to construct this knowledge from the basic bridge forms presented in the lecture.
In applying the revised Bloom’s taxonomy the researcher identified the learning objectives, defined these objectives in terms of one of Bloom’s 19 levels of Cognitive Process (noting that each cognitive category contains specific cognitive processes), built these objectives into the instruction, and then assessed them.
==3.8 Instrumentation==
The instrument used to assess a participant’s learning outcome, as well as their overall learning experience, was a survey. The survey structure used in this research study is shown below (Table 8):
{|align="center"
|-bgcolor="lightgrey"
|Pre-Survey
|''Total questions: 8''
|-bgcolor=white
|Pre-Quiz
|8 multi-choice questions
|-bgcolor=lightgrey
|Post-Survey
|''Total questions: 32''
|-bgcolor=white
|Post-Quiz
|20 multi-choice questions
|-bgcolor=lightgrey
|Survey
|2 Content Knowledge: self-assessment of pre and post knowledge
3 Delivery Method: self-assessment of quality of learning materials
2 Technology: assessment of technical difficulties
5 Learning Experience: assessment of satisfaction with the learning method
|}
<p align="center">
'''''Table 8. Pre and Post Survey Structure'''''
</p>
The survey system used to record the data was a web-based survey system, as discussed in the Virtual Learning Environment section of this chapter (Figure 58).
Figure 58. Web-Based Survey System
===3.8.1 Pre and Post Quiz===
A total of 28 quiz questions were prepared, divided into the two groups of Bloom’s Factual Knowledge: ‘remember’ and ‘understand’ (see section 3.7.2.2 Bloom’s for details of the difference between these two cognitive dimensions). Of these, 8 questions were given to all participants as a pre-quiz and 20 in the post-quiz.
A participant was never tested on the same question twice, nor provided with the answers for either quiz, reducing the likelihood that a participant would learn from the quiz questions rather than from the lecture material presented. The pre-quiz was delivered to the participant prior to the lecture (see Appendix F: Pre-Quiz) and the post-quiz and survey were delivered directly after the lecture (see Appendix G: Post Quiz & Appendix H: Survey).
To construct these questions, Bloom’s Taxonomy provides sample objectives and corresponding assessment examples within each cognitive category. The multiple-choice questions used both direct selection and cueing formats. A direct selection question proposes a statement or asks a question and provides the participant with a list from which to select an answer, while a cueing question provides the participant with a sentence containing a blank space for which the responder selects an appropriate response from a multiple-choice list.
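By way of illustration only, the two question formats can be sketched as simple data structures. The question wording, options and layout below are hypothetical examples in the style described above, not items from the actual quiz instrument.

```python
# Hypothetical examples of the two multiple-choice formats described above.
direct_selection = {
    "format": "direct selection",  # statement/question plus a list of options
    "prompt": "Which bridge type hangs its deck from cables strung between towers?",
    "options": ["Beam", "Truss", "Arch", "Suspension"],
    "answer": "Suspension",
}

cueing = {
    "format": "cueing",  # sentence with a blank space to fill in
    "prompt": "An ____ bridge carries its load mainly in compression along its curve.",
    "options": ["Beam", "Truss", "Arch", "Suspension"],
    "answer": "Arch",
}

for question in (direct_selection, cueing):
    # basic sanity check: the keyed answer must be one of the offered options
    assert question["answer"] in question["options"]
```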
===3.8.2 Survey: Learning Experience===
After a participant completed the post-quiz, a brief survey of 12 questions (questions 21-32) was given to assess a participant’s own perception of their prior and post content knowledge, the delivery method, technological constraints and their learning experience. These comprised 6 Likert-scale questions (5-point scales), 1 yes/no question on technical difficulty along with a general comment to explain any difficulty, 2 questions listing the positive and negative experiences they perceived of the technology as a learning tool, and 2 open-ended questions for general comments about the course delivery and the participant’s overall experience (see Appendix H: Survey Q21-32).
The survey was implemented to assist the researcher in determining whether there may have been any adverse effects on a participant’s performance in completing the knowledge quiz, as well as to assist the researcher in gaining a better understanding of the overall research results and of participants’ relative experiences across the two delivery methods.
===3.8.3 Instrument Reliability===
The Kuder-Richardson Formula 20 (KR-20) was the selected reliability test for the pre- and post-test quiz questions due to the design of the instrument. As the pre-test and post-test were not equivalent, KR-20 measures internal consistency on a single set of survey results (Burns, 2000; Siegle, 2008). KR-20 is widely accepted, by those educators and psychologists who support the concept of instrument reliability, as a satisfactory method for measuring the reliability of a testing instrument (Yount, 2006).
To test the Likert scales in the post-survey, Cronbach's Alpha was used to measure reliability. Cronbach's Alpha is similar in concept to KR-20, but allows for testing of data across scales, whereas KR-20 requires the data to be dichotomously scored (on dichotomously scored data the two in fact produce the same results).
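As an illustration of the relationship between the two statistics, the sketch below computes KR-20 and Cronbach's Alpha from a matrix of item scores (rows are respondents, columns are items). This is a minimal re-implementation of the textbook formulas, not the Siegle spreadsheet used in the study; population (n-denominator) variances are used throughout so that the two coincide exactly on 0/1 data.

```python
from statistics import pvariance

def kr20(items):
    """Kuder-Richardson Formula 20 for dichotomously (0/1) scored items.
    `items`: list of respondent rows, each a list of 0/1 item scores."""
    k = len(items[0])                                  # number of items
    n = len(items)                                     # number of respondents
    p = [sum(row[j] for row in items) / n for j in range(k)]  # proportion correct per item
    total_var = pvariance([sum(row) for row in items])        # variance of total scores
    return (k / (k - 1)) * (1.0 - sum(pi * (1 - pi) for pi in p) / total_var)

def cronbach_alpha(items):
    """Cronbach's Alpha; usable on scaled (e.g. Likert) as well as 0/1 data."""
    k = len(items[0])
    item_vars = sum(pvariance([row[j] for row in items]) for j in range(k))
    total_var = pvariance([sum(row) for row in items])
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```

On a dichotomous score matrix the two functions return identical values, which is the equivalence noted above.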
The overall results of the instrument reliability test were low. The problem with the instrument reliability test is that there were too few questions within each group to obtain a true value for the reliability test. The results along with a discussion of the instrument reliability tests performed are provided in Appendix L: Instrument Reliability Results.
==3.9 Analysis Method==
===3.9.1 Introduction===
As discussed in the Research Method section of this chapter, this research has generally taken a positivist research approach as opposed to an interpretive research approach. A purist approach to research from either side can lead to weaknesses when interpreting results (Onwuegbuzie, 2002; Richardson, 2005; Walsham, 1995; Weber, 2004). Critics argue:
*Positivist: that this method can lead to narrow, non-innovative and repetitive thought, while failing to understand that the selection of data, the method of collection, form of quantification and the tests applied are not themselves objective processes.
*Interpretive: that this method can lead to unresolvable propositions, contextually isolated understandings, non-reproducible observations and ideas sustainable only in the mind of the interpreter.
Thus, in order to minimise the weaknesses of positivist research the researcher has used triangulation. Triangulation in research can be applied in many forms; in this research it has been used as ‘theory triangulation’ as described by Denzin (1978), which involves using multiple theoretical perspectives in order to interpret the data results. Unlike the Denzin perspective, however, where triangulation is used as a means of avoiding bias and validating the data results, this researcher’s reason for applying theory triangulation is to gain a greater understanding of the results by adding range and depth to the quantitative data analysis (Fielding & Fielding, 1986; Olsen, 2004).
===3.9.2 Data Processing===
The survey data was extracted from the database, along with participants’ survey start and finish times, and processed in Microsoft Excel spreadsheets. After conducting a small number of trials with independent trusted respondents, not otherwise part of the assessment, to determine the minimum practical time for completion of the quiz and survey, it was decided that a cut-off time of 2 minutes would be used as the basis for filtering post-surveys. Post-quiz/surveys completed under this time were examined and removed. This time was based upon how long it took the researcher and the trusted respondents to read and respond to only the quiz questions at a medium speed. Each survey was also reviewed for possible fake entry of the quiz answers, e.g. selecting the first or last value for every question. By extracting these surveys it was hoped to lessen the chance of erroneous results.
The survey contained no missing data, because every field except the general comments and technical comment questions was a required response field before a quiz/survey was accepted by the system and saved to the database.
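The filtering rules described above can be sketched as a simple screening function. The field names, timestamp format and the same-option-every-time check below are illustrative assumptions; in the study, flagged records were examined by hand before removal rather than dropped automatically.

```python
from datetime import datetime, timedelta

MIN_TIME = timedelta(minutes=2)  # cut-off derived from the trial completion runs

def flag_for_review(record):
    """Flag a post-quiz/survey record that was finished faster than the
    cut-off, or whose answers select the same option for every question."""
    started = datetime.fromisoformat(record["started"])
    finished = datetime.fromisoformat(record["finished"])
    too_fast = (finished - started) < MIN_TIME
    straight_lined = len(set(record["answers"])) == 1  # e.g. first/last option throughout
    return too_fast or straight_lined
```

A usage sketch: `kept = [r for r in records if not flag_for_review(r)]`, applied after manually reviewing the flagged records.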
===3.9.3 Software===
The software used to analyse the data was: the Microsoft Excel 2007 Data Analysis add-in; STATGRAPHICS Centurion (2009), a statistical software package similar to SPSS; StatCal, an Excel spreadsheet for testing normal distribution developed by David Moriarty (2008); and Del Siegle’s (2008) Excel spreadsheet for testing instrument reliability.
===3.9.4 Quantitative Analysis Methods===
Quantitative research methods are a natural fit with the principles of positivist research, which requires a scientific approach to analysis. Quantitative research can be described as a process of presenting and interpreting data that follows a linear research path, using logical models to measure variables and test a hypothesis that is directly linked to a cause. Analysis is performed using hard data (i.e. numerical), but soft data (i.e. non-numerical) may also be assessed by transforming natural phenomena into numbers using quantification techniques (Neuman, 2006).
====3.9.4.1 Operational Hypotheses====
Quantitative analysis methods require the research hypothesis (as given earlier in the Problem Statement and Research Hypothesis section) to be re-expressed as operational hypotheses, so that each hypothesis forms a tighter and more testable statement (Burns, 2000). From the research hypothesis the following operational hypotheses were formed:
#(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
#(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Statistical analysis requires that testing be performed on a hypothesis of no difference, known as a null hypothesis (H0). Since H1 and H2 are expressed in terms of differences, the corresponding null hypotheses H01 and H02 were tested for no significant difference. If the test of either null hypothesis yields a statistically significant result then that null hypothesis is rejected, accepting the probability that the results of the experiment are unlikely to be a random variation in sampling error and that the conclusions drawn from the sampled population in the experiment can be drawn for the entire research population (Burns, 2000).
The experimental data used to test the above hypotheses were participants’ multiple-choice post-quiz achievement scores. These multiple-choice answers were dichotomously scored (i.e. 0 for a wrong answer, 1 for a correct answer) and analysed as discussed next.
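Dichotomous scoring amounts to comparing each response against the answer key. The helper below, with hypothetical item groupings, also totals a subset of items separately, since the two hypotheses are tested on separate ‘remember’ and ‘understand’ subscales.

```python
def score_quiz(responses, answer_key):
    """Dichotomously score multiple-choice answers: 1 for correct, 0 for wrong."""
    return [1 if given == correct else 0
            for given, correct in zip(responses, answer_key)]

def subscale_total(scores, item_indices):
    """Total a participant's score over one subset of items
    (e.g. the 'remember' questions or the 'understand' questions)."""
    return sum(scores[i] for i in item_indices)
```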
====3.9.4.2 Statistical Significance====
This study used the non-parametric Mann-Whitney U test to test H01 and the parametric t-test for independent groups to test H02. All significance tests used a critical alpha level (α) of 0.05, i.e. a result was considered significant only if the probability (p) of obtaining it by chance was below 5%. The selection of each test was based upon the way in which the hypothesis was formed and whether the data met the assumptions for selecting a parametric test.
Burns (2000, p. 155) provides a flowchart to assist in the selection of a statistical test. As can be seen in Figure 59, the highlighted statistical tests are the test options available in this research study. The test selection is based upon a combination of the data type, hypothesis statement and the sample population selection.
Figure 59. Significance Test Selection
Burns (2000) states that if a researcher has a choice between a parametric and a non-parametric test, it is best to select the parametric test. Parametric tests are more powerful at detecting significant differences than non-parametric tests because they take into account not only the rank order of scores but also the variances between those scores. A parametric test should only be chosen if the experimental data meet three assumptions: that the data be naturally numerical, using interval or ratio scales; of normal distribution; and of homogeneity of variance.
Using Burns’ diagram above: in this study we measure the differences between 2 groups (2D and 3D) where the population was randomly selected, so the data form 2 independent groups. From Burns’ diagram[26] this research study should use either the parametric independent t-test or the non-parametric Mann-Whitney U test; if the data meet the three parametric test assumptions then the parametric test should be chosen over the non-parametric test.
Within the data analysis for significance, it was decided that significant difference would be based upon a 2-tailed hypothesis. Due to the lack of prior research in this area, the researcher had no strong basis for predicting the direction of any difference between the two methods’ results.
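The study ran these tests in its statistical packages; purely to illustrate what each test computes, the sketch below derives the two test statistics from scratch for two small, hypothetical groups of subscale scores. The resulting U statistic would then be compared against a Mann-Whitney table, and the t statistic against a t table with n1 + n2 - 2 degrees of freedom, at α = 0.05.

```python
import math
from statistics import mean, variance

def mann_whitney_u(x, y):
    """Smaller Mann-Whitney U statistic, using midranks for tied scores."""
    combined = sorted(x + y)
    def midrank(v):
        # average of the 1-based positions the value occupies in the pooled ranking
        first = combined.index(v) + 1
        return first + (combined.count(v) - 1) / 2.0
    rank_sum_x = sum(midrank(v) for v in x)
    n_x, n_y = len(x), len(y)
    u_x = rank_sum_x - n_x * (n_x + 1) / 2.0
    return min(u_x, n_x * n_y - u_x)   # report the smaller U, as in printed tables

def independent_t(x, y):
    """Pooled-variance t statistic for two independent groups."""
    n_x, n_y = len(x), len(y)
    pooled = ((n_x - 1) * variance(x) + (n_y - 1) * variance(y)) / (n_x + n_y - 2)
    return (mean(x) - mean(y)) / math.sqrt(pooled * (1 / n_x + 1 / n_y))
```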
=====3.9.4.2.1 Assumptions of Parametric Testing: Tests Performed=====
Prior to testing for significance, the data was tested to see whether it met the assumptions of parametric testing given by Burns above, namely that the data be: 1) naturally numerical, using interval or ratio scales; 2) of normal distribution; and 3) of homogeneity of variance.
The first assumption is that the data be naturally numeric. The pre- and post-quiz scores were interval scaled, so the first assumption of parametric testing was met.
The second assumption is that the data is of normal distribution. There are various methods for testing normal distribution (Fife-Schaw, 2007). This research adopted the following approach:
*The measures of skewness and kurtosis can be used to test for normal distribution. If either skewness or kurtosis departs significantly from zero[27] (by more than ±2 standard errors of skewness (ses) or of kurtosis (sek)), then the results cannot be assumed to be normally distributed (Brown, 1997).
*The D’Agostino-Pearson K2 omnibus test (K2) was chosen as the statistical test of whether the data deviates significantly from normal distribution. This test is regarded as the most powerful Gaussian test, as it is not affected by duplicate values in the data (which the result data contains) (Fife-Schaw, 2007; Graphpad, 2009).
The third assumption is that the data of the two groups do not vary significantly. Levene's F-test was applied to measure whether the variance between the groups differed significantly (NIST, 2006).
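The study used StatCal and STATGRAPHICS for these checks. The fragment below sketches only the first screen, the ±2 standard error rule for skewness and kurtosis, using the common approximations ses ≈ √(6/n) and sek ≈ √(24/n) (the exact small-sample formulas given by Brown differ slightly); the K2 omnibus and Levene's F tests are not reproduced here.

```python
import math

def passes_moment_screen(data):
    """Return True if skewness and excess kurtosis both lie within
    ±2 approximate standard errors of zero (a rough normality screen)."""
    n = len(data)
    m = sum(data) / n
    m2 = sum((v - m) ** 2 for v in data) / n   # 2nd, 3rd and 4th central moments
    m3 = sum((v - m) ** 3 for v in data) / n
    m4 = sum((v - m) ** 4 for v in data) / n
    skewness = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3.0
    ses = math.sqrt(6.0 / n)                   # approximate SE of skewness
    sek = math.sqrt(24.0 / n)                  # approximate SE of kurtosis
    return abs(skewness) <= 2 * ses and abs(excess_kurtosis) <= 2 * sek
```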
====3.9.4.3 Other Tests Performed====
Other tests performed, which will be discussed in the results section, were descriptive statistical analyses for each group using both the pre/post-quiz data and the survey data. These provide further insight into the research results and the differences obtained in this experiment.
The Likert scales in the survey were treated as ordinal data, whose intervals were not assumed to be equal; responses were therefore collapsed into 3 groups: positive, neutral and negative (Jacoby & Matell, 1971).
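Collapsing the 5-point responses into the three ordinal groups can be sketched as below. The mapping (1-2 negative, 3 neutral, 4-5 positive) assumes the scale runs from negative to positive, which is a labelling assumption rather than something stated in the survey instrument.

```python
def collapse_likert(response):
    """Map a 5-point Likert response (1..5) onto three ordinal groups.
    Assumes 1-2 = negative, 3 = neutral, 4-5 = positive."""
    if not 1 <= response <= 5:
        raise ValueError("expected a 5-point Likert response (1-5)")
    if response <= 2:
        return "negative"
    if response == 3:
        return "neutral"
    return "positive"
```

A tally of the three groups then follows from, e.g., `collections.Counter(collapse_likert(r) for r in responses)`.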
===3.9.5 Qualitative Analysis Methods===
Qualitative research methods are a natural fit with an interpretive research approach. Qualitative research is a process of interpreting data by applying ‘logic in practice’ along a non-linear research path. The emphasis is on constructionism, using inductive analysis to generate theory. The data used in analysis is soft data; the researcher analyses the data looking at the ways in which individuals interpret their social construct (Neuman, 2006).
Unlike quantitative analysis, no hypothesis is formed at the start of the study. It is an inductive process in which the main concern of the researcher is to generate and develop new theories based upon interpretation. Qualitative research analysis relies heavily on the application of phenomenological sociology, hermeneutics and ethnography in order to interpret findings (A. Lee, 1991).
In this study qualitative methods have been used to gain an understanding of the overall experience of a participant learning in a virtual world, as well as of any differences they may have experienced in the alternative delivery methods of the lecture.
====3.9.5.1 Analysis Data====
The data in this research study that was analysed using qualitative analysis methods was the post-survey data (see Appendix H: Survey). This survey contained open questions to enable a participant to provide feedback on their learning experience, the instructional delivery and any technical constraints they may have had during their lecture delivery. The technical difficulty question was straightforward: if they answered yes, they could comment on what went wrong. The questions asked in order to understand their perception of virtual world learning and the delivery method were as follows:
*'''DELIVERY METHOD ASSESSMENT''' (Q 25) General Comment:
*'''VIRTUAL WORLD LEARNING EXPERIENCE'''
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
Qualitative analysis of these questions required the application of the hermeneutic method, the process of analysing verbal conversations, text, journals, pictures etc., looking for meaning in the detail and as a whole to reveal the deeper meaning contained within, i.e. ‘reading between the lines’ in order to extract meaning. Within this method a hermeneutic circle is performed, in which interpretation takes an iterative approach: interpreting the whole and its parts, then reinterpreting in light of the new understanding (Klein & Myers, 1999; A. Lee, 1991).
====3.9.5.2 Coding====
Using the hermeneutic method on the survey data as described above, the data was coded into patterns, themes and contextual structures in light of the research problem and literature review. Coding generally takes 3 stages in qualitative analysis: open, axial and selective coding (Neuman, 2006).
Open coding was performed as a preliminary analysis to develop codes that condense the data into specific meanings and themes. This process was performed several times, both before and after the quantitative analysis.
Axial coding was then performed to develop possible relationships between the coded data.
Selective coding, the final stage, was performed to extract major themes and general theory that emerged which will be discussed in the Results section of this paper.
==3.10 Summary==
In this chapter the researcher has discussed the research design, which required the construction of the virtual learning campus and learning materials. The instruments used to collect the data were a pre- and post-quiz and a survey.
This research applies theory triangulation, representing a mixed-method approach to the analysis. Operational hypotheses were drawn from the research problem and will be assessed using quantitative analysis methods. Qualitative analysis will be used to gain a better understanding of the quantitative results as well as of the learning experience of participants.
The next chapter discusses the results of this research project using the methods that were discussed under Analysis Method in this chapter.
</div >
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 3: Research Design=
==3.1 Introduction==
This study measured learning outcomes through the achievement scores of a multiple-choice post-quiz at two cognitive levels of Bloom’s Factual Knowledge, Remember and Understand, for two differently delivered versions of the same lecture in the virtual world of Second Life.
This chapter will discuss the research design of this study along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods used in producing the results discussed in the next chapter of this thesis.
==3.2 Problem Statement and Research Hypothesis==
The problem of this study was to determine the difference in learning outcomes between two randomly selected groups that attended the same lecture in a 3D virtual world using differing methods of delivery. Group 1 received a 2D slide show with pre-recorded audio in a lecture room setting (emulating a classical lecture in a 3D virtual world space) and group 2 received the same lecture augmented with appropriate 3D objects in an appropriately modified virtual 3D theatre space. Both were delivered in the virtual world of Second Life. The research investigated whether a difference in the delivery method (the addition of interactive “life size” 3D models), where instructional design, timing, content and environmental setup are otherwise the same, produces different learning outcomes with respect to the two identified cognitive levels.
To carry out this study the following hypothesis was formed:
Learning outcomes are not independent of the delivery methods in a virtual world, in that varying the delivery method between 2D and a 3D presentation results in a significant difference in the post-quiz achievement scores of a participant in relation to Bloom’s cognitive process of factual knowledge of ‘remember’ and ‘understand’.
==3.3 Research Rationale==
In spite of the extensive current efforts of many institutions and educators to establish a virtual presence and adapt delivery of courses to this newly emerged generation of mass market virtual worlds, little (if any) formal and structured analysis has been undertaken by researchers to assess the comparative cognitive affordances of learning delivery methods in these spaces.
An anecdotal assessment of delivery methods in university campuses and training rooms within Second Life showed a preponderance of virtualised traditional lecture rooms, complete with front-facing chairs, projection screens and even lecterns. The implication is that a significant volume of current delivery in Second Life (at least) is merely virtualising traditional real world delivery. The question arises, however: in a space capable of delivering highly interactive collaborative learning and 3D simulations, potentially for lower input costs than would be required in the real world, is the traditional chalk and talk approach the most appropriate?
There was a significant incentive to distinguish the effectiveness of these two learning approaches. The comparative cost, expertise and effort required to take a set of pre-prepared real world slides with audio and present them in a virtualised classroom is essentially the same as that required for real world delivery (at least in the Second Life virtual world). Even with Second Life’s simplified and efficient 3D building editor and scripting language, a 2D slide show presentation with audio narration can be imported or streamed into Second Life and presented for a fraction of the course preparation time, sophistication of learning materials and skill set required where an interactive 3D simulation is built and utilised. Distinguishing between these two learning approaches would assist educators in determining whether the extra cost (of construction) and time (in design and preparation) involved in developing 3D instructional learning materials is worth the effort, in that it produces better learning outcomes.
==3.4 Research Method==
===3.4.1 Theoretical Assumptions ===
Previous research into education within virtual worlds can be divided into two main areas: research that assesses the affordances of the environment as an educational tool (Dickey, 2003; Gonzalez, 2007; Martinez et al., 2007; Youngblut, 1998), and research that compares virtual world learning outcomes to those of real world learning methods (Kurt, Mike, Jamillah, & Thomas, 2004; Mania & Chalmers, 2001; Youngblut, 1998).
The former usually takes an interpretive research approach, the latter a positivist one. From a purist’s standpoint, these two approaches are at opposite ends of the scale in their theoretical assumptions, which in turn affects how the researcher approaches, conducts and analyses their research.
In an interpretive research approach the researcher adopts an investigative approach to analyse and ‘understand’ the conceptual meaning of the social construct. This approach to research is one of total immersion, experiencing the research from an insider’s view, where the researcher plays a social actor within the social construct (Klein & Myers, 1999; A. Lee, 1991; Orlikowski & Baroudi, 1991).
A positivist analyst takes a very different approach from that of an interpretive analyst. Positivist research follows principles such as (A. Lee, 1991; Orlikowski & Baroudi, 1991):
*the researcher is independent of the research,
*the inquiry is value-free,
*a linear cause-effect relationship exists and is verified and tested by deductive logic and analysis methods.
Without passing judgement on the merits of either approach, this research has generally taken a positivist research approach using a classical experimental design method (Neuman, 2006). A direct consequence of this decision was that a ‘laboratory’ first had to be created in the virtual world that could enable the delivery of the lectures under controlled experimental conditions. We will explore this laboratory in this chapter.
===3.4.2 Research Study===
A virtual learning campus was set up in the virtual world of Second Life, where participants were randomly allocated into two groups to participate in either:
#2D Slide Show Lecture: Slides and audio in a class room setting
#3D Augmented Lecture: Slides and audio augmented by ‘life size’ 3D objects and simulations, in a class room setting
By ‘life size’ we mean that the 3D objects seemed larger than the participant in the 3D space, and large enough for the participant’s avatar to walk on and around them. The lecture, ‘The Physics of Bridges’, was presented using identical subject matter, audio and slide times, the only differences being the presence of the 3D objects and the minimum necessary environmental changes to allow avatar interaction with them.
Before the lecture participants were given a pre-quiz, and afterwards a post-quiz to test the learning outcomes of each group with respect to Bloom’s factual knowledge of ‘remember’ and ‘understand’, together with a survey collecting qualitative data about the experience. Both groups received identical pre- and post-quizzes and surveys. The questions in the pre-quiz differed from those in the post-quiz. A summary of the experiment design is provided below (Table 7):
{|border="1"
|'''Research Design Summary'''
|-
|'''Research Design'''
|'''Classical Experimental Design'''
|-
|'''Sampling'''
|Random without replacement (i.e. Avatars were prevented from taking either quiz more than once).
111+ selections.
|-
|'''Random Assignment'''
|Yes
|-
|'''Independent Variable'''
|Learning Delivery Method
Virtual 2D Slide Show Lecture vs. 3D Augmented Lecture
◦ Course Delivered: The Physics of Bridges
◦ Time 20 minutes for both
|-
|'''Groups'''
|2D Group: 2D Slide Show Lecture
3D Group: 3D Augmented Lecture
|-
|'''Dependent Variable'''
|Cognitive Learning Outcome
Post-test achievement scores measuring the lecture objectives of Bloom’s:
◦ Factual knowledge of Remember Cognitive process
◦ Factual knowledge of Understand Cognitive process
|-
|'''Instrument'''
|'''Pre-Test'''
Test current factual knowledge of topic before course delivery
'''Post-Test & Survey'''
Retest factual knowledge for ‘Remember’ & ‘Understand’ after course delivery
Survey of participant’s learning experience
|}
'''Table 7. Research Design Summary'''
==3.5 Research Population==
The population and sampling frame for inclusion was the total residents of Second Life, which consists of 16,318,063 users (1,344,215 with logons in the last 60 days), with demographics of 59% male and 41% female; the largest group, at 35%, is aged between 24-34 years, with the whole population being over 18 years of age. The majority of Second Life residents, 39%, live in the United States of America. Appendix I: Second Life Demographic provides a more detailed breakdown of these statistics (Linden Lab, 2008b).
It was decided to use only current in-world users (rather than recruiting new users to participate in-world) to avoid the weaknesses of previous research studies discussed in Chapter 2, where participants were learning a new toolset rather than the learning material presented (Martinez et al., 2007; Youngblut, 1998).
==3.6 The Virtual Learning Environment==
The virtual world Second Life was chosen over other virtual world environments in light of the discussion provided in Chapter 2 concerning Architecture Considerations and the review of Educational Research in virtual worlds. Second Life currently provides many benefits over other virtual worlds for open access to learning due to the capabilities of its toolset that simplify the rapid import of 2D materials and construction of 3D interactive environments. Second Life has powerful scripting and modelling tools that come standard as a part of its interface that provide a vast range of approaches with which to create the virtual learning environment. Lastly, as noted in Chapter 2, the take up by tertiary institutions of Second Life for education purposes worldwide numbers in the hundreds.
In the section that follows we will discuss the virtual world learning environment (the ‘laboratory’) that was built in Second Life in order to conduct this research experiment.
====3.6.1.1 Building the Virtual Learning Environment: Design Considerations====
There are two general approaches to the design layout of a virtual space (Corbit, 2002). One separates places within the space into discrete areas, with users moving between them through portals (known as teleports in Second Life); the other is more representative of the real world, with users navigating to different places via such things as pathways between buildings or rooms within the virtual space. Both constructs offer advantages depending upon the circumstances. For example, the portal method offers a simpler way for the user to navigate the space easily and quickly, whereas if one wanted to assist the user in obtaining a sense of placement, presence and collaboration within the virtual environment then the latter may be more appropriate (S. Clark & Maher, 2006), as the user is encouraged to explore the virtual space and so form a relationship with the environment (Corbit, 2002).
This virtual learning environment was built largely around the first approach: a series of rooms was built, and participants navigated the environment using teleports in order to complete the appropriate stage of the experiment, but with the rooms themselves emulating a real world environment, with chairs for sitting, lecture rooms with projection screens and foyers, teller machines for delivering participant fees, etc.
The use of teleports not only offered simplicity of navigation but also enabled the control required over the steps in the process for the experimental design approach taken in this research. Teleports allowed the environment to be easily automated so that participants could operate it without intervention or assistance from the researcher, upholding the positivist research approach by remaining unbiased, inquiry free and independent of the experiment under study (Orlikowski & Baroudi, 1991). Furthermore, the use of distinct, purpose-specific and separate rooms connected only by teleports was also indicated for technical and security reasons that will be discussed later in the System Controls section below.
A further consideration was given to the construction of the rooms themselves, including the look and content of each room. Bellman and Landauer (2000) believe that a key question in the implementation and application of a virtual world is to decide what reality should be made virtual by incorporating “functional realism”. Functional realism is purpose-built realism that maintains sufficient realism for the illusionary effects of presence and immersion but does not pursue the goal of absolute realism. Absolute realism in most instances, they believe, only distracts from the real objectives of the environment. For example, implementing window scenes in a university lecture room with passing cars, jets flying through the sky and construction on a neighbouring building may be a realistic scene in the real world, but in a virtual world it would only distract the students from their learning objectives. Applying functional realism not only provides focussed design but also enhances the virtual world by including only key components and excluding any adversities that may be disruptive in the real world. [24]
This virtual learning environment build was based upon a real world setting, using a theatre theme, in rooms that were self-contained with only the essential elements included in order to complete the learning task at hand.
====3.6.1.2 Virtual Learning Campus Overview====
The overall virtual learning campus consisted of a Welcome Room, a Pre-Quiz Room, 6 Lecture Room complexes (containing an arrival foyer, theatre, exit foyer and theatre control room), a Post-Quiz/Survey Room and a central Control Room; Figure 49 provides an overview of the process flow of the virtual learning campus.
The starting area for all visitors was the Welcome Room, where the participant could read about the research, the rules, authority, standards, etc. From this room a participant could take a teleport to the Pre-Quiz Room. On arrival avatar identity keys were automatically recorded.
After completing the pre-quiz in the pre-quiz room participants were paid a minimum amount for attending and they could decide either to leave the research project, or continue onto a lecture. On commencement and completion of quizzes avatar identity keys were recorded.
There were 6 Lecture Rooms divided evenly into 2 types of lectures – a 2D audio-slide show presentation or a 3D augmented audio-slide show presentation. Each lecture theatre could hold up to 18 seated participants, and lectures were timed to commence every 10 minutes in pairs.
If participants continued on to a lecture, their completion of the pre-quiz was automatically verified and they were randomly allocated on teleportation to one of these lectures. Once the lecture was completed they could then teleport to the Post-Quiz/Survey Room to be tested on their learning outcome and surveyed on their experience, and finally they were paid for their participation in the research project.
This entire process took approximately 30 minutes for the participant to complete.
The entire virtual campus took approximately one man-month to build [25], with the 3D presentation content taking approximately 3 times longer to build than the 2D presentation content (approximately 3 days for the 3D presentation and 1 day for the 2D presentation).
In the section that follows a detailed view of each room is provided along with the function of the room.
Figure 49. Environment: Virtual Learning Campus Flow Chart
====3.6.1.3 Welcome Room====
The Welcome Room provided the entry point into the virtual campus (Figure 50). Here the participants were provided with information about the research and, if they decided to participate, what would be expected of them within the research experiment.
This room contained four large wall signs and four smaller floor signs in each corner.
The wall signs provided the following information (see Appendix C: Welcome Room Information Content for more details):
*The aim of this research;
*What can I expect?
*How long will it take?
*Payment?
The floor signs provided the participant with a web link to the research explanatory statement (see Appendix C: Welcome Room Information Content for more details) and a virtual note card providing them with the welcome room information that they could hold in their inventory to take away from the research location.
If the participant decided to take part in this research then they took a teleport (the gold rings partially visible in the image) from this room, which transported them to the Pre-Quiz Room.
Figure 50. Environment: Welcome Room
====3.6.1.4 Pre-Quiz Room====
The Pre-Quiz Room was a common area where all participants were given a Pre-Quiz to obtain their level of knowledge of the subject prior to the delivery of the lecture.
A participant would be teleported from the Welcome Room into the centre of this room and provided with instructions by the large sign on the main wall to be seated in order to take the pre-survey (Figure 51, Left). Once seated, a web-link would be provided to them to take the pre-quiz. This web-link was connected to a survey engine that operated over the internet and stored details in a database outside of the Second Life environment. The survey database recorded the participant’s answers to the pre-quiz along with other details such as the participant’s avatar key (the unique identifier of the Second Life user). The avatar’s key was used to verify that the participant had completed the pre-quiz prior to payment and teleportation into the next scheduled lecture.
Once the participant had completed the pre-quiz they could collect part payment for completion of this stage of the research from an ATM along the back wall (Figure 51, Right) and then could use a teleport, situated next to the ATMs, to transport them to the next scheduled lecture. The lectures were scheduled every 10 minutes for both the 2D and 3D presentations. A blue beam displayed on the teleport showed the participant that the next lecture was available to teleport to. Timers beside the ATMs showed the time until the next lecture. On teleportation a participant was randomly allocated to either a 2D or 3D lecture.
Figure 51. Environment: Left Pre-Quiz Room, Right ATMs & Teleporters
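The 10-minute scheduling and random 2D/3D allocation described above can be sketched in outline. The sketch below is illustrative only: the actual system was implemented as in-world LSL scripts, and all names and data shapes here are assumptions.

```python
# Illustrative sketch of the scheduling and allocation logic described
# above: lectures commence on 10-minute boundaries, and on teleportation
# a participant is randomly assigned to a 2D or 3D theatre.
import random

CYCLE_SECONDS = 10 * 60   # lectures commence every 10 minutes

def seconds_until_next_lecture(now_seconds: int) -> int:
    """Seconds remaining to the next 10-minute boundary (0 if on one)."""
    return (CYCLE_SECONDS - now_seconds % CYCLE_SECONDS) % CYCLE_SECONDS

def allocate_group(rng=random) -> str:
    """Randomly allocate a teleporting participant to a lecture type."""
    return rng.choice(["2D", "3D"])
```

In such a design, a countdown timer beside the ATMs would display the value of `seconds_until_next_lecture(...)`, and the teleport would call something like `allocate_group()` once the blue beam was active.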
====3.6.1.5 Lecture Theatre====
The participant would arrive in the foyer of the lecture theatre where they were instructed via floor signs to switch on their audio and video controls and to be seated inside the lecture theatre (Figure 52).
The slide presentation was delivered using streaming in-world web technology, where PowerPoint slides were constructed, saved as HTML files and streamed into Second Life using an in-world constructed HTML viewer. Audio streams were also recorded and synchronised to each of these slides throughout the presentation.
Figure 52. Environment: Lecture Theatre
Both the 2D and 3D theatres were set up essentially the same and delivered within the same time frame, which took approximately 20 minutes of instructional delivery. The only variable that changed was the presence or absence of 3D objects in the delivery method of the presentation.
In the 2D presentation a participant would remain seated to watch and listen to the 2D lecture (Figure 53, Left) throughout. In the 3D presentation the participant would commence the session seated, but on commencement of the lecture a room would open up behind the front 2D presentation screen and the participant would be automatically transported in their chair and dropped into the 3D presentation space to view the 3D slide show in a specially designed 3D viewing area (Figure 53, Right). Participants in the 3D presentation were then left standing in this space and were able to move around in it if they wished. In the 2D mode the front-facing projection screen displayed the slides, while in the 3D space the 2D slides were projected on the walls around the 3D viewing space, with the 3D objects created and removed automatically, in sync with the slides and audio, in the centre of (and around) the 3D viewing space.
Figure 53. Environment: Learning Delivery Method
Careful consideration was given so that both groups obtained the same instructional information. The only exception was that the pictures contained in the 2D slide presentation were translated into 3D form and either rotated and animated, or positioned for ‘walking on’ or exploration in front of the participant.
Once the lecture had completed, participants in both groups were instructed to move to the exit foyer and teleport to the next phase of the research project via teleports located there. The entrance to the exit foyer and the teleports therein were only switched on after the last slide had been delivered (Figure 54).
Figure 54. Environment: Lecture Room Teleporters
Each lecture theatre contained a hidden control room and a separate bank of teleports (restricted to the administration avatar) connecting it to the other theatres and the central control room. This allowed for independent movement and invisible monitoring of the lecture rooms, and each control room contained the control system and communication devices for its lecture theatre.
====3.6.1.6 Post-Quiz Room====
The final phase for the participants was to take a post-quiz and survey. This room operated the same as the Pre-Quiz room.
The Post-Quiz Room was a common room where all participants would be teleported into the middle of the room after their lecture. A participant would be instructed via the main sign on the wall to be seated in order to take the quiz and survey (Figure 55). Once they had completed the quiz and survey they were instructed to go to the back of the room to collect the final payment for their research participation from an ATM. The survey engine recorded their completion of the survey and only then allowed payment.
Figure 55. Environment: Post-Quiz Room
====3.6.1.7 Control Room====
At the centre of this system was a Control Room. The Control Room was responsible for managing the 28 public teleports as well as containing separate teleports for members of the administration team. At any time a member of the administration team could bypass the controls contained within the system and move to any room within the environment (Figure 56).
Figure 56. Environment: Control Room
====3.6.1.8 System Controls====
In the design considerations section it was mentioned that this environment was best set up using separated rooms with teleports to navigate the system. This decision increased security and allowed the teleports to operate as control gates.
Within Second Life a user can employ what is called roaming camera mode to navigate around without moving their avatar. A person can use this mode to view other locations within a definable distance and even operate controls like the sit command, creating a security risk that a participant could bypass steps within the research process. Having rooms located far from each other at random distances in 3D space and connected only by teleports prevented this from occurring. Even if a participant found a way of teleporting to a location that was out of sequence with the research process (e.g. they had visited before and created a landmark to teleport back, or had given this landmark to another avatar), the teleports, seats and ATMs all communicated with a central off-world web site (containing the survey engine) which verified the proper completion of each required step and acted as a gatekeeper to stop a person from breaching the system.
At every stage when an avatar used a teleport, a quiz seat or an ATM, that object connected to an external database that would look up the avatar’s key to ensure that the appropriate stage had been completed prior to allowing access. For example, a participant had to have completed their pre-quiz survey prior to entry into a lecture theatre. If they tried to breach this sequence then the teleport reported an error message and would not allow them to teleport. As a further example, a participant was required to complete an entire lecture prior to completing the post-survey. The exit Lecture Room teleports were disabled until the lecture finished, after which a participant could take a teleport to the Post-Quiz Room; in doing so the participant was flagged as having completed the lecture, which enabled them to take the post-quiz and survey.
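The gatekeeper pattern described in this section can be sketched as follows. This is a hedged illustration only: the stage names, data shapes and function names are assumptions, and the actual system consisted of LSL-scripted objects querying an off-world survey database.

```python
# Hypothetical sketch of the stage-verification gate described above.
# Each in-world object (teleport, quiz seat, ATM) looks up the avatar's
# key in an external store to confirm the prerequisite stage is complete.

# Stand-in for the off-world survey database: avatar key -> completed stages.
completed_stages = {
    "avatar-123": {"arrived", "pre_quiz_done"},
}

def gate_allows(avatar_key: str, required_stage: str) -> bool:
    """True only if the avatar has completed the required stage."""
    return required_stage in completed_stages.get(avatar_key, set())

def use_teleport(avatar_key: str, destination: str, required_stage: str) -> str:
    """Teleport only when the prerequisite stage has been completed."""
    if not gate_allows(avatar_key, required_stage):
        return f"Error: complete '{required_stage}' before entering {destination}."
    return f"Teleporting {avatar_key} to {destination}."
```

Under this sketch the lecture theatre teleport would require a pre-quiz completion flag, while the Post-Quiz Room teleport would require a lecture-completion flag.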
Other controls were built into the ATM machines so that a participant could only be paid once, and into the survey system so that a participant could only undertake the research once (they were allowed to attend again if they chose, but could not take the quizzes or survey again).
This design and construction of the virtual learning campus allowed for an automated system that could operate over 24 hours for multiple participants. It was also fault tolerant to possible SIM crashes, with the entire system able to restart and recover correctly unattended.
Lastly, driven entirely by a specially designed control language in replaceable text files, the design made for an easily modifiable and manageable system requiring minimal scripting changes to introduce new rules. An entirely new lecture and testing set could be loaded into the system in less than 5 minutes (once the content had been written or built).
==3.7 Learning Task Design==
===3.7.1 Subject Matter===
The subject matter chosen was the Physics of Bridges. This topic was chosen for its familiarity (everyone knows what a bridge is), its obscurity (people generally know less than they might initially believe about the detail of how bridges work) and because the content could be easily adapted for both forms of delivery. The level of difficulty was aimed at approximately a year 12 high school student. The content was mainly sourced from academic and government information web sites. Appendix D: Instruction: Slide Presentation contains the delivered presentation along with a reference list on its last page.
===3.7.2 Instruction Delivery===
A virtual learning system, no matter how good its delivery design, is only as good as the instructional design of the learning task. As discussed in Chapter 2 Learning and Instructional Design Theory, the instruction methods used to assist in the delivery and assessment of the course were Gagne’s Nine Events of Instruction and the revised Bloom’s Taxonomy Cognitive domain.
This section provides details of how both the 2D and 3D materials were constructed, for the differences within these deliveries refer to section 3.6.1.5 Lecture Theatre in this chapter.
====3.7.2.1 Gagne====
The theme of this lecture was how the various bridge designs handled the key forces of tension and compression. A variety of bridge designs were explored with respect to these two forces.
Gagne’s 9 stages of instructional delivery were provided for as follows:
#Gaining Attention (Reception): This stage grabs the attention of the participant. A slide show was given while participants arrived in the theatre prior to the commencement of the formal presentation that contained a variety of bridge structures that were the ‘best of’ bridges along with music to motivate and excite the participant for the lecture that was to follow (see Appendix E: Pre-Presentation Slide Show).
#Informing Learners of the Objective (Expectancy): This stage informs the participant what new knowledge they can expect to learn. The 2nd slide contained the objectives of the presentation (see Appendix D: Instruction: Slide Presentation). These objectives were also written using the revised Bloom’s taxonomy.
#Stimulating Recall of Prior Learning (Retrieval): This stage relates the new information that will be delivered to the participant’s current knowledge so that they can better assimilate the newly presented information. Every slide that introduced a new bridge structure contained a picture of a real bridge so that the participant could relate real-life experience to the new information being presented.
#Presenting the Stimulus (Selective Perception): This is where the learning (or new knowledge) was presented. Each bridge form was presented with an overview, its relationship to tension and compression, and the limitations of the bridge design. The information was chunked into a logical structure. Stages (4) and (5) are interrelated and together aim to provide the participant with new knowledge in a logical and meaningful context.
#Providing Learning Guidance (Semantic Encoding): This stage presents the information in a deeper form, allowing the participant to encode the new information into their long-term memory. Here the information was presented in different forms using both pictures (and, in the case of the 3D group, 3D models) and text. Furthermore, three different concepts (i.e. overview, tension and compression, and limitations) were provided for each bridge to enhance a participant’s breadth of knowledge of that bridge. The bridges were also presented to the participant from simplest to most complex so that they could gradually understand the concept of a bridge structure and its relationship to tension and compression.
#Eliciting Performance (Responding): This stage of instructional delivery allows the participant to ‘do something’ with their new knowledge. Given only 20 minutes were available to deliver the material, this stage was not performed. If Bloom’s cognitive process of Apply had been tested then inclusion of this stage would have been imperative. The researcher recognises that although time was a limitation to this study, ultimately, this stage would have been interesting to include.
#Providing Feedback (Reinforcement): This stage of instructional delivery is usually performed with feedback from the lecturer to confirm that the participant understood the new knowledge presented. Again, due to time constraints and the type of research method used (experimental design), direct lecturer interaction was not an option, so in order to hold this experiment constant for all participants summary slides were used. These provided a form of feedback by presenting the information again, but in a different form to that initially used in the main body of the presentation, forcing some degree of participant thought to process the summary information (and of course, the post-quiz served a similar purpose, but without the learning confirmation).
#Assessing Performance (Retrieval): In this research study this was the final stage of delivery, where the participants were provided with the post-quiz to assess their learning outcome.
#Enhancing Retention and Transfer (Generalisation): The final stage of Gagne’s instructional delivery is to generalise and transfer the information delivered in light of new information that may be presented in future. This step was partly performed at stage (7), where the information was summarised. Transfer in normal (i.e. non-experimental) situations would allow the student to take away their new knowledge, i.e. the lecture materials. Although this is possible in Second Life, because this experiment had to be conducted under controlled conditions the lecture materials were not transferred to the participant.
====3.7.2.2 Bloom’s====
The revised Bloom’s taxonomy (Anderson et al., 2001) provided the overall learning objectives of the course content (and therefore the new knowledge presented throughout the instruction) as well as the way in which participants were tested on this new knowledge. The two learning outcomes this research assessed were the ‘Remember’ and ‘Understand’ cognitive processes of the Factual Knowledge dimension of the revised Bloom’s taxonomy, as can be seen in Figure 57 below.
Figure 57. The Revised Bloom’s Taxonomy Table: Tested Process Dimensions
Bloom defines ‘remember’ of Factual Knowledge as knowledge presented to participants in the learning instruction, comprising the basic elements of the subject matter. For example, the bridge types presented were Beam, Truss, Arch and Suspension. Recalling the names of these bridges is the cognitive process of ‘remember’ of factual information: participants either remember or they do not when tested.
Bloom defines ‘understand’ of Factual Knowledge as a means of promoting retention of ‘remember’ by linking the new knowledge with the participant’s prior knowledge, enabling them to achieve more than mere recall and to utilise this new knowledge in other forms such as interpreting, comparing and explaining. This knowledge is not necessarily presented to them in instruction but rather is assimilated from the entire body of information presented through instruction. For example, participants were tested on hybrid bridges but were never instructed on these forms of bridges in the lecture. The participant should have been able to construct this knowledge based upon the basic bridge forms presented in the lecture.
In applying the revised Bloom’s taxonomy the researcher identified the learning objectives, defined these objectives in terms of one of Bloom’s 19 specific cognitive processes (noting that each cognitive category contains specific cognitive processes), incorporated these objectives into the instruction, and then assessed them.
==3.8 Instrumentation==
The instrument used to assess a participant’s learning outcome as well as their overall learning experience was in survey form. Below is the survey structure that was used in this research study (Table 8):
{|align="center"
|-bgcolor="lightgrey"
|Pre-Survey
|''Total questions: 8''
|-bgcolor="white"
|Pre-Quiz
|8 multi-choice questions
|-bgcolor="lightgrey"
|Post-Survey
|''Total questions: 32''
|-bgcolor="white"
|Post-Quiz
|20 multi-choice questions
|-bgcolor="lightgrey"
|Survey
|2 Content Knowledge: self-assessment of pre & post knowledge
3 Delivery Method: self-assessment of quality of learning materials
2 Technology: assess technical difficulties
5 Learning Experience: assess satisfaction level in learning method
|}
<p align="center">
'''''Table 8. Pre and Post Survey Structure'''''
</p>
The survey system used to record the data was a web-based survey system, as discussed in the Virtual Learning Environment section of this chapter (Figure 58).
Figure 58. Web-Based Survey System
===3.8.1 Pre and Post Quiz===
A total of 28 quiz questions were prepared, divided into 2 groups corresponding to Bloom’s Factual Knowledge processes of ‘remember’ and ‘understand’ (see section 3.7.2.2 Bloom’s for more details on the difference between these two cognitive dimensions). Of these questions, 8 were given to all participants in the pre-quiz and 20 in the post-quiz.
A participant was never tested on the same question twice or provided the answers for either quiz, reducing the likelihood that a participant would learn from quiz questions rather than the lecture material presented. The pre-quiz was delivered to the participant prior to the lecture (see Appendix F: Pre-Quiz) and the post-quiz and survey was delivered directly after the lecture (see Appendix G: Post Quiz & Appendix H: Survey).
In constructing these questions the researcher drew upon Bloom’s Taxonomy, which provides sample objectives and corresponding assessment examples within each cognitive category. The multiple-choice questions used both direct selection and cueing formats: a direct selection question proposes a statement or asks a question and provides the participant with a list from which to select an answer, while a cueing question provides the participant with a sentence containing a blank space for which the respondent selects an appropriate response from a multiple-choice list.
===3.8.2 Survey: Learning Experience===
After a participant completed the post-quiz, a brief survey of 12 questions (questions 21-32) was given to assess the participant’s own perception of their prior and post content knowledge, the delivery method, technological constraints and their learning experience. These comprised 6 Likert scale questions (5-point scales), 1 yes/no question on technical difficulty along with a general comment to explain any difficulty, 2 questions listing the positive and negative experiences they perceived about the technology as a learning tool, and 2 open-ended questions for general comments about the course delivery and the participant’s overall experience (see Appendix H: Survey Q21-32).
The survey was implemented to assist the researcher in determining whether there had been any adverse effects that may have affected a participant’s performance in completing the knowledge quiz, as well as to assist the researcher in gaining a better understanding of the overall research results and participants’ relative experiences across the two delivery methods.
===3.8.3 Instrument Reliability===
The Kuder-Richardson Formula 20 (KR-20) was selected as the reliability test for the pre- and post-test quiz questions due to the design of the instrument. As the pre-test and post-test were not equivalent forms, KR-20 was used to measure internal consistency on a single set of survey results (Burns, 2000; Siegle, 2008). KR-20 is widely accepted by those educators and psychologists who support the instrument reliability concept as a satisfactory method of measuring the reliability of a testing instrument (Yount, 2006).
Cronbach’s Alpha was used to measure the reliability of the Likert scales in the post-survey. Cronbach’s Alpha is similar in concept to KR-20 but allows for testing of data across scales, whereas KR-20 requires the data to be dichotomously scored (although both in fact produce the same results on dichotomously scored data).
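To make the relationship between the two statistics concrete, the following is a minimal sketch of both formulas using only the Python standard library. The data layout (one list of per-respondent scores per question) and the use of population variance throughout are assumptions of this sketch, not details of the study’s actual tooling (Excel spreadsheets were used).

```python
# Minimal sketches of KR-20 and Cronbach's Alpha. Each element of `items`
# is one question's column of scores across respondents: 0/1 for KR-20,
# scale values (e.g. Likert) for Cronbach's Alpha.

def variance(xs):
    """Population variance, used consistently in both statistics below."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def kr20(items):
    """Kuder-Richardson Formula 20 for dichotomously scored items."""
    k = len(items)                     # number of questions
    n = len(items[0])                  # number of respondents
    totals = [sum(col[i] for col in items) for i in range(n)]
    pq = sum((sum(col) / n) * (1 - sum(col) / n) for col in items)
    return (k / (k - 1)) * (1 - pq / variance(totals))

def cronbach_alpha(items):
    """Cronbach's Alpha, which also accepts multi-point scale items."""
    k = len(items)
    n = len(items[0])
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))
```

On dichotomous data each item’s variance equals p(1 − p), so the two functions return identical values, matching the equivalence noted above.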
The overall results of the instrument reliability tests were low. The problem with these tests was that there were too few questions within each group to obtain a true value for reliability. The results, along with a discussion of the instrument reliability tests performed, are provided in Appendix L: Instrument Reliability Results.
==3.9 Analysis Method==
===3.9.1 Introduction===
As discussed in the Research Method section of this chapter, this research has generally taken a positivist research approach as opposed to an interpretive research approach. A purist approach to research from either side can lead to weaknesses when interpreting results (Onwuegbuzie, 2002; Richardson, 2005; Walsham, 1995; Weber, 2004). Critics argue:
*Positivist: that this method can lead to narrow, non-innovative and repetitive thought, while failing to understand that the selection of data, the method of collection, form of quantification and the tests applied are not themselves objective processes.
*Interpretive: that this method can lead to unresolvable propositions, contextually isolated understandings, non-reproducible observations and ideas sustainable only in the mind of the interpreter.
Thus, in order to minimise the weaknesses of positivist research the researcher has used triangulation. Triangulation in research can be applied in many forms; in this research it has been used as ‘theory triangulation’ as described by Denzin (1978), which involves using multiple theoretical perspectives to interpret the data results. However, unlike the Denzin perspective, where triangulation is used as a means of avoiding bias and validating the data results, this researcher’s reasoning for applying theory triangulation is to gain a greater understanding of the results by adding range and depth to the quantitative data analysis (Fielding & Fielding, 1986; Olsen, 2004).
===3.9.2 Data Processing===
The survey data was extracted from the database along with participants’ survey start and finishing times and processed in Microsoft Excel spreadsheets. After conducting a small number of trials with independent trusted respondents, not otherwise part of the assessment, to determine the minimum practical time for completion of the quiz and survey, it was decided that a cut-off time of 2 minutes would be used as the basis for filtering post-surveys. Post-quiz/surveys completed under this time were examined and removed. This time was based upon how long it took the researcher and the trusted respondents to read and respond to only the quiz questions at a medium speed. Each survey was also reviewed for possible fake entry of the quiz answers, e.g. selecting the first or last option for every question. By extracting these surveys it was hoped to lessen the chance of erroneous results.
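The filtering rules just described can be sketched as follows. The field names are assumptions, and the automatic rejection of same-option responses simplifies the study’s actual procedure, in which flagged surveys were examined before removal.

```python
# Illustrative sketch of the post-survey filtering described above:
# responses completed faster than the 2-minute cut-off are dropped, and
# responses that select the same option for every question are flagged
# as likely fake entries.
from datetime import datetime, timedelta

MIN_DURATION = timedelta(minutes=2)   # cut-off from the timing trials

def keep_response(start: datetime, finish: datetime, answers: list) -> bool:
    """Return True if the survey response passes both screening rules."""
    if finish - start < MIN_DURATION:
        return False          # too fast to be a genuine attempt
    if len(set(answers)) == 1:
        return False          # same option chosen for every question
    return True
```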
No missing data was contained in the survey because every field except the general comments and technical comment questions was a required response field before a quiz/survey was accepted by the system and saved to the database.
===3.9.3 Software===
The software used to analyse the data results comprised the Microsoft Excel 2007 Data Analysis add-in; STATGRAPHICS Centurion (2009), a statistical software package similar to SPSS; StatCal, an Excel spreadsheet for testing normal distribution developed by David Moriarty (2008); and Del Siegle’s (2008) Excel spreadsheet for testing instrument reliability.
===3.9.4 Quantitative Analysis Methods===
Quantitative research methods are a natural fit with the principles of positivist research, which requires a scientific approach to analysis. Quantitative research can be described as a process of presenting and interpreting data that follows a linear research path, using logical models to measure variables and test a hypothesis that is directly linked to a cause. Analysis is performed using hard data (i.e. numerical), but soft data (i.e. non-numerical) may also be assessed by transforming natural phenomena into numbers using quantification techniques (Neuman, 2006).
====3.9.4.1 Operational Hypotheses====
Quantitative analysis methods require the research hypothesis (as given earlier in the Problem Statement and Research Hypothesis section) to be re-expressed as operational hypotheses so that each hypothesis forms a tighter, more testable statement (Burns, 2000). From the research hypothesis the following operational hypotheses were formed:
#(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
#(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Statistical analysis requires testing to be performed on a hypothesis in which no difference exists, known as the null hypothesis (H0). Since H1 and H2 are expressed in terms of differences, the corresponding null hypotheses H01 and H02 were tested for no significant difference. If testing either null hypothesis yields a statistically significant result then that null hypothesis is rejected, accepting the probability that the results of the experiment are unlikely to be random variation from sampling error and that the conclusions drawn from the sampled population in the experiment can be generalised to the entire research population (Burns, 2000).
The experimental data used to test the above hypotheses were the participants' multiple-choice post-quiz achievement scores. The multiple-choice answers were dichotomously scored (i.e. 0 for a wrong answer, 1 for a correct answer) and analysed as discussed next.
====3.9.4.2 Statistical Significance====
This study used the non-parametric Mann-Whitney U Test to test H01 and the parametric t-test for independent groups to test H02. All significance tests used a critical alpha level (α) of 0.05, i.e. a difference was treated as significant only when the probability (p) of obtaining it by chance was less than 5%. The selection of these tests was based upon the way in which each hypothesis was formed and whether the results data met the assumptions of parametric test selection.
Burns (2000, p. 155) provides a flowchart to assist in the selection of a statistical test. As can be seen in Figure 59, the highlighted statistical tests are the test options available in this research study. The test selection is based upon a combination of the data type, hypothesis statement and the sample population selection.
Figure 59. Significance Test Selection
Burns (2000) states that if a researcher has a choice between a parametric and a non-parametric test, it is best to select the parametric test. Parametric tests are more powerful at detecting significant differences than non-parametric tests because they take into account not only the rank order of scores but also the variances between those scores. A parametric test should only be chosen if the experimental data meet three assumptions: that the data be naturally numerical (using interval or ratio scales), normally distributed and of homogeneous variance.
Using Burns’ diagram above, this study measures the differences between 2 groups (2D and 3D) where the population was randomly selected; the data therefore fell into 2 independent groups. From Burns’ diagram[26] this research study should use either the parametric independent t-test or the non-parametric Mann-Whitney U test. If the data meet the three parametric test assumptions then the parametric test should be chosen over the non-parametric test.
Within the data analysis for significance, it was decided that the significant difference would be based upon a two-tailed hypothesis. Due to the lack of prior research in this area, the researcher could not form a strong expectation about the direction of any difference in the test results of the two methods.
=====3.9.4.2.1 Assumptions of Parametric Testing: Tests Performed=====
Prior to testing for significance, the results data was tested to see if it met the assumptions of parametric testing given by Burns above, that is, that the data be: 1) naturally numerical, using interval or ratio scales; 2) of normal distribution; and 3) of homogeneous variance.
The first assumption is that the data be naturally numeric. The pre- and post-quiz scores were interval scaled; therefore the first assumption of parametric testing was met.
The second assumption is that the data is of normal distribution. There are various methods with which you can test for normal distribution (Fife-Schaw, 2007). This research has adopted the following approach:
*The measures of skewness and kurtosis can be used to test for normal distribution. If either skewness or kurtosis departs significantly from zero[27] (beyond ±2 standard errors of skewness (ses) or standard errors of kurtosis (sek)) then the results cannot be assumed to be normally distributed (Brown, 1997).
*The D’Agostino-Pearson K2 omnibus test (K2) was chosen as the statistical test of whether the data deviate significantly from a normal distribution. This test is regarded as the most powerful test of Gaussian distribution because it is not affected by duplicate values in the data (which the results data contains) (Fife-Schaw, 2007; Graphpad, 2009).
The third assumption is that the variances of the two groups do not differ significantly. Levene's F-test was applied to test whether the variance between the groups differed significantly (NIST, 2006).
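To make the three assumption checks concrete, the following sketch shows how they might be run with Python's scipy library. The score arrays are hypothetical stand-ins (the actual quiz data is not reproduced here), and the ±2 standard-error cut-offs use the common simple approximations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
scores_2d = rng.integers(0, 11, size=55).astype(float)  # hypothetical 2D group totals
scores_3d = rng.integers(0, 11, size=56).astype(float)  # hypothetical 3D group totals

def normality_checks(x):
    """Assumption 2: skewness/kurtosis within +/-2 standard errors, plus
    the D'Agostino-Pearson K2 omnibus test (scipy.stats.normaltest)."""
    n = len(x)
    ses = np.sqrt(6.0 / n)    # simple approximation of the standard error of skewness
    sek = np.sqrt(24.0 / n)   # simple approximation of the standard error of kurtosis
    skew_ok = abs(stats.skew(x)) < 2 * ses
    kurt_ok = abs(stats.kurtosis(x)) < 2 * sek
    k2, p = stats.normaltest(x)
    return skew_ok, kurt_ok, p > 0.05  # True => no evidence against normality

print(normality_checks(scores_2d))
print(normality_checks(scores_3d))

# Assumption 3: homogeneity of variance between the groups (Levene's F-test)
f_stat, p_levene = stats.levene(scores_2d, scores_3d)
print("equal variances plausible:", p_levene > 0.05)
```

A parametric test would only be selected when all checks pass for both groups.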
====3.9.4.3 Other Tests Performed====
Other analyses performed, which will be discussed in the results section, are descriptive statistical analyses for each group using both the pre-/post-quiz data and the survey data. These analyses provide further insight into the research results and the differences obtained in this experiment.
The Likert scales in the survey were treated as ordinal data; because the intervals between scale points cannot be assumed equal, responses were collapsed into 3 groups: positive, neutral and negative (Jacoby & Matell, 1971).
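As an illustration only (the response codes below are hypothetical, not the study's data), collapsing a 5-point Likert item into the three ordinal groups might look like:

```python
from collections import Counter

# Hypothetical 5-point responses: 1 = strongly disagree .. 5 = strongly agree
responses = [5, 4, 4, 3, 2, 5, 4, 1, 3, 4]

def collapse(r):
    """Map an ordinal 5-point response onto the 3 groups used in the analysis."""
    if r >= 4:
        return "positive"
    if r == 3:
        return "neutral"
    return "negative"

counts = Counter(collapse(r) for r in responses)
print(counts)  # Counter({'positive': 6, 'neutral': 2, 'negative': 2})
```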
===3.9.5 Qualitative Analysis Methods===
Qualitative research methods are a natural fit with an interpretive research approach. Qualitative research is a process of interpreting data by applying ‘logic in practice’ along a non-linear research path. The emphasis is on constructionism, using inductive analysis for the generation of theory. The data used in analysis is soft data; the researcher analyses the data looking at the ways in which individuals interpret their social construct (Neuman, 2006).
Unlike quantitative analysis, no hypothesis is formed at the start of a study. It is an inductive process in which the main concern of the researcher is to generate and develop new theories based upon interpretation. Qualitative research analysis relies heavily on the application of phenomenological sociology, hermeneutics and ethnography to interpret findings (A. Lee, 1991).
In this study we have used qualitative methods to gain an understanding of a participant's overall experience of learning in a virtual world, as well as any differences that they may have experienced between the alternative delivery methods of the lecture.
====3.9.5.1 Analysis Data====
The data in this research study that was analysed using qualitative analysis methods was the post-survey data (see Appendix H: Survey). This survey contained open questions to enable a participant to provide feedback on their learning experience, instructional delivery and any technical constraints that they may have had during their lecture delivery. The technical difficulty question was straightforward: if they answered yes then they could comment on what went wrong. The questions asked in order to understand their perception of virtual world learning and delivery method were as follows:
*'''DELIVERY METHOD ASSESSMENT''' (Q 25) General Comment:
*'''VIRTUAL WORLD LEARNING EXPERIENCE'''
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
Qualitative analysis of these questions required the application of the hermeneutic method, which is the process of analysing verbal conversations, text, journals, pictures etc., looking for meaning in the detail and as a whole to reveal the deeper meaning contained within - i.e. ‘reading between the lines’ in order to extract meaning. Within this method a hermeneutic circle is performed, in which interpretation takes an iterative approach, interpreting the text as a whole and in its parts, then reinterpreting in light of the new understanding (Klein & Myers, 1999; A. Lee, 1991).
====3.9.5.2 Coding====
Using the hermeneutic method on the survey data as described above, the data was coded into patterns, themes and contextual structures in light of the research problem and literature review. Coding generally takes 3 stages in qualitative analysis – Open, Axial and Selective coding (Neuman, 2006).
Open coding was performed as a preliminary analysis to develop codes that condense data into specific meanings and themes. This process was performed several times both before and after the quantitative analysis.
Axial coding was then performed to develop possible relationships between the coded data.
Selective coding, the final stage, was performed to extract major themes and general theory that emerged which will be discussed in the Results section of this paper.
==3.10 Summary==
In this chapter the researcher has discussed the research design, which required the construction of the virtual learning campus and learning materials. The instruments used to collect the data were a pre-quiz, a post-quiz and a survey.
This research applies theory triangulation, which represents a mixed method approach to the analysis. Operational hypotheses were drawn from the research problem and will be assessed using quantitative analysis methods. Qualitative analysis will be used to gain a better understanding of the quantitative results as well as the learning experience of participants.
The next chapter discusses the results of this research project using the methods that were discussed under Analysis Method in this chapter.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
ccccfae42a86ad81647f0dc7e9d1c0ef9e7c70bf
Real Learning in Virtual Worlds - CHAPTER 4: Results.
0
282
312
311
2018-10-29T11:40:36Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<div class="nonumtoc">
=CHAPTER 4: Results=
==4.1 Introduction==
In this chapter the researcher provides the results of the experiment using the methods discussed in the previous chapter. The results presented are the quantitative and the qualitative results for the virtual world learning experiment conducted in Second Life between two groups of participants, the 2D group and the 3D group, that undertook different methods of delivery of a lecture on The Physics of Bridges.
A quantitative analysis was performed on the pre- and post-quiz scores of the two groups. This analysis includes the statistical test for significant difference of the pre-quiz results and the hypotheses of this experiment, which measured the differences in the learning outcome between the 2D and 3D groups for Bloom’s cognitive processes of ‘remember’ and ‘understand’.
The findings for the post-survey Likert scale questions, which measured the two groups’ responses about their learning experience, are also presented.
A qualitative analysis was performed on the post-survey open questions of both groups, where the data was coded into themes in order to gain a further understanding of the quantitative results as well as the learning experiences of the two groups.
==4.2 Quantitative Analysis Results: Achievement Scores==
In this section the researcher provides the quantitative results for the pre- and post-quiz scores, the significance results for the operational hypotheses, and concludes with the quantitative results of the post survey.
===4.2.1 Overview of Results===
The results of the pre- and post-quiz totals can be seen below in the charted box plots (Figure 60). The left box plot is a traditional box plot, which consolidates information into a single graph.[28] The right plot is the same plot but referenced in percentiles in order to display the variance of the pre- to post-quiz scores. The number of questions in the pre-quiz was 8 and in the post-quiz 20, each evenly divided between Bloom’s cognitive processes of ‘remember’ and ‘understand’.
Figure 60. Results: Pre & Post Quiz- Box Plot
===4.2.2 Pre-Quiz Results===
Table 9 provides the overall results of the 2D and 3D groups for the pre-quiz achievement scores. The pass rate is a measure of how many participants scored 50% or higher on their quiz.[29] The pre-quiz was an 8 question quiz that tested the prior knowledge of a participant before the lecture.
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"|80%
|align="right"|35%
|align="right"|51%
|align="right"|66%
|align="right"|52%
|align="right"|55%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"|2.44
|align="right"|1.25
|align="right"|3.69
|align="right"|2.07
|align="right"|1.60
|align="right"|3.68
|-
|'''Median Score'''
|align="right"|2
|align="right"|1
|align="right"|4
|align="right"|2
|align="right"|2
|align="right"|4
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|3
|align="right"|1
|align="right"|3
|align="right"|3
|align="right"|1
|align="right"|4
|-
|'''Minimum Score'''
|align="right"|0
|align="right"|0
|align="right"|1
|align="right"|0
|align="right"|0
|align="right"|0
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|4
|align="right"|3
|align="right"|6
|align="right"|4
|align="right"|4
|align="right"|7
|-
|'''Standard Deviation'''
|align="right"|1.032
|align="right"|0.775
|align="right"|1.372
|align="right"|1.263
|align="right"|0.867
|align="right"|1.479
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.138
|align="right"|0.261
|align="right"|0.007
|align="right"| -0.195
|align="right"|0.351
|align="right"| -0.188
|-
|'''Kurtosis'''
|align="right"| -0.730
|align="right"| -0.150
|align="right"| -0.718
|align="right"| -1.008
|align="right"|0.037
|align="right"| -0.278
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 9. Pre-Quiz Descriptive Statistical Results'''''</p>
Figure 61 provides an inverse cumulative normal distribution graph for the total pre-quiz scores. This graph tells us what percentage (y-axis) of participants scored under a nominated score (x-axis). For example, 50% of participants in both the 2D and 3D groups scored under 4 in the pre-quiz total score. As can be seen, the 2D and 3D pre-quiz total scores were virtually the same. For a detailed analysis of each of Bloom’s cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
Figure 61. Results: Pre-Quiz Totals - Inverse Cumulative Normal Distribution Graph
Figure 62 provides a histogram and normal distribution curve of the total pre-quiz achievement scores. Both graphs provide frequency distributions but in different forms. The histogram provides the number of participants (frequency, y-axis) that scored between 1 and 8 (x-axis). The Gaussian distribution (or bell curve) provides the probability (y-axis) that a participant would score between 1 and 8 (x-axis) based upon the average and standard deviation of the scores within each group. For a detailed analysis of each of Bloom’s cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
Figure 62. Results: Pre-Quiz Totals - Histogram & Bell Curve
====4.2.2.1 Pre-Quiz Significant Results====
An independent t-test was performed on the pre-quiz total scores to ensure that the groups did not differ significantly in their prior knowledge of the lecture content on ‘The Physics of Bridges’; they did not (t = -0.367, df = 119, two-tailed p = 0.714, α = 0.05).
Although no significant difference was found between the two groups pre-quiz total scores, the scores for each of the Bloom’s cognitive processes of ‘remember’ and ‘understand’ did differ significantly between the groups. The 2D pre-quiz scored significantly higher than the 3D scores for the Bloom’s cognitive process of ‘remember’ (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05). The 3D pre-quiz scored significantly higher than the 2D pre-quiz scores for the Bloom’s cognitive process of ‘understand’ (t = -3.03167, df = 109, one-tailed p = 0.0014, α = 0.05). Appendix J: Pre-Quiz Score Results provides a detailed analysis of these results.
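The reported one-tailed p-values can be recovered from the t statistics and degrees of freedom alone. A brief sketch for the pre-quiz ‘understand’ comparison, using the values quoted above and assuming Python's scipy library is available:

```python
from scipy import stats

# Reported for the pre-quiz 'understand' comparison above
t_stat, df = -3.03167, 109

# One-tailed p = upper-tail probability of the t distribution at |t|
p_one_tailed = stats.t.sf(abs(t_stat), df)
print(round(p_one_tailed, 4))   # close to the reported p = 0.0014
assert p_one_tailed < 0.05      # significant at the alpha = 0.05 level
```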
===4.2.3 Post-Quiz Results===
Table 10 provides the results of the 2D and 3D groups for the post-quiz achievement scores. The post-quiz contained 20 questions, which were divided evenly into two groups of Bloom’s factual cognitive processes of ‘remember’ and ‘understand’; the number of questions within each cognitive process was 10. As with the pre-quiz, the pass rate is a measure of how many participants scored 50% or higher on their quiz.
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"| 85%
|align="right"|35%
|align="right"|67%
|align="right"|93%
|align="right"|36%
|align="right"|77%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"| 7
|align="right"|3.98
|align="right"|10.98
|align="right"|7.32
|align="right"|4.04
|align="right"|11.36
|-
|'''Median Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-
|'''Minimum Score'''
|align="right"|3
|align="right"|0
|align="right"|5
|align="right"|3
|align="right"|1
|align="right"|6
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|10
|align="right"|8
|align="right"|17
|align="right"|10
|align="right"|8
|align="right"|17
|-
|'''Standard Deviation'''
|align="right"|1.846
|align="right"|1.484
|align="right"|2.468
|align="right"|1.597
|align="right"|1.464
|align="right"|2.347
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.642
|align="right"|0.068
|align="right"|0.052
|align="right"| -0.941
|align="right"|0.332
|align="right"| -0.229
|-
|'''Kurtosis'''
|align="right"| -0.729
|align="right"| 0.558
|align="right"| -0.152
|align="right"| 0.672
|align="right"|0.010
|align="right"| 0.265
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 10. Post-Quiz Descriptive Statistical Results'''''</p>
Figure 63 provides an inverse cumulative normal distribution graph for the total post-quiz scores. As was provided above this graph displays what percentage of participants scored under a nominated score.
Figure 63. Results: Post-Quiz Totals Inverse - Cumulative Normal Distribution Graph
Figure 64 provides a histogram and normal distribution curve of the post-quiz scores. As provided above with the pre-quiz graphs these graphs measure the frequency distribution of both the 2D and 3D groups.
Figure 64. Results: Post-Quiz Totals - Histogram & Bell Curve
====4.2.3.1 Post-Quiz Significant Results====
An independent t-test performed on the post-quiz total scores of the 2D group and the 3D group showed that there was no significant difference between the results of these groups (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05). Appendix K: Post-Quiz Score Results provides a detailed analysis of these results.
The next section provides an analysis of the results for each of Bloom’s cognitive processes to test for significant difference between the post-quiz results for the tested hypotheses.
===4.2.4 Hypotheses Results===
As stated in Chapter 3 the operational hypotheses for this research study were as follows:
:(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
:(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
This section discusses the test results for no significant difference using the null hypotheses H01 and H02.
====4.2.4.1 Hypothesis One: Post-Quiz Remember====
Figure 65 provides the histogram and density traces graphs for the post-quiz results, where 10 questions were given to both the 2D and 3D groups for Bloom’s cognitive process of ‘remember’. As discussed in the previous section, the histogram provides the frequency distribution of participants’ scores. The density traces graph has been provided instead of the normal distribution graph because these scores were not normally distributed; it provides an alternative view of frequency that is similar to the histogram.
Figure 65. Results: Post-Quiz Remember - Histogram & Density Traces
'''Hypothesis H<sub>01</sub>'''
The null hypothesis tested H<sub>01</sub>:
:That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>01</sub> was tested using the non-parametric Mann-Whitney U Test, as the results for the post-quiz ‘remember’ scores did not meet the assumptions for parametric testing, which require the scores to be normally distributed. The 3D scores failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05); therefore the scores from this group deviate significantly from normal distribution. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>01</sub>'''
Using the following Mann-Whitney U Test formula to find U:
:<math>U = n_1 n_2 + \frac{n_1(n_1+1)}{2} - R_1</math>
Where:
:<math>n_1</math> = number of group 1 subjects
:<math>n_2</math> = number of group 2 subjects
:<math>R_1</math> = rank total for the group with the smallest rank sum
:<math>W</math> = the critical value of <math>U_1</math>
'''Results H<sub>01</sub>'''
The Mann-Whitney U Test found no significant difference between the 2D and 3D post-quiz ‘remember’ scores, where the average ranked scores were 2D = 53.9364 and 3D = 58.0268, resulting in U = 1653.5, W = 113.5, two-tailed p = 0.493107; thus we do not reject the null hypothesis for α = 0.05. (Note: There is a distinct “observable” difference between these two groups, just not a statistically significant difference. This is explored in the next chapter).
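The U calculation can be sketched as follows. The score arrays here are hypothetical stand-ins (the actual participant scores are not reproduced), and the manual rank-sum formula is cross-checked against scipy's implementation:

```python
import numpy as np
from scipy import stats

group_2d = np.array([5, 7, 8, 6, 9, 7, 8])    # hypothetical 'remember' totals
group_3d = np.array([6, 8, 9, 7, 10, 8, 9])

# Manual U via the rank-sum formula U = n1*n2 + n1*(n1+1)/2 - R1,
# reduced to the smaller of the two complementary U values.
combined = np.concatenate([group_2d, group_3d])
ranks = stats.rankdata(combined)              # average ranks are used for ties
r1 = ranks[: len(group_2d)].sum()
n1, n2 = len(group_2d), len(group_3d)
u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
u_manual = min(u1, n1 * n2 - u1)

# scipy reports U for the first sample; reduce to the smaller U to compare
res = stats.mannwhitneyu(group_2d, group_3d, alternative="two-sided")
u_scipy = min(res.statistic, n1 * n2 - res.statistic)
print(u_manual, u_scipy, res.pvalue)
```

Both routes yield the same U because the two conventions for U are complements that sum to n1·n2.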
====4.2.4.2 Hypothesis Two: Post-Quiz Understand====
Figure 66 provides the histogram and normal distribution curve for Bloom’s cognitive ‘understand’ results of the 2D and 3D groups for the post-quiz achievement scores. As discussed above these graphs display the frequency distribution of both the 2D and 3D groups where 10 questions were given in the post-quiz for Bloom’s cognitive process of ‘understand’.
Figure 66. Results: Post-Quiz Understand - Histogram & Bell Curve
'''Hypothesis H<sub>02</sub>'''
The null hypothesis tested H02:
:That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>02</sub> was tested using the parametric independent t-test of equal variance as the results met the assumptions for parametric testing. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>02</sub>'''
Using the following independent t-test formula (equal variance) to find t:
:<math>t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}</math>
Where:
:<math>\bar{x}_1</math> = the mean of group 1
:<math>\bar{x}_2</math> = the mean of group 2
:<math>n_1</math> = number of group 1 subjects
:<math>n_2</math> = number of group 2 subjects
:<math>s_1</math> = the standard deviation of group 1
:<math>s_2</math> = the standard deviation of group 2
'''Results H<sub>02</sub>'''
An independent t-test found no significant difference (t = -0.1926, df = 109, two-tailed p = 0.8477, α = 0.05) between the results of the 2D (x<sub>1</sub> = 3.982, s<sub>1</sub> = 1.484) and 3D (x<sub>2</sub> = 4.036, s<sub>2</sub> = 1.464) post-quiz ‘understand’ scores; thus we do not reject the null hypothesis.
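As a cross-check, the reported t value can be reproduced from the summary statistics alone using scipy's ttest_ind_from_stats (equal-variance form); the means, standard deviations and group sizes are those quoted above:

```python
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=3.982, std1=1.484, nobs1=55,   # 2D group
    mean2=4.036, std2=1.464, nobs2=56,   # 3D group
    equal_var=True,
)
print(round(t, 4), round(p, 4))  # approximately t = -0.1926, p = 0.8477
```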
===4.2.5 Survey Results: Likert Scales===
Table 11 displays the percentages of the post-survey results divided into content knowledge, delivery method and technology. The content knowledge and delivery method questions were standardised into 3-point scales for analysis.
{|align=center width=80%
|-
|
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightyellow
|
|align=center |'''''Content Knowledge'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|-
|21
| My level of understanding of the topic PRIOR to subject delivery.
|align="right"| 89%
|align="right"| 9%
|align="right"| 2%
|align="right"| 91%
|align="right"| 5%
|align="right"| 4%
|-bgcolor=lightgrey
|22
| My level of understanding of the topic AFTER subject delivery.
|align="right"| 22%
|align="right"| 51%
|align="right"| 27%
|align="right"| 23%
|align="right"| 50%
|align="right"| 27%
|-bgcolor=lightyellow
|
|align=center |'''''Delivery Method & Learning Experience'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|-
|23
|Outline of subject material was clear and informative.
|align="right"| 98%
|align="right"| 2%
|align="right"| 0%
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|-bgcolor=lightgrey
|24
|The lecture was detailed enough to provide an understanding of subject matter.
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|align="right"| 93%
|align="right"| 7%
|align="right"| 0%
|-
|28
|I found the in-world experience offered me a better learning experience than my usual methods of learning
|align="right"| 74%
|align="right"| 22%
|align="right"| 4%
|align="right"| 73%
|align="right"| 25%
|align="right"| 2%
|-bgcolor=lightgrey
|29
|I found the subject material to be appropriate to virtual world learning
|align="right"| 84%
|align="right"| 13%
|align="right"| 3%
|align="right"| 79%
|align="right"| 18%
|align="right"| 3%
|-bgcolor=lightyellow
|
|align=center |'''''Technology'''''
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|-
|26
|During the course I experienced technical difficulties with the environment
|align="right"|91%
|align="right"|9%
|align="right"|
|align="right"|93%
|align="right"|7%
|align="right"|
|}
<p align=center >'''''Table 11. Survey Likert Scales Results'''''</p>
The content knowledge questions addressed the participant’s subjective impression of their knowledge before and after attending the presentation. Both groups perceived an increase in their understanding of the subject matter after the lecture. The delivery method questions measured the subjective satisfaction levels with the virtual world 2D or 3D delivery methods (as appropriate). Both the 2D and 3D groups indicated very high levels of satisfaction. The technology question assessed whether a participant had any technological constraints on their reception of the learning material. As the results above show, only a few participants experienced technological problems.
==4.3 Qualitative Analysis Results==
===4.3.1 Introduction===
Qualitative analysis was performed using the methods discussed in the 3.9.5 Qualitative Analysis Methods section of this thesis for the 2D and 3D groups’ open question set (25, 30, 31 and 32) contained in the post survey. In this section we present a brief overview of how the analysis was performed and the major themes that emerged from the qualitative analysis results. Interpretation of these results will be discussed in the next chapter of this thesis.
===4.3.2 Analysis Approach===
Hermeneutic analysis of the post-survey open questions was performed using an iterative approach in order to code data into contextual structures and common themes amongst the 2D and 3D post-survey responses. Data was first condensed into 2D and 3D categories and further into the individual question categories. Open coding uncovered general themes within each question; to further assist in this stage of coding, a participant’s entire question responses were read as a whole in order to reveal the entire context of their individual responses. Axial coding was performed once a generic set of themes emerged, to form relationships between the entire set of 2D and 3D group question responses. Open coding and axial coding took several iterations before selective coding was performed, revealing 4 major themes along with sub-themes that can be seen in Table 12 below. These themes, along with their meanings, are discussed below.
===4.3.3 Themes of the Open Survey Questions===
The open questions were as follows:
*DELIVERY METHOD ASSESSMENT (Q 25) General Comment:
*VIRTUAL WORLD LEARNING EXPERIENCE
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
{| align="center" style="border-collapse: collapse; border-width: 1px; border-style: solid; border-color: #000"
|-
|align=center|'''''Theme'''''
|align=center|'''''Sub-Theme'''''
|-bgcolor=lightgrey
|'''Virtual World Learning'''
|
|-
|'''Virtual Learning Campus'''
|
|-bgcolor=lightgrey
|'''Lecture Delivery'''
|
*Format
*Information Content
*Learning
*Facets of 3D Learning
*Instruction
*Focus
*Navigation
*Technical Constraints
|-
|'''Survey Instrument'''
|
|}
<p align=center >'''''Table 12. Qualitative Analysis Results: Themes'''''</p>
The above themes were classified as follows:
*'''Virtual World Learning''': This category included the aspects of a participant’s experience while using the virtual world as a learning platform. The types of comments contained in this category were not specific to the experiment but rather to the experience of the virtual world medium as a learning tool: the general features and characteristics of a virtual world that a participant disliked or liked about using this method of learning, and their overall impression of using the virtual world as a learning platform.
*'''Virtual Learning Campus''': This category included comments about the virtual learning campus experience. These comments related specifically to the set-up and operation of the entire virtual learning environment within the virtual world.
*'''Lecture Delivery''': This category was the major category that included comments about the lecture experience of a participant that was specific to the lecture delivery treatment they received. This category contained sub-themes as follows:
**'''Format''': The style and layout of the presentation, how the information was presented.
**'''Information Content''': The depth and breadth of information content presented about the topic (The Physics of Bridges).
**'''Learning''': The aspects of obtaining new knowledge.
**'''Facets of 3D Learning''': This theme contained only comments from the 3D group, their perception of the use of 3D models as a learning tool in delivery.
**'''Instruction''': The method by which knowledge was transferred from the instructor to the learner, the interface between the presentation and the learner.
**'''Focus''': The observations affecting attention and the temporal experience of a participant within the virtual world whilst they were learning.
**'''Navigation''': Comments that related to controlling their avatar within the lecture theatre.
**'''Technical Constraints''': Comments that related to technical constraints that a participant experienced during the lecture.
*'''Survey Instrument''': This category included comments that related to the pre or post quiz of the participant.
Figure 67 provides a diagram of the relationship of these themes in the context of the qualitative analysis performed on the survey results. In the next chapter we will discuss the results of this qualitative analysis.
Figure 67. Qualitative Analysis: Relationship of Virtual World Learning Themes
==4.4 Summary==
In this chapter we presented the quantitative and qualitative results of the research study.
A quantitative analysis was performed for both the 2D and 3D groups where the number of participants was 55 and 56 respectively. The pass rate for both the 2D and 3D groups’ pre-quiz scores was 51% and 55% respectively.
A significance test performed on the results of the total pre-quiz showed no significant difference between the scores of each group. Significance tests performed on Bloom’s cognitive processes of ‘remember’ and ‘understand’ showed a significant difference between the groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ and the 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’.
The post-quiz pass rate for both the 2D and 3D groups’ total post-quiz score was 67% and 77% respectively. In spite of this, the results for the significance tests performed for Bloom’s cognitive process of ‘remember’ and ‘understand’ for the hypothesis showed no significance differences between the 2D and the 3D groups learning outcomes.
The post-survey results for the Likert scale questions was presented that provided the results dividend into positive, neutral and negative percentiles for both of the groups.
A qualitative analysis performed on the open-questions contained in the post survey revealed 4 major themes in the survey comments of both groups combined, these themes were:
#Virtual world learning environment,
#Virtual learning campus,
#Lecture delivery and
#Survey components of the research study.
A definition for each of these themes was provided along with a relationship diagram.
The next chapter we discuss the results presented in this chapter.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
bef6a7b89e0b84e573ccf0862b187de979e91196
366
312
2018-10-29T12:02:35Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<div class="nonumtoc">
=CHAPTER 4: Results=
==4.1 Introduction==
In this chapter the researcher provides the results of the experiment using the methods discussed in the previous chapter. The results presented are the quantitative and qualitative results for the virtual world learning experiment conducted in Second Life between two groups of participants, the 2D group and the 3D group, who received different methods of delivery of a lecture on The Physics of Bridges.
A quantitative analysis was performed on the pre- and post-quiz scores of the two groups. This analysis includes the statistical test for significant difference of the pre-quiz results and the tests of this experiment's hypotheses, which measured the differences in learning outcomes between the 2D and 3D groups for Bloom’s cognitive processes of ‘remember’ and ‘understand’.
The findings for the post-quiz Likert scale questions, which measured the two groups’ responses to the learning experience survey, are also presented.
A qualitative analysis was performed on the post-survey open questions of both groups, where the data was coded into themes in order to gain a further understanding of the quantitative results as well as the learning experiences of the two groups.
==4.2 Quantitative Analysis Results: Achievement Scores==
In this section the researcher provides the quantitative results for the pre- and post-quiz scores and the significance results for the operational hypotheses, and concludes with the quantitative results of the post survey.
===4.2.1 Overview of Results===
The results of the pre- and post-quiz totals can be seen below in the charted box plots (Figure 60). The left box plot is a traditional box plot, which consolidates information into a single graph.[28] The right plot is the same plot but referenced in percentiles in order to display the variance of the pre- to post-quiz scores. The number of questions in the pre-quiz was 8 and in the post-quiz 20, each of which was evenly divided between Bloom’s cognitive processes of ‘remember’ and ‘understand’.
Figure 60. Results: Pre & Post Quiz- Box Plot
===4.2.2 Pre-Quiz Results===
Table 9 provides the overall results of the 2D and 3D groups for the pre-quiz achievement scores. The pass rate is a measure of how many participants scored 50% or higher on their quiz.[29] The pre-quiz was an 8-question quiz that tested a participant's prior knowledge before the lecture.
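As a concrete reading of this pass criterion, the calculation can be sketched in Python (a hypothetical helper with illustrative scores, not the study's raw data):

```python
def pass_rate(scores, max_score):
    """Percentage of participants scoring 50% or more of the maximum score,
    the pass criterion used for Tables 9 and 10."""
    passed = sum(1 for s in scores if s >= max_score / 2)
    return 100 * passed / len(scores)

# Illustrative 8-question pre-quiz scores: 5 of these 8 participants reach 4/8.
rate = pass_rate([1, 3, 4, 4, 5, 2, 6, 7], 8)  # 62.5
```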
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"|80%
|align="right"|35%
|align="right"|51%
|align="right"|66%
|align="right"|52%
|align="right"|55%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"|2.44
|align="right"|1.25
|align="right"|3.69
|align="right"|2.071
|align="right"|1.60
|align="right"|3.68
|-
|'''Median Score'''
|align="right"|2
|align="right"|1
|align="right"|4
|align="right"|2
|align="right"|2
|align="right"|4
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|3
|align="right"|1
|align="right"|3
|align="right"|3
|align="right"|1
|align="right"|4
|-
|'''Minimum Score'''
|align="right"|0
|align="right"|0
|align="right"|1
|align="right"|0
|align="right"|0
|align="right"|0
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|4
|align="right"|3
|align="right"|6
|align="right"|4
|align="right"|4
|align="right"|7
|-
|'''Standard Deviation'''
|align="right"|1.032
|align="right"|0.775
|align="right"|1.372
|align="right"|1.263
|align="right"|0.867
|align="right"|1.479
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.138
|align="right"|0.261
|align="right"|0.007
|align="right"| -0.195
|align="right"|0.351
|align="right"| -0.188
|-
|'''Kurtosis'''
|align="right"| -0.730
|align="right"| -0.150
|align="right"| -0.718
|align="right"| -1.008
|align="right"|0.037
|align="right"| -0.278
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 9. Pre-Quiz Descriptive Statistical Results'''''</p>
Figure 61 provides an inverse cumulative normal distribution graph for the total pre-quiz scores. This graph tells us what percentage (y-axis) of participants scored under a nominated score (x-axis). For example, 50% of participants in both the 2D and 3D groups scored under 4 in the pre-quiz total score. As can be seen, the 2D and 3D pre-quiz total score distributions were effectively the same. For a detailed analysis of each of Bloom’s cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
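Read this way, the graph is an empirical cumulative distribution. A minimal sketch of the underlying calculation, using illustrative scores rather than the study's raw data:

```python
def percent_under(scores, nominated):
    """Percentage of participants (y-axis) who scored under the
    nominated score (x-axis), as read off a cumulative distribution graph."""
    return 100 * sum(1 for s in scores if s < nominated) / len(scores)

# With these illustrative scores, 37.5% of participants scored under 4.
share = percent_under([1, 2, 3, 4, 4, 5, 6, 7], 4)  # 37.5
```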
Figure 61. Results: Pre-Quiz Totals - Inverse Cumulative Normal Distribution Graph
Figure 62 provides a histogram and normal distribution curve of the total pre-quiz achievement scores. Both graphs provide frequency distributions but in different forms. The histogram provides the number of participants (frequency, y-axis) that scored between 1 and 8 (x-axis). The Gaussian distribution (or bell curve) provides the probability (y-axis) that a participant would score between 1 and 8 (x-axis) based upon the average and standard deviation of the scores within each group. For a detailed analysis of each of Bloom’s cognitive processes for the pre-quiz see Appendix J: Pre-Quiz Score Results.
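The bell curves described here follow directly from each group's mean and standard deviation. A sketch of the density calculation, using the 2D group's pre-quiz summary statistics from Table 9:

```python
import math

def normal_pdf(x, mean, sd):
    """Gaussian probability density at score x for a group whose scores
    have the given mean and standard deviation."""
    coeff = 1 / (sd * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mean) ** 2) / (2 * sd ** 2))

# 2D group pre-quiz totals (mean 3.69, sd 1.372): the curve peaks at the mean.
peak = normal_pdf(3.69, 3.69, 1.372)
```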
Figure 62. Results: Pre-Quiz Totals - Histogram & Bell Curve
====4.2.2.1 Pre-Quiz Significant Results====
An independent t-test was performed on the pre-quiz total scores to ensure that the groups did not differ significantly in their prior knowledge of the lecture content on ‘The Physics of Bridges’; they did not (t = -0.367, df = 119, two-tailed p = 0.714, α = 0.05).
Although no significant difference was found between the two groups’ pre-quiz total scores, the scores for each of Bloom’s cognitive processes of ‘remember’ and ‘understand’ did differ significantly between the groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05). The 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’ (t = -3.03167, df = 109, one-tailed p = 0.0014, α = 0.05). Appendix J: Pre-Quiz Score Results provides a detailed analysis of these results.
===4.2.3 Post-Quiz Results===
Table 10 provides the results of the 2D and 3D groups for the post-quiz achievement scores. The post-quiz contained 20 questions, which were divided evenly between Bloom’s factual cognitive processes of ‘remember’ and ‘understand’, giving 10 questions per cognitive process. As with the pre-quiz, the pass rate is a measure of how many participants scored 50% or higher on their quiz.
{|align=center width=50%
|-
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightgrey padding=4
|
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|align=center |'''''Rem'''''
|align=center |'''''Und'''''
|align=center |'''''Total'''''
|-
|'''Pass Rate'''
|align="right"| 85%
|align="right"|35%
|align="right"|67%
|align="right"|93%
|align="right"|36%
|align="right"|77%
|-bgcolor=lightgrey
|'''Average Score'''
|align="right"| 7
|align="right"|3.98
|align="right"|10.98
|align="right"|7.32
|align="right"|4.04
|align="right"|11.36
|-
|'''Median Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-bgcolor=lightgrey
|'''Mode Score'''
|align="right"|8
|align="right"|4
|align="right"|11
|align="right"|8
|align="right"|4
|align="right"|12
|-
|'''Minimum Score'''
|align="right"|3
|align="right"|0
|align="right"|5
|align="right"|3
|align="right"|1
|align="right"|6
|-bgcolor=lightgrey
|'''Maximum Score'''
|align="right"|10
|align="right"|8
|align="right"|17
|align="right"|10
|align="right"|8
|align="right"|17
|-
|'''Standard Deviation'''
|align="right"|1.846
|align="right"|1.484
|align="right"|2.468
|align="right"|1.597
|align="right"|1.464
|align="right"|2.347
|-bgcolor=lightgrey
|'''Skewness'''
|align="right"| -0.642
|align="right"|0.068
|align="right"|0.052
|align="right"| -0.941
|align="right"|0.332
|align="right"| -0.229
|-
|'''Kurtosis'''
|align="right"| -0.729
|align="right"| 0.558
|align="right"| -0.152
|align="right"| 0.672
|align="right"|0.010
|align="right"| 0.265
|-bgcolor=lightgrey
|'''Number of Participants'''
|align="right"|55
|align="right"|55
|align="right"|55
|align="right"|56
|align="right"|56
|align="right"|56
|}
<p align=center >'''''Table 10. Post-Quiz Descriptive Statistical Results'''''</p>
Figure 63 provides an inverse cumulative normal distribution graph for the total post-quiz scores. As was provided above this graph displays what percentage of participants scored under a nominated score.
Figure 63. Results: Post-Quiz Totals Inverse - Cumulative Normal Distribution Graph
Figure 64 provides a histogram and normal distribution curve of the post-quiz scores. As with the pre-quiz graphs above, these graphs show the frequency distributions of both the 2D and 3D groups.
Figure 64. Results: Post-Quiz Totals - Histogram & Bell Curve
====4.2.3.1 Post-Quiz Significant Results====
An independent t-test performed on the post-quiz total scores of the 2D group and the 3D group showed that there was no significant difference between the results of these groups (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05). Appendix K: Post-Quiz Score Results provides a detailed analysis of these results.
The next section provides an analysis of the results for each of Bloom’s cognitive processes to test for significant differences between the post-quiz results for the tested hypotheses.
===4.2.4 Hypotheses Results===
As stated in Chapter 3 the operational hypotheses for this research study were as follows:
:(H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
:(H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
This section will discuss the test results for no significant difference using the null hypotheses H<sub>01</sub> and H<sub>02</sub>.
====4.2.4.1 Hypothesis One: Post-Quiz Remember====
Figure 65 provides the histogram and density traces graphs for the post-quiz results, where 10 questions were given to both the 2D and 3D groups for the Bloom’s cognitive process of ‘remember’. As discussed in the previous section, the histogram provides the frequency distribution of participants’ scores. The density traces graph has been provided instead of the normal distribution graph because these scores were not normally distributed. The density traces graph provides an alternative view of frequency that is similar to the histogram.
Figure 65. Results: Post-Quiz Remember - Histogram & Density Traces
'''Hypothesis H<sub>01</sub>'''
The null hypothesis tested H<sub>01</sub>:
:That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>01</sub> was tested using the non-parametric Mann-Whitney U Test, as the results for the post-quiz ‘remember’ scores did not meet the assumptions for parametric testing, which requires the scores to be normally distributed. The 3D scores failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05); therefore the scores from this group deviate significantly from a normal distribution. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>01</sub>'''
Using the following Mann-Whitney U Test formula to find U:
:U<sub>1</sub> = n<sub>1</sub>n<sub>2</sub> + n<sub>1</sub>(n<sub>1</sub> + 1)/2 - R<sub>1</sub>
Where:
:n<sub>1</sub> = number of group 1 subjects
:n<sub>2</sub> = number of group 2 subjects
:R<sub>1</sub> = rank total for the group with the smallest rank sum
:W = the critical value of U<sub>1</sub>
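A sketch of this U calculation in Python (a hypothetical implementation that takes the rank total of the first group and averages tied ranks; the thesis's own software may differ in detail):

```python
def mann_whitney_u(group1, group2):
    """Mann-Whitney U statistics for two independent samples.
    Tied values receive the average of the ranks they span."""
    combined = sorted((value, idx) for idx, value in enumerate(group1 + group2))
    n = len(combined)
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 2) / 2  # 1-based ranks i+1 .. j+1, averaged
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(group1), len(group2)
    r1 = sum(ranks[:n1])  # rank total for group 1
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    return u1, n1 * n2 - u1  # U for group 1 and for group 2
```

For example, `mann_whitney_u([1, 2, 3], [4, 5, 6])` returns `(9.0, 0.0)`: every value in the second sample outranks every value in the first, so U reaches its extremes.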
'''Results H<sub>01</sub>'''
When applied, the Mann-Whitney U Test found no significant difference between the 2D and 3D post-quiz ‘remember’ scores, where the average ranked scores (2D = 53.9364, 3D = 58.0268) resulted in U = 1653.5, W = 113.5, two-tailed p = 0.493107; thus we do not reject the null hypothesis for α = 0.05. (Note: there is a distinct “observable” difference between these two groups, just not a statistically significant difference. This is explored in the next chapter.)
====4.2.4.2 Hypothesis Two: Post-Quiz Understand====
Figure 66 provides the histogram and normal distribution curve for Bloom’s cognitive ‘understand’ results of the 2D and 3D groups for the post-quiz achievement scores. As discussed above these graphs display the frequency distribution of both the 2D and 3D groups where 10 questions were given in the post-quiz for Bloom’s cognitive process of ‘understand’.
Figure 66. Results: Post-Quiz Understand - Histogram & Bell Curve
'''Hypothesis H<sub>02</sub>'''
The null hypothesis tested H<sub>02</sub>:
:That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in no significant difference in post-quiz scores between 2D and 3D participants.
H<sub>02</sub> was tested using the parametric independent t-test of equal variance as the results met the assumptions for parametric testing. Appendix K: Post-Quiz Score Results provides a detailed analysis of the parametric testing results.
'''Formula H<sub>02</sub>'''
Using the following equal-variance t-test formula to find t:
:t = (x̄<sub>1</sub> - x̄<sub>2</sub>) / √(s<sub>p</sub><sup>2</sup>(1/n<sub>1</sub> + 1/n<sub>2</sub>)), where s<sub>p</sub><sup>2</sup> = ((n<sub>1</sub> - 1)s<sub>1</sub><sup>2</sup> + (n<sub>2</sub> - 1)s<sub>2</sub><sup>2</sup>) / (n<sub>1</sub> + n<sub>2</sub> - 2)
Where:
:x̄<sub>1</sub> = the mean of group 1
:x̄<sub>2</sub> = the mean of group 2
:n<sub>1</sub> = number of group 1 subjects
:n<sub>2</sub> = number of group 2 subjects
:s<sub>1</sub> = the standard deviation of group 1
:s<sub>2</sub> = the standard deviation of group 2
'''Results H<sub>02</sub>'''
The results of an independent t-test found no significant difference (t = -0.1926, df = 109, two-tailed p = 0.8477, α = 0.05) between the results of the 2D (x1 = 3.982, s1 = 1.484) and 3D (x2 = 4.036, s2 = 1.464) post-quiz ‘understand’ scores, thus we do not reject the null hypothesis.
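The reported statistic can be reproduced from the summary figures alone. A sketch of the pooled (equal-variance) t calculation; small rounding differences against the reported t = -0.1926 are expected since the published means and standard deviations are rounded:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Equal-variance two-sample t statistic and degrees of freedom,
    computed from summary statistics rather than raw scores."""
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    t = (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Post-quiz 'understand': 2D (mean 3.982, sd 1.484, n 55) vs 3D (4.036, 1.464, 56).
t, df = pooled_t(3.982, 1.484, 55, 4.036, 1.464, 56)  # t close to -0.193, df = 109
```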
===4.2.5 Survey Results: Likert Scales===
Table 11 displays the percentages of the post-survey results divided into content knowledge, delivery method and technology. The content knowledge and delivery method questions were standardised onto 3-point scales for analysis.
{|align=center width=80%
|-
|
|
|colspan="3" align=center bgcolor=lightblue |2D Group
|colspan="3" align=center bgcolor=#DDADAF |3D Group
|-bgcolor=lightyellow
|
|align=center |'''''Content Knowledge'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|align=center |'''''Low'''''
|align=center |'''''Med'''''
|align=center |'''''High'''''
|-
|21
| My level of understanding of the topic PRIOR to subject delivery.
|align="right"| 89%
|align="right"| 9%
|align="right"| 2%
|align="right"| 91%
|align="right"| 5%
|align="right"| 4%
|-bgcolor=lightgrey
|22
| My level of understanding of the topic AFTER subject delivery.
|align="right"| 22%
|align="right"| 51%
|align="right"| 27%
|align="right"| 23%
|align="right"| 50%
|align="right"| 27%
|-bgcolor=lightyellow
|
|align=center |'''''Delivery Method & Learning Experience'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|align=center |'''''Positive'''''
|align=center |'''''Neutral'''''
|align=center |'''''Negative'''''
|-
|23
|Outline of subject material was clear and informative.
|align="right"| 98%
|align="right"| 2%
|align="right"| 0%
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|-bgcolor=lightgrey
|24
|The lecture was detailed enough to provide an understanding of subject matter.
|align="right"| 100%
|align="right"| 0%
|align="right"| 0%
|align="right"| 93%
|align="right"| 7%
|align="right"| 0%
|-
|28
|I found the in-world experience offered me a better learning experience than my usual methods of learning
|align="right"| 74%
|align="right"| 22%
|align="right"| 4%
|align="right"| 73%
|align="right"| 25%
|align="right"| 2%
|-bgcolor=lightgrey
|29
|I found the subject material to be appropriate to virtual world learning
|align="right"| 84%
|align="right"| 13%
|align="right"| 3%
|align="right"| 79%
|align="right"| 18%
|align="right"| 3%
|-bgcolor=lightyellow
|
|align=center |'''''Technology'''''
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|align=center |'''''No'''''
|align=center |'''''Yes'''''
|align=center |
|-
|26
|During the course I experienced technical difficulties with the environment
|align="right"|91%
|align="right"|9%
|align="right"|
|align="right"|93%
|align="right"|7%
|align="right"|
|}
<p align=center >'''''Table 11. Survey Likert Scales Results'''''</p>
The content knowledge questions addressed the participants’ subjective impressions of their knowledge before and after attending the presentation. Both groups perceived an increase in their understanding of the subject matter after the lecture. The delivery method questions measured the subjective satisfaction levels with the virtual world 2D or 3D delivery methods (as appropriate). Both the 2D and 3D groups indicated very high levels of satisfaction. The technology question assessed whether a participant had any technological constraints affecting their reception of the learning material. From the results presented above, only a few participants experienced technological problems.
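The "standardised onto 3-point scales" step behind Table 11 can be sketched as follows. Note the 1-5 to negative/neutral/positive mapping is this editor's assumption for illustration; the thesis does not state the exact collapse rule:

```python
from collections import Counter

# Hypothetical collapse of 5-point Likert codes into the three Table 11 bands.
LIKERT_TO_BAND = {1: "negative", 2: "negative", 3: "neutral",
                  4: "positive", 5: "positive"}

def collapse_likert(responses):
    """Percentage of responses falling into each band of the 3-point scale."""
    counts = Counter(LIKERT_TO_BAND[r] for r in responses)
    n = len(responses)
    return {band: 100 * counts.get(band, 0) / n
            for band in ("positive", "neutral", "negative")}
```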
==4.3 Qualitative Analysis Results==
===4.3.1 Introduction===
Qualitative analysis was performed, using the methods discussed in the 3.9.4 Qualitative Analysis Methods section of this thesis, on the 2D and 3D groups’ open question set (questions 25, 30, 31 and 32) contained in the post survey. In this section we present a brief overview of how the analysis was performed and the major themes that emerged from the qualitative analysis results. Interpretation of these results will be discussed in the next chapter of this thesis.
===4.3.2 Analysis Approach===
Hermeneutic analysis of the post-survey open questions was performed using an iterative approach in order to code data into contextual structures and common themes across the 2D and 3D post-survey responses. Data was first condensed into 2D and 3D categories and further into the individual question categories. Open coding uncovered general themes within each question; to further assist in this stage of coding, a participant’s entire set of question responses was read as a whole in order to reveal the full context of their individual responses. Axial coding was performed once a generic set of themes emerged, to form relationships between the entire set of 2D and 3D group question responses. Open coding and axial coding took several iterations before selective coding was performed, revealing 4 major themes along with sub-themes, which can be seen in Table 12 below. These themes, along with their meanings, are discussed below.
===4.3.3 Themes of the Open Survey Questions===
The open questions were as follows:
*DELIVERY METHOD ASSESSMENT (Q 25) General Comment:
*VIRTUAL WORLD LEARNING EXPERIENCE
**(Q 30) List 3 positive experiences you had with using this technology to learn:
**(Q 31) List 3 negative experiences you had with using this technology to learn:
**(Q 32) General Comment:
{| align="center" style="border-collapse: collapse; border-width: 1px; border-style: solid; border-color: #000"
|-
|align=center|'''''Theme'''''
|align=center|'''''Sub-Theme'''''
|-bgcolor=lightgrey
|'''Virtual World Learning'''
|
|-
|'''Virtual Learning Campus'''
|
|-bgcolor=lightgrey
|'''Lecture Delivery'''
|
*Format
*Information Content
*Learning
*Facets of 3D Learning
*Instruction
*Focus
*Navigation
*Technical Constraints
|-
|'''Survey Instrument'''
|
|}
<p align=center >'''''Table 12. Qualitative Analysis Results: Themes'''''</p>
The above themes were classified as follows:
*'''Virtual World Learning''': This category included the aspects of a participant’s experience while using the virtual world as a learning platform. The types of comments contained in this category were not specific to the experiment but rather to the experience of the virtual world medium as a learning tool: the general features and characteristics of a virtual world that a participant liked or disliked about this method of learning, and the overall impression they had of using the virtual world as a learning platform.
*'''Virtual Learning Campus''': This category included comments about the virtual learning campus experience. These comments related specifically to the set-up and operation of the entire virtual learning environment within the virtual world.
*'''Lecture Delivery''': This category was the major category that included comments about the lecture experience of a participant that was specific to the lecture delivery treatment they received. This category contained sub-themes as follows:
**'''Format''': The style and layout of the presentation, how the information was presented.
**'''Information Content''': The depth and breadth of information content presented about the topic (The Physics of Bridges).
**'''Learning''': The aspects of obtaining new knowledge.
**'''Facets of 3D Learning''': This theme contained only comments from the 3D group, their perception of the use of 3D models as a learning tool in delivery.
**'''Instruction''': The method by which knowledge was transferred from the instructor to the learner, the interface between the presentation and the learner.
**'''Focus''': The observations affecting attention and the temporal experience of a participant within the virtual world whilst they were learning.
**'''Navigation''': Comments that related to controlling the avatar within the lecture theater.
**'''Technical Constraints''': Comments that related to technical constraints that a participant experienced during the lecture.
*'''Survey Instrument''': This category included comments that related to the pre or post quiz of the participant.
Figure 67 provides a diagram of the relationship of these themes in the context of the qualitative analysis performed on the survey results. In the next chapter we will discuss the results of this qualitative analysis.
Figure 67. Qualitative Analysis: Relationship of Virtual World Learning Themes
==4.4 Summary==
In this chapter we presented the quantitative and qualitative results of the research study.
A quantitative analysis was performed for both the 2D and 3D groups where the number of participants was 55 and 56 respectively. The pass rate for both the 2D and 3D groups’ pre-quiz scores was 51% and 55% respectively.
A significance test performed on the results of the total pre-quiz showed no significant difference between the scores of each group. Significance tests performed on Bloom’s cognitive processes of ‘remember’ and ‘understand’ showed a significant difference between the groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ and the 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’.
The post-quiz pass rates for the 2D and 3D groups’ total post-quiz scores were 67% and 77% respectively. In spite of this, the significance tests performed for Bloom’s cognitive processes of ‘remember’ and ‘understand’ for the hypotheses showed no significant differences between the 2D and the 3D groups’ learning outcomes.
The post-survey results for the Likert scale questions were presented, with the results divided into positive, neutral and negative percentages for both groups.
A qualitative analysis performed on the open questions contained in the post survey revealed 4 major themes in the survey comments of both groups combined; these themes were:
#Virtual world learning environment,
#Virtual learning campus,
#Lecture delivery and
#Survey components of the research study.
A definition for each of these themes was provided along with a relationship diagram.
In the next chapter we discuss the results presented in this chapter.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 5: Discussion & Conclusion=
==5.1 Introduction==
This chapter provides the analysis of the results presented in the previous chapter along with a discussion of these results and opportunities for further research.
In analysing the results the researcher applied both quantitative and qualitative methods in order to answer the research question: how effective is it to learn in a virtual world using a traditional 2D slide show method compared to a 3D interactive simulation?
Quantitative methods were applied to participants’ achievement scores for the pre- and post-quiz and to the Likert scale results. Qualitative methods were used on responses to the participants’ post-survey open questions.
Discussion of the results applied triangulation, combining both the quantitative and qualitative results in order to better understand the 2D and 3D groups’ learning experiences and any differences observed between these groups.
This chapter concludes with a discussion on the opportunities for further research.
==5.2 Quantitative Analysis==
===5.2.1 The Results of the Hypothesis===
The aim of this study was to determine if two lectures differing only in the presence or absence of 3D models (and therefore employing either 2D or 3D learning delivery) in an online 3D virtual world would produce different learning outcomes for Bloom’s cognitive processes of ‘remember’ or ‘understand’. The following hypotheses were formed:
*(H<sub>1</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
*(H<sub>2</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Measured statistically, neither of the above hypotheses was sustained by the scored (quiz) testing results, as there was no significant statistical difference between the results of the two groups. The researcher applied statistical significance testing as the foundation for rejection of the null hypothesis form of the above hypotheses (i.e. that, in each case, the process will result in NO significant difference) based upon a statistically measurable difference. If no measurable difference is found between the samples, the primary hypothesis remains unconfirmed. An unconfirmed hypothesis does not mean the hypothesis is false; rather, it means the hypothesis is capable of disproof but remains unconfirmed (Karl Popper’s principle of falsifiability).
As the researcher was not able to refute the null hypothesis on the basis of a raw statistical comparison of the test scores, the researcher turned to the real data results to see if there was an actual (although possibly not significant) difference between the results of the two groups, or any clearly emerging or suggested trends that might qualify the implications of the raw statistical comparison.
===5.2.2 The Results of the Pre-Quiz===
====5.2.2.1 Pre-Quiz Total Scores====
Analysis of the results in the previous chapter for the total pre-quiz scores (i.e. both cognitive processes combined) between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 51% and 55% respectively; therefore 4% more 3D participants than 2D participants achieved the pass mark of 4 out of 8.
*Average scores (mean) for the 2D and 3D groups were 3.69 and 3.68 respectively. Both groups’ average scores were effectively the same.
*Median scores for the 2D and 3D groups were both the same with a value of 4.
*Mode for the 2D group was lower than for the 3D group, 3 and 4 respectively, effectively demonstrating that more 2D participants scored a 3 whereas more 3D participants scored a 4. A score of 3 was obtained by 31% of the 2D group and 23% of the 3D group, and a score of 4 by 20% of the 2D group and 23% of the 3D group.
*The range of scores for the 2D group was less than the 3D group, 1-6 and 0-7 respectively.
*Standard deviation for the 2D group was less than for the 3D group, 1.372 and 1.479 respectively; therefore the 2D group’s total scores were closer to the mean (average score) than the 3D group’s.
*Skewness was positive for the 2D group and negative for the 3D group, 0.007 and -0.188 respectively, demonstrating that the 3D group’s scores were slightly higher than the 2D scores. This skewness difference is due to the mode difference between the groups, as both the median and average scores were equal.
*Kurtosis was negative (platykurtic) for both groups. Platykurtic distributions are flatter at the top of a distribution curve and less peaked around the average score (mean). The slight difference in kurtosis across the two groups accounts for the probability density value being lower in the Gaussian distribution graph in Figure 62 (Results: Pre-Quiz Totals - Histogram & Bell Curve).
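The skewness and kurtosis values discussed in this list can be computed directly from the scores. A sketch using population central moments (Table 9 may use sample-adjusted estimators, so small numerical differences are possible):

```python
def shape_stats(xs):
    """Skewness and excess kurtosis from central moments.
    Negative excess kurtosis marks a platykurtic (flat-topped) distribution."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3
```

A perfectly symmetric sample such as [1, 2, 3, 4, 5] gives zero skewness and negative excess kurtosis, i.e. a platykurtic shape.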
'''Summary & Interpretation: Pre-Quiz Total Scores'''
There was a 4% higher pass rate for the 3D group and the mode value of the 3D group was higher than the 2D groups’ total pre-quiz scores. The pass rate was higher because of the greater mode value obtained by the 3D group. The 3D group obtained a greater range of scores than the 2D group thus providing the 2D group with a tighter (smaller) distribution of scores around the mean.
Given the distribution of scores between the two groups, the 2D group had a higher probability of scoring around the mean than the 3D group (28% and 26% respectively). Thus, although the 3D group obtained a higher pass rate and mode value, a participant in the 2D group was 2% more likely to score a 4 than a participant in the 3D group. This small percentage difference can be seen in the Figure 61 inverse normal distribution graph: in the lower and higher quartiles the 2D group varied away from the 3D group. In the lower quartile, participants in the 2D group scored higher; in the higher quartile, participants in the 2D group scored lower. This slight shift away from the 3D group’s curve toward the mean demonstrates that the 2D group was more likely to obtain the mean value than the 3D group.
Although there was a difference in the 2D and 3D groups’ pre-quiz scores, the percentage difference was, in the opinion of this researcher, effectively immaterial, showing that both groups started with the same level of knowledge on the topic ‘The Physics of Bridges’ prior to the lecture.
The result of question 21 in the Likert scale survey is consistent with the above analysis. When asked to rate their level of knowledge on the topic ‘prior’ to the subject, the combined low and medium scores for the 2D and 3D participants were 98% and 96% respectively, and the responses that their knowledge was high were 2% and 4% respectively. This gives a 2% difference for both responses, which is consistent with the actual results of the data analysis above. So the difference in the participant groups’ subjective assessment matches that shown by the tested assessment.
====5.2.2.2 Pre-Quiz Remember and Understand Scores====
In the previous chapter we found that when a significance test was performed independently on Bloom’s cognitive processes of ‘remember’ and ‘understand’ for the pre-quiz a significant difference was found between the two groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05), and the 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’ (t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
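The two independent-samples t-tests above can be run with `scipy.stats.ttest_ind`. The sketch below is illustrative only: the scores are randomly generated rather than the real per-participant data, and the group sizes are simply chosen so that n1 + n2 - 2 equals the reported df of 109:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative 'remember' sub-scores out of 4; group sizes chosen so that
# df = n1 + n2 - 2 = 109 as reported (not the real data).
remember_2d = rng.integers(0, 5, size=56)
remember_3d = rng.integers(0, 5, size=55)

# Pooled-variance t-test, one-tailed (H1: the 2D mean is greater).
t, p = stats.ttest_ind(remember_2d, remember_3d, alternative="greater")
print(round(float(t), 3), round(float(p), 4))
```

The significance decision then reduces to comparing the one-tailed p against α = 0.05, as in the reported results.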
The pass rates for Bloom’s ‘remember’ cognitive process for the 2D and 3D groups were 80% and 66% respectively. The pass rates for Bloom’s ‘understand’ cognitive process for the 2D and 3D groups were 35% and 52% respectively. The average scores for the 2D and 3D groups were 2.44 and 2.071 for Bloom’s ‘remember’, and 1.25 and 1.60 for ‘understand’, respectively. The standard deviations for the 2D and 3D groups were 1.032 and 0.775 for Bloom’s ‘remember’, and 1.263 and 0.867 for Bloom’s ‘understand’, respectively.
The scores for the Bloom’s splits at the pre-quiz stage are of passing interest in this experiment (independent of the post-quiz results) and the significant differences found for these figures were not especially surprising.
This experiment was not designed to measure and compare pre- versus post-learning outcomes of the participants. Rather, it was designed to find differences between the 2D and 3D groups’ comparative learning outcomes (i.e. the post-quiz results). In other words, the research was not trying to measure ‘by how much’ learning or understanding improves, but rather the relative difference in the final results between the 2D and 3D groups.
The pre-quiz was given to obtain an indicator of the general knowledge of the material that was to be delivered so that relative differences in outcomes could be normalised against the initial positions.
With the total number of pre-quiz questions being 8, of which both of the Bloom’s cognitive process were represented by only 4 questions each, there were not enough questions in each group to test reliably the true levels each of Bloom’s cognitive processes of ‘remember’ and ‘understand’ prior to the lecture. With so few data points for the individual processes, small variations in responses produce large variations in final scores. Hence the 2D/3D group variations were not especially surprising.
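The point about few data points can be made concrete: treating each question as an equally weighted item, the standard error of a proportion-correct estimate shrinks as 1/√n, so a 4-question sub-score is roughly 40% noisier than the 8-question total. A minimal sketch, assuming p = 0.5 purely for illustration:

```python
import math

def score_se(n_questions: int, p: float = 0.5) -> float:
    """Standard error of a proportion estimated from n equally weighted items."""
    return math.sqrt(p * (1 - p) / n_questions)

print(round(score_se(4), 3))  # 4 questions per Bloom's process -> 0.25
print(round(score_se(8), 3))  # 8-question pre-quiz total -> 0.177
```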
The problem for the research design was to avoid impacting the outcomes with the measurement instrument itself. The post-quiz was taken within approximately 30 minutes of the pre-quiz, with only a single lecture delivered between the two measurement points. Providing more than 8 questions in the pre-quiz for a single 20-minute lecture would have increased the risk that the participants learnt from the pre-quiz questions themselves rather than from the lecture.
Furthermore, the concept of ‘remember’ and ‘understand’ for Bloom’s cognitive processes prior to instruction does not especially make sense in the context of the experiment. As discussed in Chapter 3 (instrument design), the development of the questions within the instrument was based upon the lecture. ‘Remember’ questions were extracted from the instructional content of the lecture whereas the ‘understand’ questions were derived from material not taught in the lecture. The pre-quiz questions were also specifically targeted at the four bridge types covered in the lecture to calibrate the extent of pre-existing content knowledge.
A participant being tested within each of these levels prior to instruction (over which no certainty of prior topic learning experience can be established) can only be measured with respect to their pre-existing general knowledge of the topic, which may reflect either memory or understanding. The extent to which this analysis grouped the pre-quiz questions into ‘remember’ or ‘understand’ in this discussion reflects only the researcher’s knowledge of the lecture content – i.e. whether the topic of the question was subsequently directly taught in the lecture or not – not whether the participant was actually remembering or understanding at the pre-quiz stage.
The extent to which the split at the pre-quiz stage matters to the discussion is that if a participant already had an indicative level of ‘understanding’ prior to the lecture, that ‘understanding’ should improve when assessed after the lecture. If one group, for example, starts with a level of 60% and ends with 61%, this is possibly a worse outcome than the other group starting with 45% and ending with 58% (although there is also some discussion that could qualify even that conclusion).
===5.2.3 The Results of the Post-Quiz===
====5.2.3.1 Post-Quiz Total Scores====
An analysis of the results in the previous chapter for the total (i.e. combined Bloom’s) post-quiz scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 67% and 77% respectively; therefore 10% more 3D participants than 2D participants achieved the pass mark of 10 out of 20.
*Average scores for the 2D and 3D groups were 10.98 and 11.36 respectively. A 3D participant scored on average 0.38 higher than a 2D participant.
*Median scores for the 2D and 3D groups were 11 and 12 respectively. The 3D participants scored higher at the second quartile (median) than the 2D participants.
*The mode for the 2D group was lower than for the 3D group, 11 and 12 respectively, effectively demonstrating that more 2D participants scored 11 and more 3D participants scored 12. A score of 11 was obtained by 20% and 21% of the 2D and 3D groups respectively, and a score of 12 by 11% and 29% respectively.
*The range of scores for the 2D group was wider than for the 3D group, 5-17 and 6-17 respectively.
*Standard deviation for the 2D group was slightly higher than for the 3D group, 2.468 and 2.347 respectively; the 3D group’s total scores were therefore slightly more tightly clustered around the mean (average score) than the 2D group’s.
*Skewness was positive for the 2D group and negative for the 3D group, 0.052 and -0.229 respectively. This demonstrates that the 3D groups’ scores were slightly higher than the 2D scores. This skewness difference is due to the mean, median and mode differences between the two groups’ scores.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.2 and 0.3 respectively. As mentioned above platykurtic distributions are flatter at the top of a distribution curve whereas leptokurtic distributions are higher and peaked around the mean score. The differences in value of kurtosis between the two groups account for the probability density value being higher for the 3D group in the Gaussian distribution graph in Figure 64.
'''Summary & Interpretation: Post-Quiz Total Scores'''
The above analysis finds that the 3D participants scored better overall than the 2D participants in the post-quiz. Although this difference was not statistically significant in the t-test results (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05), the actual results indicate a slight difference between the two groups. Analysing the Gaussian distribution curve (Figure 64) shows that the 2D and 3D participants had a 15% and 16% likelihood respectively of scoring a 12 in their total post-quiz score. In general the overall results showed the 3D group performing better by 1%; this can also be seen in the inverse distribution graph (Figure 63), where the two curves run almost parallel to one another, with the 3D group performing approximately 1% better in their overall test results.
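The 15% and 16% likelihoods quoted here can be checked directly from the reported means and standard deviations, assuming the curves in Figure 64 are normal densities fitted with those parameters:

```python
from scipy.stats import norm

# Reported post-quiz totals: mean 10.98, SD 2.468 (2D); mean 11.36, SD 2.347 (3D).
pdf_2d = norm.pdf(12, loc=10.98, scale=2.468)
pdf_3d = norm.pdf(12, loc=11.36, scale=2.347)
print(round(pdf_2d, 3), round(pdf_3d, 3))  # 0.148 0.164 -> roughly 15% and 16%
```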
For question 22 in the Likert scale survey, which asked participants to rate their level of knowledge on the topic ‘after’ the lecture, the 2D and 3D participants’ low responses were 22% and 23% respectively, and their medium responses 73% and 74% respectively. At the medium level the self-assessment was consistent with the test results’ 1% difference. At the low level the 3D group seemed more conservative in their responses, perceiving their knowledge to be less than the 2D group’s, although the actual result showed the contrary. In either case a 1% difference is within the margin of error.
====5.2.3.2 Post-Quiz Remember Scores====
Analysis of the results in the previous chapter for the post-quiz ‘remember’ scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 85% and 93% respectively; therefore 8% more 3D participants than 2D participants achieved the pass mark of 5 out of 10.
*Average scores for the 2D and 3D groups were 7 and 7.32 respectively. The 3D participants scored on average 0.32 higher than the 2D participants.
*Median and mode scores were 8 for both the 2D and 3D groups.
*The range of scores for both groups was the same, 3-8.
*Standard deviation for the 2D group was higher than the 3D group 1.8 and 1.6 respectively, with a 0.2 difference between the groups.
*Skewness was negative for both groups, with 2D and 3D skews of -0.6 and -0.9 respectively. With only a 0.3 difference between them, the two groups’ distributions were similarly, moderately, left-skewed.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.7 and 0.7 respectively.
'''Summary & Interpretation: Post-Quiz Remember Scores'''
The post-quiz scores mask a complexity that requires further consideration. Although the 2D group was normally distributed, the 3D group failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05). In order to compare the results of the 2D and 3D groups meaningfully, the researcher needed to look into why the 3D group failed the normality test and what, if anything, this implies for the interpretation of the apparently “better” 3D pass rates.
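The D’Agostino-Pearson test is available as `scipy.stats.normaltest`, which combines skewness and kurtosis into a single K2 statistic. The sketch below uses an illustrative bimodal sample, not the actual 3D group’s scores:

```python
import numpy as np
from scipy import stats

# Illustrative bimodal sample (peaks at 3 and 8, echoing the shape described
# for the 'remember' histograms) - not the real 3D group's data.
sample = np.array([3] * 10 + [4] * 5 + [7] * 5 + [8] * 30)

k2, p = stats.normaltest(sample)
print(round(float(k2), 3), round(float(p), 5))
if p < 0.05:
    print("reject normality at alpha = 0.05")
```

A sample shaped like this fails the test for the same reason described in the text: the two peaks produce skewness and kurtosis values far from those of a normal distribution.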
Analysis of the histogram and density trace graphs in Figure 65 shows that both the 2D and 3D histograms display a bimodal distribution, with two peaks at 3 and 8. As can be seen in the density trace graph, the variance of the 2D scores between 3 and 8 was greater, which causes the 2D curve to flatten prior to its peak.
Although the statistical analysis determined that the difference between the pass rates and means (by which the 3D group was higher than the 2D group) was not significant when taken as a whole, there is a clear visual difference between the graphs that deserves explanation. When considered within specific score ranks the outcome slightly favours the 3D group because:
#2D group participants were 8% more likely to score 4 or below,
#3D group participants were 6% more likely to score 8 or above, and
#3D group participants were 2% more likely to score above 9.
This analysis can be seen in the frequency table below (Table 13. Frequency Table: Post-Quiz Remember).
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|colspan="4" align="center" |'''Post-Quiz Remember'''
|-
|align=center|'''Score'''
|align=center bgcolor="#DDADAF" |'''2D'''
'''(Cumulative)'''
|align=center bgcolor="lightblue" |'''3D'''
'''(Cumulative)'''
|align=center bgcolor="lightgrey"|'''Difference'''
'''3D vs. 2D'''
|-
|align=right |0
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |1
|align=right | 0%
|align=right | 0%
|align=right | 0%
|-
|align=right |2
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |3
|align=right | 4%
|align=right | 4%
|align=right | 0%
|-
|align=right |4
|align=right | 15%
|align=right | 7%
|align=right | -8%
|- bgcolor="lightgrey"
|align=right |5
|align=right | 25%
|align=right | 13%
|align=right | -12%
|-
|align=right |6
|align=right | 33%
|align=right | 27%
|align=right | -6%
|- bgcolor="lightgrey"
|align=right |7
|align=right | 47%
|align=right | 41%
|align=right | -6%
|-
|align=right |8
|align=right | 78%
|align=right | 80%
|align=right | 2%
|- bgcolor="lightgrey"
|align=right |9
|align=right | 98%
|align=right | 96%
|align=right | -2%
|-
|align=right |10
|align=right | 100%
|align=right | 100%
|align=right | 0%
|}
<p align="center" >'''''Table 13. Frequency Table: Post-Quiz Remember (Rounded)'''''</p>
The frequency table shows a cumulative analysis of each group at each score. As can be seen in the table, the 3D cumulative percentages were in general lower than the 2D percentages at each score below 8, meaning that more 3D participants scored above those levels. The implication is therefore that the relative performance of 3D versus 2D ‘remember’ outcomes is slightly better at the higher rankings (80% and above), but slightly worse at the lower pass-mark scores.
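The difference column of Table 13 is simply the 3D cumulative percentage minus the 2D one at each score; a quick sketch reproduces it from the table’s rounded values:

```python
import numpy as np

# Cumulative percentages transcribed from Table 13 (rounded in the source).
scores = np.arange(11)
cum_2d = np.array([0, 0, 0, 4, 15, 25, 33, 47, 78, 98, 100])
cum_3d = np.array([0, 0, 0, 4, 7, 13, 27, 41, 80, 96, 100])

diff = cum_3d - cum_2d  # negative: fewer 3D participants at or below that score
for s, d in zip(scores, diff):
    print(f"{s:>2}: {d:+d}%")
```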
While the difference in the means may not be statistically significant, the results suggest that the outcomes at particular bands are potentially significant. To put this into context; if the desired group learning outcome is to achieve a pass or better, both methods of delivery were similar, but if the desired outcome is to maximise the potential scores, the 3D delivery might be indicated.
In general, the overall performance of both groups here was better than the scores obtained for Bloom’s cognitive process of ‘understand’, which we will discuss in the next section.
====5.2.3.3 Post-Quiz Understand Scores====
Analysis of the results in the previous chapter for the post-quiz ‘understand’ scores between the 2D and 3D groups shows:
*The pass rates for the 2D and 3D groups were 35% and 36% respectively; therefore 1% more 3D participants than 2D participants achieved the pass mark of 5 out of 10.
*Average scores for the 2D and 3D groups were 3.98 and 4.04 respectively; a 3D participant scored on average approximately 0.05 higher than a 2D participant.
*Median and mode scores were 4 for both the 2D and 3D groups.
*The range of scores for the 2D group was wider than for the 3D group, 0-8 and 1-8 respectively.
*Standard deviation for the 2D group was slightly higher than the 3D group 1.48 and 1.46 respectively. A 0.02 difference between the groups shows very little difference in standard deviation.
*Skewness was positive for both groups, with 2D and 3D values of 0.068 and 0.332 respectively. As both values were close to 0, with a 0.27 difference between the two groups, the distributions of the results for both groups were almost symmetrical.
*Kurtosis was positive (leptokurtic) for both groups, with the 2D and 3D values being 0.558 and 0.010 respectively, a difference of approximately 0.55 between the two groups’ kurtosis values.
'''Summary & Interpretation: Post-Quiz Understand Scores'''
From the above analysis both groups scored almost the same for Bloom’s post-quiz ‘understand’ results. This is clear from a study of the histogram and Gaussian distribution curve in Figure 66: both the 2D and 3D data points are almost identical.
Further, the frequency distribution comparison of the two groups confirms that the scored results at each rating band of the 2D and 3D groups exhibit no considerable difference.
Bloom’s cognitive process of ‘understand’ is a higher-level cognitive process than ‘remember’. Given the pass results and the mean, median and mode scores, both groups scored ‘badly’ (35% – 36%) in Bloom’s cognitive process of ‘understand’. On the face of it, the results suggest that both groups did not show a ‘high’ level of understanding of the subject matter after training; however, it should be remembered that the mean, median and mode results are a reflection of the difficulty relationship between the questions testing understanding and the lecture itself. The decision was made during the design stage to include some ‘very high’ difficulty questions in the understanding question set to ensure a real test of the achieved level of understanding. Some additional light is shed on these results in the Likert scale and qualitative analysis that follows.
This research is primarily interested in the comparative difference of the 2 delivery methods, rather than the absolute scores, and for this purpose the results suggest that there is no significant or effective difference between the 2D and 3D group testing (quiz) results for the ‘understand’ cognitive process, within the confines of this experimental process.
===5.2.4 Likert Scale Analysis===
The above analysis of the quiz results showed a positive result for Bloom’s cognitive process of ‘remember’, whereas for Bloom’s ‘understand’ there seemed to be fewer participants in both groups who understood the subject matter of ‘The Physics of Bridges’ to the same level that they remembered it. In order to understand this result we turn to the Likert scales, where we asked the participants to assess the quality of the delivery method. Questions 23 and 24 specifically addressed these issues.
*Question 23 asked whether “the subject matter was clear and informative”. The 2D and 3D groups’ responses were positive 98% and 100%, and neutral 2% and 0%, respectively. With the exception of the 2% neutral response, it would seem that the majority of people found the subject matter to be clear and informative. Of interest, the 2% neutral response came from a single participant who actually performed better than the group’s average score in the post-quiz results for both cognitive processes of ‘remember’ and ‘understand’, with z-scores of 0.54 and 0.69 respectively. Given their actual results, it seems that within their group this participant understood the material better than they remembered it.
*Question 24 asked whether the lecture was detailed enough to understand the subject matter. The 2D and 3D groups’ responses were positive 100% and 93%, and neutral 0% and 7%, respectively. Of interest were the neutral responses, which all came from the 3D group. These were made up of 4 participants, all of whose post-quiz z-scores in both cognitive processes of ‘remember’ and ‘understand’ were below the group’s average, with the exception of one who scored better on their ‘understand’ post-quiz score than on their ‘remember’ score.
From the above results of questions 23 and 24, the majority of participants perceived that the lecture material was clear, informative and detailed enough for them to understand the subject matter. The few in the 3D group who were only neutrally satisfied that the level of information detail was sufficient to understand the topic achieved post-quiz z-scores that were below average for the total group, so their self-assessment seemed to be correct.
Question 29 asked if the topic was appropriate to virtual world learning. This question was asked in order to gain an understanding of a participant’s view on the choice of topic delivered for instruction. The majority response for both groups was positive, with the 2D and 3D groups’ responses positive 84% and 79% respectively and neutral 13% and 18% respectively. Within the 2D and 3D groups the neutral scores accounted for 7 and 10 participants respectively. Among the neutral participants in the 2D group, the z-scores showed that 4 performed below average for the cognitive process of ‘remember’ and 2 for the cognitive process of ‘understand’. Within the 3D group, the z-scores showed that 5 performed below average for the cognitive process of ‘remember’ and 7 for the cognitive process of ‘understand’. It seems from these results that although the majority of the participants were positive about the choice of topic, a few were neutral about the appropriateness of the material to the environment, more so in the 3D group, in spite of the fact that the material was identical in both cases. Given their z-score results, the neutral 2D responders still performed better for ‘understand’ than ‘remember’, while the neutral responders within the 3D group appeared not to ‘remember’ or ‘understand’ the topic well – suggesting their relative (to the group) self-assessment was consistent with their relative scored outcomes.
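The z-scores referred to throughout this section are simply standardised distances from the group mean; a minimal sketch, using the reported 2D post-quiz ‘remember’ mean (7) and standard deviation (1.8):

```python
def z_score(score: float, group_mean: float, group_sd: float) -> float:
    """Standardised distance of a score from its group mean, in SD units."""
    return (score - group_mean) / group_sd

# A participant scoring 8 sits a little over half an SD above the group average.
print(round(z_score(8, 7, 1.8), 2))  # 0.56
```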
Question 28 asked participants whether the in-world learning method offered a better learning experience than their usual (real world) learning methods. The 2D and 3D groups’ responses were positive 74% and 73%, neutral 13% and 18%, and negative 3% and 3%, respectively. Although the overall results were positive, there was more variance with respect to quiz scores in the responses to this question.
Question 26 asked participants if they experienced any technical difficulties. The majority of participants in both groups did not indicate any: the responses for the 2D and 3D groups were ‘No’ 91% and 93%, and ‘Yes’ 9% and 7%, respectively. For the participants who answered yes, the major problems were sound and picture loading delay (lag). All of these people commented that it lasted only a short period and the problem was rectified quickly. Although only a small number of participants answered yes to this question, the open-format questions showed that slightly more experienced some technical issues (although apparently not perceived as sufficient to warrant a “yes” to this question), which will be discussed in the next section.
This group of questions essentially assessed the participant’s perception of quality, appropriateness, purpose and “fit” to the medium of the experience. Necessarily the responses to these questions are likely to be coloured by the participant’s perception of the lecture delivery system experienced (i.e. 2D or 3D). Throughout this group of questions the responses were very strongly positive while the worst grade with a significant number of responders was neutral (excluding Q26). With the exception of the assessment of the clarity of the material, the Likert assessments slightly favoured the 2D delivery method.
The slight favouring of the 2D delivery could be either an absolute result, or a result coloured by raised expectations of one or other of the two delivery methods. We need to investigate, therefore, the qualitative analysis of the open questions to adequately interpret this slight bias in the results.
Question 26 was a check-question to allow explanation of the results in the other questions, had the results therein proven dramatically negative.
==5.3 Qualitative Analysis==
From the qualitative analysis of the post-survey responses, many aspects of the participants’ learning experience emerged, as well as differences between the two groups in this study.
===5.3.1 Thematic Analysis Results===
As discussed in the previous chapter, the results of the post-survey open questions were grouped into themes and coded for qualitative analysis, in order to provide further insight into the achievement results and the learning experience of participants. Four themes emerged from analysis of the data, as follows:
*Virtual World Learning
*Virtual Learning Campus
*Lecture Delivery
*Survey Instrument
In this section we provide a thematic analysis of these themes that emerged from the post-survey.
====5.3.1.1 Virtual World Learning====
This theme related specifically to the use of the virtual world platform as a learning tool, rather than to the delivery method of the presentation.
Convenience was the main factor mentioned by both groups. The factors identified included: doing it from home, in one’s own time, and not having to travel in order to learn. These sorts of comments are not specific to virtual world learning technology, as today many educational courses cater for students via online courses. However, there was a sense of presence that the participants felt from “being there with other people” and seeing others learn that seemed to make the experience more enjoyable to them than traditional or alternative learning methods. Quite a few commented on how the experience felt “personal like they were really sitting in a lecture room taking the course”; the atmosphere was relaxed and soothing, providing less pressure than traditional classroom methods of learning. These comments are interesting, partly because the lecture mirrored a real-world lecture in that it could not be “paused” by a participant and ran for a fixed time per slide, and a fixed time in total, so to some extent it was more rigid in delivery format than a real-world lecture, in which the lecture might be paused while a question is asked and answered.
Another theme that emerged was that this medium offered a new, ‘on demand’ way of learning, rather than a planned course for which one would have to prepare in advance. Similar to searching the web to find out about a specific topic, participants felt that this medium offered them a way to learn new material when they wanted, and to experience that material rather than just read it on a webpage. The lectures ran on a continuous loop over the experimental period, so this perception is reasonable, in spite of the fact that the lectures were not actually ‘on demand’.
The technology seemed to offer a learning medium capable of reaching people who traditionally would not engage in formal learning, or who had never before used the virtual world for learning. It seemed to inspire people to want to learn more and to undertake more learning exercises both in and out of Second Life. For many participants this was a new experience: they had never thought about using online virtual worlds as a learning platform, having only used the medium as a game rather than for taking a course. After experiencing this study, many were inspired to seek out more learning in Second Life or even in real life.
The overall impression from all the participants was that the virtual world learning experience was fun and enjoyable. Very few negative comments were made about the experience, other than that the medium might have the potential not to be taken seriously, or might possibly allow cheating. The experience seemed to open people’s minds to the possibility that virtual world technology could be used seriously rather than just as a gaming environment. A comment from one participant sums up the general impression of this technology being used as a learning tool:
<blockquote >
I'm still not convinced that virtual learning can replace learning in real world but now I think it might be possible.
</blockquote >
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.2 Virtual Learning Campus====
This theme included comments made about the virtual learning campus, the setup and operations of the entire virtual learning environment in which the experiment was conducted.
The majority of comments were that the participants found it to be ‘user friendly’ and ‘easy to use’. The layout of the different rooms seemed to provide a fun way for them to learn. Only 2 people commented on having a problem with the signage: when they got to the post-survey room they missed the board that told them how to take the post-quiz.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.3 Lecture Delivery====
This theme is where the majority of comments from participants were made. These comments related directly to a participant’s learning experience of the research project. The range of comments was coded into sub-categories: format, information content, learning, facets of 3D learning, instruction, focus, navigation and technology constraints.
=====5.3.1.3.1 Format=====
This theme included comments on the layout and format of the slide presentation. The comments from both groups were mostly positive. Participants could offer comments in the positive, negative or general sections of the survey. In total, there were 11 and 24 comments clearly identified as positive, and 3 and 1 as negative, across the 2D and 3D groups respectively in this theme.
The positive comments praised the layout of the slides and the way the information was presented. A few more negative comments came from the 2D group: one wished they had the ability to interact with the pictures on the screen, another wanted annotation on the images (similar to the interaction question), and one had problems with the colour differentiation of the tension and compression markings (tension and compression were shown in red and green respectively, suggesting either colour blindness or graphics card faults). Only one person from the 3D group made a negative comment in this area, identifying a desire for more pictures on the slides (the slides in the 2D and 3D lectures were identical).
While the largest proportion of responses to the general comments question were provided by the 3D group, a common suggestion received from both groups concerning the format was that they wished the presentation could be paused or controlled, such as by forwarding or rewinding. As a proportion of each group that actually provided a comment at all, this suggestion was marginally more frequent among the 2D participants.
With respect to the 3D group’s comments about presentation speed, it seemed that although they had been presented with a model and voice-over that mirrored the images of the slides and the text therein, they still desired the opportunity to read the slides to view the information. The time per slide and the slides themselves were identical in both the 2D and 3D lectures, and set to allow sufficient time for reading the slide – in fact the voice-over effectively read the slide to the participant. In the 3D case, the addition of the 3D models in the same time window meant that participants had an additional vector of information to absorb in the same amount of time as the 2D participants. The researcher’s impression from the comments in this respect is that in the 2D case the motivation was the desire to review and contemplate the information, while in the 3D case it was more to do with the ability to absorb multiple information vectors simultaneously.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.2 Information Content=====
This theme included comments to do with information content in the presentation. There were 56 comments from the 2D group and 33 from the 3D group.
For the most part, people found the presentation very interesting and informative, but in this area the 2D group seemed to be more satisfied than the 3D group. Within the 3D group a number of people desired more information, or perceived the information to be too technical to appreciate without additional enquiry or time – yet the information in both cases was identical.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.3 Learning=====
This theme included comments to do with people obtaining new information. Comments from both groups were very positive. All participants who commented in this theme stated that they enjoyed the experience of learning and gaining the new knowledge. Most seemed to enjoy the topic and the new knowledge on bridges that they took away with them, and/or considered that the material was well thought out and presented. Some commented that they enjoyed the opportunity to obtain new knowledge in the virtual world/game space and were inspired to seek additional in-world learning.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.4 Facets of 3D Learning=====
The comments in this category were specific to the 3D lecture with the use of models. The participants in the 3D group were universally positive about the use of 3D models. Many seemed to believe that having a model accompany the presentation assisted their understanding of the subject matter. (Note, however, that the test scores did not reflect a significant advantage from the 3D models with respect to understanding, although there were indications of an advantage in remembering.)
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.5 Instruction=====
The comments in this category had to do with the method by which the new knowledge was transferred to the participant. In this area a small but significant number of participants in both groups commented that they missed having a real person to whom they could put clarifying questions. This was more so in the 3D group, which seemed to want to find out more information about the topic than was presented to them. (Note, as mentioned, the information was identical in both cases.)
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.6 Focus=====
The comments in this category had to do with observations affecting attention and the temporal learning experience of a participant.
This theme emerged through the general comments throughout the survey. There seemed to be two broad sub-groups of comments in the focus theme: the presence of distractions during the learning experience, and the participant’s perception of the available time per slide for learning. Although both groups experienced the same general learning conditions and real-world times, there seemed to be opposing perceptions of the significance of sources of distraction, and of time, across the two groups during the presentation. We will break this category into these two sub-themes (distractions and time) to better understand the focus aspect of the participant groups.
'''Distractions'''
The sources of distractions seemed to come from either the outside world or the inside world.
:'''Inside world distractions'''
:Only 3 comments from the 2D group concerned distractions from the inside world experience: distracting avatars, a participant’s outfit getting in the way of their view, and a participant distracted by curiosity about the technology setup used to deliver and manage the lectures.
:In the 3D group, by contrast, quite a number of people complained about inside world distractions, particularly being annoyed by other avatars disrupting their learning. As a group, the 3D participants were comparatively emotional/animated (with respect to the 2D group) in their response to these distractions, and in a number of cases complained that the other people were not taking education as seriously as they were.
:'''Outside world distractions'''
:A small number of the 2D group complained about outside world distractions, or commented upon the advantages of staying in touch with the outside world: being able to answer the phone, using Yahoo messaging, doing things at their desk, and people in real life talking to them.
:By contrast, only one member of the 3D group commented upon outside world distractions.
'''Time'''
The main theme that emerged from the 2D group was that a small number of participants commented that the presentation was a bit slow, and/or that their attention wandered or they “zoned out” during some slides. Contrast this with the 3D group, who tended to say that the presentation was fast; a reasonable number even complained that it went too fast. The 3D group commented that the material kept them engaged and the presentation held their attention. In both cases the real-world times were identical, so the observations are directly related to perception, and in the light of other comments made, the implication is that there was a difference in perceived ‘engagement’ arising from the single variable of the presence of the 3D objects.
The 2D participants who observed that they occasionally ‘zoned out’ during some of the slides also commented that the voice over was too smooth/calm. Nobody in the 3D group observed this problem; conversely, a number commented on how the voice over was exactly right for the presentation and kept their attention. Interestingly the voiceover was identical in each case, but the presence of the 3D objects appearing around participants may have presented an additional level of stress that the calm voice over countered.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.7 Navigation=====
Traditionally a significant problem in virtual-world training experiments, learning the appropriate method of avatar navigation has typically been compounded by the use of first-time virtual world participants unfamiliar with the control of their avatar. This researcher considered this a flaw in previous studies: it distorted the results with a temporary difficulty that would be overcome with only a small amount of in-world experience. The participants in this study, therefore, were intentionally recruited from users already present in Second Life rather than brought into the virtual world specifically for the purpose of the experiment.
Consequently the negative comments on navigation were fewer than in previous studies, and not generally of the same fundamental ‘how do I operate my avatar?’ nature present in a number of the studies considered in the literature review. In any case the campus and lecture environment was specifically designed to minimise the likelihood of these types of problems, and required only minimal knowledge of avatar controls (sufficient for someone with about 30 minutes of experience, based on the packaged avatar training in the Second Life orientation islands).
The comments in this category had to do with how participants’ avatars viewed the presentation. These comments were complaints from the 2D and 3D participants about some viewing aspect of the presentation.
Three (3) of the 2D group complained that the chairs blocked their view of the presentation. It was obvious from this comment that these people lacked the knowledge to use mouse view, instead using third person view, and did not understand how to control the third person roaming camera effectively.
The 3D group’s complaints provided the most insight as to how they viewed the presentation. A small, but significant, number of the participants complained that the 3D models of the bridges ‘got in the way’ of their reading of the slides (a function of navigation) or that they could not both read the slides and look at the models (a function of time). Although avatars were not seated once the 3D presentation began, and were free to wander around the space, with slides projected onto the walls around the models, some users clearly did not realise that this additional freedom allowed them to position their avatar for clear slide viewing at any time. Further, it seemed that, although presented with a 3D model and a voice over that covered the entire slide content, a number of the 3D group still attempted to use the traditional method of reading the slides whilst looking at the models.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.8 Technology Constraints=====
This category contained comments by participants about the technology constraints that they had experienced during the lecture delivery. Although this question was also asked in the Likert questions provided in the previous section, where the 2D and 3D groups responded 9% and 7% respectively, more participants identified technical problems in their open comments.
From the 2D and 3D groups’ comments, 20% and 18% respectively identified at least one technology constraint. All of the participants who had answered yes in the Likert question were among these; therefore a further 11% in both groups commented upon having a technology-related problem. The technical difficulties were due to sound and lag/object rezzing, the same problems given by the participants in the Likert questions.
As discussed in the literature review, this technology is streamed in real time, so ‘lag’ is a common risk and will vary with network connection speed (real lag) and individual computer problems (false lag, but possibly the single most common culprit). No one, however, commented that the lag affected their ability to learn. In most cases where it was reported, the lag caused only a slight delay in the slide show, with comments noting that they experienced ‘some’ lag. As each slide, audio segment and object was independently synched, lag problems could not accumulate across the slides, and any synching problems were corrected with the next slide (or in some cases half way through a slide).
The sound constraints were only temporary in all cases. This problem was due to drop outs of the presentation voice-over. It was picked up early in the testing phase, where occasionally the audio would stop and a re-log of the application was required in order to get the audio back. As this was picked up in testing, signs were placed around the lecture screens instructing the participant to re-log if they experienced audio dropouts. In all cases where participants complained about audio drop outs, they also noted that a re-log solved their problem quickly. The impact of an immediate re-log on the learning would be at most the loss of half a slide’s content. As all slides were summarised at points during the presentation, the participant was unlikely to completely miss the associated material.
====5.3.1.4 Survey Instrument====
This category included comments that related to the pre or post survey instrument.
Six participants across both groups commented that the pictures in the diagrams of the post-quiz were too small. From their comments they had trouble distinguishing some of the bridges in the pictures.
As the display size is based upon a person’s monitor size, participants with small monitors may have had problems distinguishing the details in the pictures. The survey displayed correctly on a 17 inch monitor at 96 dpi, but anyone with a smaller monitor or unusual resolution settings may (possibly) have had problems.
This problem was not realised until quite a number of participants had already completed the research. It was therefore decided that any change in the picture size in the survey would only corrupt the experimental conditions and might bias the results, so no modification was made. All participants that undertook this research therefore operated under the same picture constraints in the survey.
On review of the results of the 6 participants that complained, 3 were from the 2D group and 3 from the 3D group. Their post-quiz scores for ‘remember’ and ‘understand’ respectively were 9, 7; 9, 4; 9, 4 for the 2D group and 8, 4; 8, 5; 8, 4 for the 3D group. All of these participants passed both of Bloom’s cognitive process categories. Relative to their groups, their z-scores for ‘remember’ were all above average, but for ‘understand’ these participants scored at or below average.
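The z-score comparison above can be sketched as follows. This is a minimal illustration of standardising a participant’s score against their group’s distribution; the group scores used here are hypothetical, not the study’s data.

```python
import statistics

def z_scores(scores, group_scores):
    """Standardise each score against the group mean and standard deviation."""
    mu = statistics.mean(group_scores)
    sigma = statistics.pstdev(group_scores)  # population standard deviation
    return [(x - mu) / sigma for x in scores]

# Hypothetical 'remember' score distribution for one group (NOT the study's data):
group = [4, 5, 6, 7, 8, 9, 9, 10]
print(z_scores([9, 4], group))  # 9 standardises above the mean, 4 below it
```

A positive z-score marks an above-average participant within their own group, which is the sense in which ‘above average’ and ‘at or below average’ are used above.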
There were 9 ‘remember’ questions and 8 ‘understand’ questions in the survey that required the participant to use pictures in answering the question. The Bloom’s cognitive process of ‘understand’ would have been more affected by the picture constraints: the questions in the ‘understand’ cognitive process were substantially more difficult, with material that was not presented during the lecture, so the participant had to use the picture to recognise and assimilate information in order to answer the question.
The researcher notes that this problem may have contributed to some of the low score results, especially within the Bloom’s cognitive process of ‘understand’. Although from the comments only 6 out of 111 people complained about this problem, there is no way to know how much of a problem it presented; from the lack of comments we can only assume that this was not a constraint for most participants, or at least not one they realised they were experiencing.
===5.3.2 Qualitative Analysis of Thematic Results===
====5.3.2.1 Introduction====
The survey comment questions were not compulsory, but fewer than 4% of responses were nonsense or non-responses, with an average of 100 words per person, and the 3D participants provided approximately 12% more comment volume than the 2D participants.
Interpreting the collected thematic responses was aided by the consistency of the emotion and approval expressed by participants, the surprising number of instant messages sent directly to the researcher by participants in thanks for the experience, and the range of both supportive comments and recommendations provided in the open comments. To that end the researcher offers the following generalised collation of the qualitative opinions expressed by participants.
The general lack of negative observations reflects the same proportion in the underlying data. Three positive and three negative observations were requested, as well as open/general comments. Overwhelmingly, the positive question was populated while the negative question was generally underpopulated, or contained comments like ‘I have none’. The most frequent negative comments were an expressed desire to control the delivery speed, to acquire additional information in some way, or the opportunity for distraction; in some cases these were also identified as positives. The lack of colour in the negative comments was contrasted by the diversity of positive comments: different participants chose to comment on different positive aspects of the experience, and an individual participant tended to concentrate comments within a theme.
To aid in interpretation of the analysis while avoiding the implication of hard statistical interpretation, where some degree of researcher subjectivity and ‘translation’ is involved, the researcher has used the following terms with some degree of overlap at the margins:
*Few – 5% or less of comments
*A number – 5% to 15% of comments
*A significant number – 15% to 25% of comments
*Many – More than 25% of comments
*A majority – More than 50% of comments
*Most – More than 60% of comments
Outside of these terms the researcher has provided clear absolute percentage counts where the numbers are at the extremes.
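As a compact restatement, the verbal quantifiers above can be expressed as percentage bands. The band edges in the list deliberately overlap at the margins; in this hypothetical sketch ties are resolved in favour of the stronger term.

```python
def frequency_term(pct):
    """Map a comment percentage to the verbal quantifier used in this analysis.
    Overlapping band edges are resolved here toward the stronger term."""
    if pct > 60:
        return "most"
    if pct > 50:
        return "a majority"
    if pct > 25:
        return "many"
    if pct >= 15:
        return "a significant number"
    if pct >= 5:
        return "a number"
    return "few"
```

For example, a theme raised in 20% of comments would be reported as ‘a significant number’, and one raised in 3% as ‘few’.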
====5.3.2.2 The Virtual Learning Experience: Both Groups====
The two words participants used most to describe their experience were ‘fun’ and ‘interesting’. The frequency and strength of these positive comments surprised the researcher, representing over 60% of the participants.
The virtual world seemed to offer participants a fun way to learn, with the convenience of learning online in their own time, but further, at least as the experimental campus and lecture rooms were constructed in this experiment, it offered participants a sense of presence that gave them the perception of an experience similar to learning in a real world environment. Seeing others in the environment while attending a lecture as their avatar in a simulated theatre gave the participant more of a connection to the learning process than one might expect from a purely HTML-page-based traditional distance education course. To the majority of participants this experience felt personal and the atmosphere relaxed, and many found that it offered a more pleasurable experience than the traditional method of attending a lecture class in the real world.
The environment seemed to promote a favourable attitude to learning. Not only did the majority of the participants say it was “fun”, but a number commented that they felt inspired to learn more about the topic, wanted to ask further questions on it, or sought more details, and a significant number expressed surprise that, although they clearly had experience of the topic in real life, they had never really considered how exciting a bridge could be. Only one participant expressed an unfavourable attitude to this form of learning and/or the topic.
Based on the comments, the average participant was clearly immersed in this aspect of virtual learning, as reflected by many comments that expressed varying degrees of ownership over the experience, and even, in some cases, resentment when others or extraneous circumstances interfered with their learning.
To many this was a new experience in a virtual world, and although they initially saw the offer of ‘linden’ as an easy way to make fast money, by the end of their experience, instead of thanking the researcher for the money, they thanked the researcher for the learning experience. Some of the comments expressed surprise that the game they had known before was no longer ‘just’ a game to them. Participation had opened the possibility of a whole new world of learning, inside and outside of Second Life.
The virtual learning campus provided the participant with a seamless way to learn. Many liked the staged approach reflected by the testing and learning process (necessary as part of the automated control regime for the experimental process), finding it a novel approach to the learning experience. Going from room to room to complete each stage in the learning process possibly made this more fun than an alternative virtual world learning approach utilising a single class room in which all stages of a process might occur. Not knowing where the teleports would lead them in the next stage of their journey gave the environment an exploratory feel. Most participants found the environment very easy to use and welcoming.
The format and the information provided in the slide presentation received, for the most part, positive feedback. The request for more control over the slide show, to pause, forward and rewind, came from both groups. Enabling user control like this was not an option in this experiment, as the information delivery for both groups had to be placed under strict experimental conditions so that only one independent variable changed: the presence or absence of the 3D models.
Even so, had this or a similar lecture not been under experimental conditions, the researcher cannot help but question whether this addition would have lessened the entire experience of the participant. Sharing in the learning process within a set time frame, and the pressure of the quiz after completion, may also have added to the positive experience felt by the participants. Allowing the user to walk away with additional material might have given the participant the convenience to learn more than just the information presented. In addition, a live lecturer, as some participants would have liked to see, might also have satisfied the participants’ requirements for more controlled information.
Technology constraints certainly presented themselves in this experiment, with approximately 20% of the participants from both groups commenting upon technology issues to varying degrees. The major problems related to network latency (lag) and audio dropouts. In a streamed world (such as Second Life), especially when there are many avatars in a SIM, lag is a typical problem. Audio, although not as bad or as frequent as visual lag, does occasionally present a problem in Second Life: the audio stream is occasionally lost and the only way to fix the problem is to re-log the application. Judging from participants’ comments, neither problem seemed to affect their learning experience, and for only 7-9% warranted rating as having an impact. In the experience of this researcher, the majority of lag-class problems are in fact not network lag but recipient computer performance issues. The entire SIM and the various lecture rooms were monitored continually during the experiment; true (network) lag was not observed on the researcher’s computers, nor did the SIM performance statistics monitored during the period demonstrate any significant decrease in performance.
Approximately 5% of people from both groups complained that some of the pictures in the survey instrument were too small, thus potentially obscuring the details of the affected bridges displayed. This could have constrained a participant’s ability to answer the Bloom’s cognitive process of ‘understand’ questions more than the ‘remember’ questions, and therefore may have contributed to perceptions of difficulty in the Bloom’s ‘understand’ portion of the post-quiz.
====5.3.2.3 The Participants: Differences Between Groups====
Whilst the 3D participants were presented with 3D models to aid learning, a number still seemed to be reading the slide show presentation. This effectively provided the 3D participants with 4 channels of learning: slide show pictures, slide show text, audio and models, whereas the 2D participants had only 3 of these channels.
There were 24 slides, 20 of which were learning slides, within a 20 minute lecture session for both groups. This meant a participant had approximately one minute per slide in which they were presented with something new. There were 11 3D models of 4 bridge types, so a new model was presented approximately every 2 minutes. Combining the models with the slides in the same time frame as the 2D participants may have disadvantaged the 3D participants.
The information content delivered to both groups was the same: no more or less technical, and providing nothing new with the exception of the 3D models for the 3D group. Yet from the 3D group’s comments some participants seemed to want more information or simpler explanations, while within the 2D group many commented that it was easy to follow, not too technical and easy to comprehend; none commented that the material was complex. Possibly the difference is not that the 3D group needed more information but rather that, with 4 information channels, too much information was provided in the time allocated. Alternatively the difference might reflect a case of ‘not knowing what you don’t know’ in the 2D group, while the addition of accurately constructed 3D models raised additional questions in the minds of the participants, or improved their general level of attentiveness.
The 3D group found the addition of 3D models to be a useful learning tool. From their comments it seemed that the 3D models of the bridges were perceived to have helped them understand the subject matter better than they felt they would have with a lecture without the models. (Note, however, that in this case the perception is not supported by the test results.) Many participants perceived that the 3D models also made the entire lecture experience more engaging than whatever assumed alternative against which they were measuring the experience.
The focus of the 3D participants was more strongly inside the world rather than the outside world. Furthermore, the extent to which their in-world focus was disrupted brought about more emotional responses than the distractions noted by the 2D participants: the former tended to use repetition, descriptive adjectives and emphatic declamations concerning distractions, while the latter tended merely to note, or comment favourably about, the ability to be distracted. This seems to suggest that the 3D participants experienced a greater feeling of presence and possibly immersion in their virtual world learning experience.
To appreciate these comments, the reader is referred to the literature review where the difference between immersion and presence is discussed (see page 39). Immersion, or ‘system immersion’, is an objective measure: the extent to which a person becomes removed from their outside world to operate within the virtual world space. Presence, by contrast, is a subjective measure: the extent to which a person feels connected inside the virtual world, the feeling of ‘being there’ and their ‘willingness to suspend disbelief’ that they are a part of, and inside, the virtual world.
In the classification model presented by Benford (see Figure 9. Shared Space Technology According to Artificiality and Transportation), virtual reality environments are placed on a scale of artificiality and transportation. Transportation is the degree to which a participant becomes removed from their local space to operate in a remote space, which in Benford’s model is based purely upon the physical aspects of the virtual environment.
In this study the strong difference in the emotion and terms consistently used by participants in the 2D versus 3D lectures seemed to suggest that, given the same virtual reality technology (desktop CVE), greater transportation occurred for the 3D participants. The 3D participants became removed from their local world distractions and were transported into the virtual remote world. This in turn led to a higher degree of presence within the virtual environment. The 2D comments of distraction compare with the results obtained by Martinez, Martinez, & Warkentin (2007) reviewed in Chapter Two Literature Review, who found that when participants were presented with a 2D lecture in world they reported distractions or a ‘disconnect’ from the lecture (see p. 86).
The degree of presence in the environment is often linked with desktop virtual worlds based around social interaction. As discussed in the literature review, Schroeder defines presence in terms of presence, copresence and connected presence (see Figure 10), which can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. As discussed in the literature review, for a social virtual world the level of presence is greater than for a game virtual world due to the social connective aspects that occur within the world. Heeter also holds that an individual’s presence is increased when social relationships are formed within the environment. In this study, however, both groups were given the same social interactive aspects, but it seems that the introduction of 3D models produced a higher level of presence for the 3D participants. The 3D participants clearly displayed more ‘ownership’ over their learning experience than the 2D group.
Of interest, this higher level of engagement by the 3D group carried over to the volume of survey responses. The 3D group provided more descriptive and richer comments than the 2D group: rather than the short dot points often used by 2D participants, the 3D participants tended to use sentences in their open comments. The researcher was left with the subjective impression that the 3D participants, as a group, were motivated to greater detail and consideration in their comments than was typical of the 2D group. Although not specifically measured, it is possible that the 3D group were still engaged with the experience even after they had left the lecture environment.
A further noticeable difference between the two groups was their relative concept of time. The 2D group made more comments that the slide show was a bit slow, whereas the 3D group made more comments that the lecture was too fast. (Note the actual timing and content were identical.) This differing perception of time is most likely due to a combination of the extra channel of information delivered to the 3D participants (the 3D models), which had to be absorbed in the same time span as the 2D participants, and the higher level of engagement the 3D participants expressed about their learning experience. One cannot rule out the effects of a possible unmeasured elevation of participant stress from the more “intense” learning experience vectored on the addition of the extra information channel.
==5.4 Discussion of Results==
This research sought to find the difference in learning outcomes between participants presented with two different delivery methods: a 2D slide show, and the same 2D slide show augmented with 3D models and simulations.
For the quantitative analysis, the level of learning outcomes was measured as the difference in achievement scores between the 2D group and the 3D group.
Did they learn more after being presented with a 2D slide show or a 3D simulation model? From the results of both groups there was a slight, not statistically significant, lean towards the 3D group’s results on the total post-quiz scores. When analysed within each of Bloom’s cognitive process of ‘remember’ and ‘understand’, the 3D group performed slightly better than the 2D group (most notably at the upper score ranges) in the ‘remember’ dimension but there was no appreciable difference in the ‘understand’ dimension. The subjective interpretation might be that, with respect to the ‘remember’ outcome, the 3D approach may assist ‘stronger’ students to do better than they would otherwise do under the 2D approach, but that there was little impact on the ‘average’ student. The study did measure the ‘instantaneous’ ‘remember’ outcome, not the ‘remember’ outcome over an extended period, which might reveal greater differences.
Regardless of any anecdotal differences that may have been found, and the foregoing comments, the statistical analysis of the post-quiz scores across both groups revealed no statistically significant difference between the two groups’ learning outcomes within the confines of this experimental model. Thus the hypotheses defined for the quantitative analyses of this experiment remain unconfirmed.
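The kind of two-sample comparison underlying such a conclusion can be sketched as follows. The chapter does not specify the exact test used here, so this is a generic Welch’s t-statistic (unequal variances) computed on hypothetical scores, not the study’s data or its actual method.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t-statistic (does not assume equal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / standard_error

# Hypothetical post-quiz totals (NOT the study's data):
scores_3d = [13, 15, 14, 16, 12, 14]
scores_2d = [12, 14, 13, 15, 11, 14]
t = welch_t(scores_3d, scores_2d)
# A |t| well below the critical value (around 2 for moderate samples at
# alpha = 0.05) would be consistent with 'no statistically significant
# difference' between the groups.
```

The sign of the statistic captures the direction of any lean (here, toward the first group passed in), while significance depends on comparing its magnitude with the critical value for the relevant degrees of freedom.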
Learning outcomes for a student are traditionally measured by achievement scores. Although an important measure, this provides no insight into the learning experience of the student. A high learning outcome measured quantitatively is not, of itself, a measure of success from a qualitative perspective: quantitative methods focus on outcomes, while qualitative methods focus upon the journey that leads the student to their end results.
While both the 2D and 3D groups were strongly positive about the learning experience, the qualitative analysis of both groups’ open comments revealed noticeable differences between the two groups’ journeys to their end results. The 3D group tended towards greater ‘ownership’ of their learning experience, and while the 2D group tended merely to observe the opportunity for distraction (in some cases as a benefit), the 3D group almost universally expressed resentment, or even anger, about the same distractions.
The experimental constraint of ‘same time’ may have adversely impacted the 3D group’s scored outcome due to the delivery of an additional information channel over the same time frame – even though at least 2 of the channels were effectively redundant. As the two groups performed the same, and if anything the 3D group did slightly better, such a conclusion is by no means certain. The effect may rather have been to induce greater involvement by raising the stress factor for the 3D group, forcing greater participation in order to ‘keep up’ with the information flow.
The presence of the 3D models was widely perceived by the participants to enhance their understanding of the subject matter – although the scoring suggests that they assisted with remembering rather than understanding.
From the literature review of previous research it was found that virtual world learning takes longer than traditional methods (Arreguin, 2007; Joseph, 2007). In this lecture we provided 20 minutes to both groups for a post-quiz of 20 questions. Although, judging from their comments, the 2D participants did not report a problem with the time allocated to the lecture, the post-quiz results (particularly for Bloom’s ‘understand’) suggest that both groups may have needed more time in which to understand the material, particularly the 3D group, who were presented with an extra, interactively explorable channel of information by which to learn.
Of the Likert scale questions, questions 28 and 29 showed the most variation across the participants. These questions were specific to a participant’s learning experience. Question 28 asked if they found the learning experience better than their usual methods of learning. The vast majority from both groups agreed.
When asked in the Likert scale if the information provided was enough to understand the topic, the 2D group was slightly more satisfied than the 3D group. The open questions shed some light on this issue, with more 3D group participants expressing a desire for more time to assimilate what was provided and more opportunity for self-driven information collection, questioning and investigation – rather than merely more information per se. This difference might also reflect the greater level of participation, immersion, presence or transportation evidenced in the 3D group.
==5.5 Conclusion==
In answering the research question: How effective is it to learn in a virtual world using a traditional 2D slide show method compared to that of a 3D interactive simulation? The conclusions from this research are clear, and not necessarily as expected by the researcher at the commencement of the study:
#Transportation of a 2D real world lecture presentation into a virtual world situation is an acceptable use of the virtual world technology producing no statistically different outcome for Bloom’s ‘remember’ and ‘understand’ and combined cognitive processes at the mean, although there are some indicators that the ‘remember’ outcome might be enhanced at the upper and lower deciles of participant ability through augmentation of the 2D presentation with 3D representation and simulation.
#Adoption of 3D visual aids is not a pre-requisite for successful learning in a virtual space.
#The presence of 3D visual aids assisted participants’ perceptions of enjoyment, engagement, presence, immersion and/or transportation, and may therefore have a longer term effect on participation rates where participation in learning is purely voluntary.
Projecting these conclusions into a practical teaching scenario, where outcomes are the same and only instantaneous outcome measures are considered (the researcher did not examine long term outcomes), and after taking account of the input costs of material preparation, it is clearly more cost effective to use the 2D presentation strategy for delivering virtual world courses. This conclusion is sustained where cost is measured in terms of the time required for input preparation regardless of sourcing (i.e. where the 3D models are acquired for no input hours and no financial cost, the observation no longer holds), and outcomes are measured in terms of test scores taken within a short period of the learning.
Where the outcome measure includes participant perception of the experience, the 3D augmented learning approach is indicated, but in this scenario, grading the relative ‘worth’ of the greater experiential outcome is more difficult and it is less clear how it can be factored absolutely into a cost benefit analysis.
==5.6 Opportunities for Further Research==
Experimental research, as the name suggests, applies scientific methods and analysis to gain new insights so that other researchers can build on the experiment to reproduce, reform and critique it. In this section the researcher proposes some opportunities for further research based upon the analysis of the results discovered in this research.
===5.6.1 Improving Instrument Reliability===
One limitation that is difficult to avoid emerged from the formal (statistical) testing of instrument reliability. Essentially, in this experiment there were too few questions within each of the two Bloom’s cognitive process test sets to provide a conclusive reliability measure of the instrument. Increasing the number of questions within each group would certainly provide more data points with which to measure achievement results and, as a consequence of how the reliability measure algorithm works, would improve instrument reliability. The first obvious problem faced with the pre-quiz and post-quiz design for this type of experiment is that, as the number of test questions (data points) is increased, there is a point at which the testing might materially affect the training experience and therefore the outcomes, as the participants would eventually start learning from the quiz questions.
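The relationship between test length and reliability noted above can be sketched with the Spearman-Brown prophecy formula, a standard psychometric result for predicting the reliability of a lengthened test. This is a generic illustration; the figures below are placeholders, not the study's measured reliabilities.

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability of a test whose length is multiplied by
    `length_factor`, given its current reliability (Spearman-Brown
    prophecy formula)."""
    k, r = length_factor, reliability
    return k * r / (1 + (k - 1) * r)

# Illustrative only: doubling a 4-item subscale with assumed reliability 0.50
predicted = spearman_brown(0.50, 2)
# Lengthening always raises predicted reliability, which is why adding
# questions improves the reliability measure, as described above.
```

The formula also makes the trade-off concrete: each doubling yields diminishing returns, while the risk of participants learning from the quiz itself grows with every added item.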
If the number of questions were increased, the range of information presented to the participants would also have to increase. Increasing the range of information provided would require additional time to be allocated to the lecture and possibly each topic therein. There is a point at which the length of time required to complete the lecture and the quiz / survey combined would affect the quality of the results, as the voluntary participants might judge the exercise was taking too much time and rush the final testing / survey stages.
===5.6.2 Course versus Lecture===
The experiment focussed on a single lecture. Measuring the affordances over a sequence of lectures using a similar experimental model would provide additional depth of analysis and would neutralise any initial ‘wow’ factor that might have influenced participation and attentiveness in this single event based experiment. It is possible that differences in outcomes might be more apparent between the two groups if a course was involved rather than a single lecture. There are other factors that might influence such an experiment design – such as motivation for attending the course in the first place.
===5.6.3 Introducing a Real and Robot Presenter to the Experience===
The 3D group displayed a higher level of presence in this research study. The contributing factor in this observed difference between the two groups was, prima facie, the 3D models. The opportunity for further research lies in the introduction of a presenter (even an automated robot presenter) into the lecture experience, to see whether the increased level of presence displayed by the 3D group would occur for both groups given a live or virtually-live lecturer. As presence is generally shown to be increased by relationships with other people within a virtual world, the introduction of a lecturer may add further insight as to why the 3D group displayed a higher level of presence given that they differed only in the addition of 3D models.
===5.6.4 Testing Other Bloom’s Cognitive Processes===
The 3D group seemed to believe that the models contributed to their understanding of the subject matter. Testing higher levels of Bloom’s cognitive processes such as Apply, Analyse, Evaluate and Create may reveal whether this increase in understanding produces differences between the two groups at the higher levels of Bloom’s cognitive processes.
===5.6.5 Outcome Measurement Over Time===
In this experiment the post-quiz was given directly after the lecture. Re-testing the participants over a number of periods would assess which group retained the information better for longer, and the extent to which the two approaches impacted understanding outcomes over time. The experiment would probably require a vastly greater number of initial participants so that each time lagged testing group could be tested once at different intervals, rather than re-tested, so that the testing itself did not colour the results. The researcher suspects that the greater level of post lecture engagement demonstrated by 3D participants might result in both slower degradation of the ‘remember’ outcome and a post lecture improvement in the ‘understand’ outcome over time.
===5.6.6 Comparison to Real-World Training===
Perhaps the most obvious inquiry that presents itself for further research is to include another experimental group. As the virtual world 2D lecture was effectively a real world lecture delivered in a virtual world, the addition of a real world participant group operating under the same constraints as the virtual world groups would provide an interesting control reference for virtual-real world comparison of outcomes. Providing the 2D presentation to real life participants may provide further insight into the differences of the virtual learning experience, in addition to providing a control group based around more traditional learning methods.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
<div class="nonumtoc">
=CHAPTER 5: Discussion & Conclusion=
==5.1 Introduction==
This chapter provides the analysis of the results presented in the previous chapter along with a discussion of these results and opportunities for further research.
In analysis of the results the researcher has applied both quantitative and qualitative methods in order to answer the research question: How effective is it to learn in a virtual world using a traditional 2D slide show method compared to that of a 3D interactive simulation?
Quantitative methods were applied to participants’ achievement scores for the pre- and post-quiz and to the Likert scale results. Qualitative methods were used on participants’ responses to the post-survey open questions.
Discussion of results applied triangulation combining both the quantitative and qualitative results in order to better understand the 2D and 3D group’s learning experience and any differences that were observed between these groups.
This chapter concludes with a discussion on the opportunities for further research.
==5.2 Quantitative Analysis==
===5.2.1 The Results of the Hypothesis===
The aim of this study was to determine if two lectures differing only in the presence or absence of 3D models (and therefore employing either 2D or 3D learning delivery) in an online 3D virtual world would produce different learning outcomes for Bloom’s cognitive processes of ‘remember’ or ‘understand’. The following hypotheses were formed:
*(H<sub>1</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
*(H<sub>2</sub>): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
Measured statistically, neither of the above hypotheses was sustained by the scored (quiz) testing results, as there was no significant statistical difference between the results of the two groups. The researcher applied statistical significance testing as the foundation for rejection of the null hypothesis formation of the above hypotheses (i.e. that, in each case, the process will result in NO significant difference) based upon a statistically measurable difference. If no measurable difference is found between the samples, the primary hypotheses remain unconfirmed. An unconfirmed hypothesis is not thereby false; rather, it remains capable of disproof and is simply unconfirmed (Karl Popper’s principle of falsifiability).
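The significance-testing logic used throughout this chapter can be sketched as an independent two-sample t-test. The sketch below is a minimal pure-Python version of Student's pooled-variance test, and the scores are illustrative placeholders, not the study's data.

```python
import math
from statistics import mean, variance

def t_test_ind(a, b):
    """Student's independent two-sample t statistic (equal variances
    assumed) and its degrees of freedom."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Placeholder post-quiz scores for illustration only
scores_2d = [11, 9, 12, 10, 13, 11, 8, 12]
scores_3d = [12, 10, 13, 11, 12, 14, 9, 11]
t, df = t_test_ind(scores_2d, scores_3d)
# If |t| is below the critical value for df at alpha = 0.05, the null
# hypothesis (no difference) stands and H1/H2 remain unconfirmed.
```

In the study proper the p-value for the observed t was compared against α = 0.05; failing to clear that threshold is what leaves the hypotheses unconfirmed rather than disproved.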
As the researcher was not able to refute the null hypothesis on the basis of a raw statistical comparison of the test scores, the researcher turned to the real data results to see if there was an actual (although possibly not significant) difference between the results of the two groups, or any clearly emerging or suggested trends that might qualify the implications of the raw statistical comparison.
===5.2.2 The Results of the Pre-Quiz===
====5.2.2.1 Pre-Quiz Total Scores====
Analysis of the results in the previous chapter for the total pre-quiz scores (i.e. both cognitive processes combined) between the 2D and 3D groups shows:
*The pass rates (a score of at least 4 out of 8) for the 2D and 3D groups were 51% and 55% respectively; 4% more of the 3D participants passed than 2D participants.
*Average scores (mean) for the 2D and 3D groups were 3.69 and 3.68 respectively. Both groups’ average scores were effectively the same.
*Median scores for the 2D and 3D groups were both the same with a value of 4.
*Mode for the 2D group was lower than the 3D group, 3 and 4 respectively; effectively, more 2D participants scored a 3 whereas more 3D participants scored a 4. A score of 3 was obtained by 31% of the 2D group and 23% of the 3D group, and a score of 4 by 20% of the 2D group and 23% of the 3D group.
*The range of scores for the 2D group was less than the 3D group, 1-6 and 0-7 respectively.
*Standard deviation for the 2D group was less than the 3D group, 1.372 and 1.479 respectively; the 2D group’s total scores were therefore more tightly clustered around the mean (average score) than the 3D group’s.
*Skewness was positive for the 2D group and negative for the 3D group, 0.007 and -0.188 respectively. This demonstrates that the 3D group’s scores were slightly higher than the 2D scores. This skewness difference is due to the mode difference between the groups, as both the median and average scores were equal.
*Kurtosis was negative (platykurtic) for both groups. Platykurtic distributions are flatter at the top of a distribution curve and less peaked around the average score (mean). The slight difference in kurtosis values across the two groups accounts for the probability density value being lower in the Gaussian distribution graph in Figure 62 (Results: Pre-Quiz Totals - Histogram & Bell Curve).
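The descriptive measures used in the list above (mean, median, mode, standard deviation, skewness and excess kurtosis) can be computed with the standard sample formulas. This is a generic sketch; the scores below are placeholders, not the study's quiz data.

```python
import math
from statistics import mean, median, mode, stdev

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness (0 = symmetric,
    positive = right tail, negative = left tail)."""
    m, s, n = mean(xs), stdev(xs), len(xs)
    return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in xs)

def excess_kurtosis(xs):
    """Sample excess kurtosis: negative = platykurtic (flat-topped),
    positive = leptokurtic (peaked around the mean)."""
    m, s, n = mean(xs), stdev(xs), len(xs)
    g = (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) \
        * sum(((x - m) / s) ** 4 for x in xs)
    return g - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))

# Placeholder scores for illustration only
quiz = [1, 2, 3, 3, 4, 4, 5, 6]
summary = (mean(quiz), median(quiz), mode(quiz), stdev(quiz),
           skewness(quiz), excess_kurtosis(quiz))
```

These are the same conventions used in the bullet lists: a negative excess kurtosis corresponds to the "platykurtic" label and a positive one to "leptokurtic".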
'''Summary & Interpretation: Pre-Quiz Total Scores'''
There was a 4% higher pass rate for the 3D group and the mode value of the 3D group was higher than the 2D groups’ total pre-quiz scores. The pass rate was higher because of the greater mode value obtained by the 3D group. The 3D group obtained a greater range of scores than the 2D group thus providing the 2D group with a tighter (smaller) distribution of scores around the mean.
Given the distribution of scores between the two groups, the 2D group had a higher probability of scoring around the mean than the 3D group (28% and 26% respectively). Thus, although the 3D group obtained a higher pass rate and mode value, a participant in the 2D group was 2% more likely to score a 4 than a participant in the 3D group. This small percentage difference can be seen in the inverse normal distribution graph in Figure 61: in the lower and higher quartiles the 2D group varied away from the 3D group. In the lower quartile, participants in the 2D group scored higher; in the higher quartile, participants in the 2D group scored lower. This slight shift away from the 3D group curve toward the mean demonstrates that the 2D group was more likely to obtain the mean value than the 3D group.
Although there was a difference in the 2D and 3D group pre-quiz scores, the percentage difference was, in the opinion of this researcher, effectively immaterial; showing that both groups started with the same level of knowledge on the topic ‘The Physics of Bridges’ prior to the lecture.
The result of question 21 in the Likert scale survey is consistent with the above analysis. When asked to scale their level of knowledge on the topic ‘prior’ to the subject, the low plus medium scores for the 2D and 3D participants were 98% and 96% respectively. The response that their knowledge was high from the 2D and 3D participants was 2% and 4% respectively. This gives a 2% difference for both responses, which is consistent with the real results of the data analysis above. So the difference in the participant groups’ subjective assessment matches that shown by the tested assessment.
====5.2.2.2 Pre-Quiz Remember and Understand Scores====
In the previous chapter we found that when a significance test was performed independently on Bloom’s cognitive processes of ‘remember’ and ‘understand’ for the pre-quiz a significant difference was found between the two groups. The 2D group scored significantly higher than the 3D group for the Bloom’s cognitive process of ‘remember’ (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05), and the 3D group scored significantly higher than the 2D group for the Bloom’s cognitive process of ‘understand’ (t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
The pass rates for Bloom’s ‘remember’ cognitive process for the 2D and 3D groups were 80% and 66% respectively. The pass rates for Bloom’s ‘understand’ cognitive process for the 2D and 3D groups were 35% and 52% respectively. The average scores for the 2D and 3D groups for Bloom’s ‘remember’ were 2.44 and 2.071 and for ‘understand’ 1.25 and 1.60 respectively. The standard deviations for the 2D and 3D groups for Bloom’s ‘remember’ were 1.032 and 0.775 and for Bloom’s ‘understand’ were 1.263 and 0.867 respectively.
The scores for the Bloom’s splits at the pre-quiz stage are of passing interest in this experiment (independent of the post-quiz results) and the significant differences found for these figures were not especially surprising.
This experiment was not designed to measure and compare pre versus post learning outcomes of the participants. Rather, it was designed to find differences between the 2D and 3D groups comparative learning outcomes (i.e. the post-quiz results). In other words, the research was not trying to measure ‘by how much’ learning or understanding improves, but rather the relative difference in the final results between the 2D and 3D groups.
The pre-quiz was given to obtain an indicator of the general knowledge of the material that was to be delivered so that relative differences in outcomes could be normalised against the initial positions.
With the total number of pre-quiz questions being 8, of which each of the Bloom’s cognitive processes was represented by only 4 questions, there were not enough questions in each group to reliably test the true levels of each of Bloom’s cognitive processes of ‘remember’ and ‘understand’ prior to the lecture. With so few data points for the individual processes, small variations in responses produce large variations in final scores. Hence the 2D/3D group variations were not especially surprising.
The problem for the research design was to avoid impacting the outcomes with the measurement instrument itself. The post-quiz was taken within approximately 30 minutes of the pre-quiz, and only a single lecture was delivered between those two measurement points. Providing more than 8 questions in the pre-quiz for a single 20 minute lecture would have increased the risk that the participants learnt from the pre-quiz questions relative to the lecture.
Furthermore, the concept of ‘remember’ and ‘understand’ for Bloom’s cognitive processes prior to instruction does not especially make sense in the context of the experiment. As discussed in Chapter 3 (instrument design), the development of the questions within the instrument was based upon the lecture. ‘Remember’ questions were extracted from the instructional content of the lecture whereas the ‘understand’ questions were derived from material not taught in the lecture. The pre-quiz questions were also specifically targeted at the four bridge types covered in the lecture to calibrate the extent of pre-existing content knowledge.
A participant being tested within each of these levels prior to instruction (over which no certainty of prior topic learning experience can be established) can only be measured with respect to their pre-existing general knowledge of the topic. This may reflect either memory or understanding. The extent to which this analysis grouped the pre-quiz questions into ‘remember’ or ‘understand’ in this discussion, reflects only the researcher’s perfect knowledge of the lecture content as to whether the topic of the question was subsequently directly taught or not in the lecture – not whether the participant was actually remembering or understanding at the pre-quiz stage.
The extent to which the split at the pre-quiz stage matters to the discussion is that if a participant already had an indicative level of ‘understanding’ prior to the lecture, that ‘understanding’ should improve when assessed after the lecture. If one group, for example, starts with a level of 60% and ends with 61%, this is possibly a worse outcome than the other group starting with 45% and ending with 58% (although there is also some discussion that could qualify even that conclusion).
===5.2.3 The Results of the Post-Quiz===
====5.2.3.1 Post-Quiz Total Scores====
An analysis of the results in the previous chapter for the total (i.e. combined Bloom’s) post-quiz scores between the 2D and 3D groups shows:
*The pass rates (a score of at least 10 out of 20) for the 2D and 3D groups were 67% and 77% respectively; 10% more of the 3D participants passed than 2D participants.
*Average scores for the 2D and 3D groups were 10.98 and 11.36 respectively. A 3D participant scored on average 0.38 higher than a 2D participant.
*Median scores for the 2D and 3D groups were 11 and 12 respectively. The 3D participants scored higher in the second quartile than the 2D participants.
*Mode for the 2D group was lower than the 3D group, 11 and 12 respectively. Effectively demonstrating that more 2D participants scored 11 and more 3D participants scored 12. A score of 11 for the 2D and 3D groups were 20% and 21% respectively and a score of 12 for the 2D and 3D groups were 11% and 29% respectively.
*The range of scores for the 2D group was more than the 3D group, 5-17 and 6-17 respectively.
*Standard deviation for the 2D group was slightly more than the 3D group, 2.468 and 2.347 respectively; the 3D group’s total scores were therefore slightly more tightly clustered around the mean (average score) than the 2D group’s.
*Skewness was positive for the 2D group and negative for the 3D group, 0.052 and -0.229 respectively. This demonstrates that the 3D groups’ scores were slightly higher than the 2D scores. This skewness difference is due to the mean, median and mode differences between the two groups’ scores.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.2 and 0.3 respectively. As mentioned above platykurtic distributions are flatter at the top of a distribution curve whereas leptokurtic distributions are higher and peaked around the mean score. The differences in value of kurtosis between the two groups account for the probability density value being higher for the 3D group in the Gaussian distribution graph in Figure 64.
'''Summary & Interpretation: Post-Quiz Total Scores'''
The above analysis finds that the 3D participants scored better overall than the 2D participants in the post-quiz. Although this difference was not statistically significant from the t-test results (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05), the real results indicate that there was a slight difference between the two groups’ results. Analysing the Gaussian distribution curve (Figure 64) shows that the 2D and 3D participants had a 15% and 16% likelihood respectively of scoring a 12 in their total post-quiz score. In general the overall results showed that the 3D group performed better by 1%; this can also be seen on the inverse distribution graph (Figure 63), where the two groups’ curves run almost parallel to one another, with the 3D group performing approximately 1% better in their overall test results.
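The quoted 15% and 16% likelihoods of scoring exactly 12 can be reproduced from each group's reported mean and standard deviation under a normal approximation. The sketch below assumes a continuity band of ±0.5 around the integer score; it uses only the summary statistics given in the lists above.

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def prob_integer_score(k, mu, sigma):
    """Approximate P(score == k) under the fitted normal curve,
    integrating the density over the band [k - 0.5, k + 0.5]."""
    return normal_cdf(k + 0.5, mu, sigma) - normal_cdf(k - 0.5, mu, sigma)

# Reported post-quiz parameters: 2D mean 10.98, sd 2.468; 3D mean 11.36, sd 2.347
p_2d = prob_integer_score(12, 10.98, 2.468)  # roughly 0.15
p_3d = prob_integer_score(12, 11.36, 2.347)  # roughly 0.16
```

The 3D curve assigns slightly more mass to a score of 12 because its mean sits closer to 12 and its spread is a little tighter, which matches the 1% advantage described in the text.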
For question 22 in the Likert scale, which asked participants to scale their level of knowledge on the topic ‘after’ the lecture, the low responses for the 2D and 3D participants were 22% and 23% respectively and the medium responses 73% and 74% respectively. At the medium level the self-assessment was consistent with the test results, with a 1% difference. At the low level the 3D group seemed to be more conservative in their response, perceiving that their knowledge was less than the 2D group’s, although the real result showed the contrary. In either case a 1% difference is within the margin of error.
====5.2.3.2 Post-Quiz Remember Scores====
Analysis of the results in the previous chapter for the post-quiz ‘remember’ scores between the 2D and 3D groups shows:
*The pass rates (a score of at least 5 out of 10) for the 2D and 3D groups were 85% and 93% respectively; 8% more of the 3D participants passed than 2D participants.
*Average scores for the 2D and 3D groups were 7 and 7.32 respectively. The 3D participants scored on average 0.32 higher than the 2D participants.
*Median and mode scores for the 2D and 3D group was 8 for both groups.
*The range of scores for both groups was the same, 3-8.
*Standard deviation for the 2D group was higher than the 3D group 1.8 and 1.6 respectively, with a 0.2 difference between the groups.
*Skewness was negative for both groups with the 2D and 3D skew of -0.6 and -0.9 respectively. As both groups were close to 0 with a 0.3 difference between the two groups this demonstrates that the distribution of the results for both groups was almost symmetrical.
*Kurtosis was negative (platykurtic) for the 2D group and positive (leptokurtic) for the 3D group, -0.7 and 0.7 respectively.
'''Summary & Interpretation: Post-Quiz Remember Scores'''
The post-quiz scores mask a complexity that requires further consideration. Although the 2D group was normally distributed, the 3D group failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05). In order to compare the results of the 2D and 3D groups meaningfully, the researcher needed to look into why the 3D group failed the normality test and what, if anything, this implies for the interpretation of the apparently “better” 3D pass rates.
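The normality check described here corresponds to SciPy's `normaltest`, which implements the D'Agostino-Pearson K² test (combining sample skewness and kurtosis into a single statistic). The sample below is an illustrative bimodal one, chosen to fail the test the way a two-peaked score distribution would; it is not the study's data.

```python
from scipy import stats

# Illustrative bimodal sample (two clusters of scores), clearly non-normal
sample = [3] * 20 + [8] * 20

k2, p = stats.normaltest(sample)  # D'Agostino-Pearson K^2 test
if p < 0.05:
    print("normality rejected: interpret group comparisons with care")
else:
    print("no evidence against normality")
```

A rejection here is a cue, as in the text, to inspect the histogram and density traces rather than rely on mean-based comparisons alone.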
Analysis of the histogram and density traces in Figure 65 shows that both the 2D and 3D histograms display a bimodal distribution with two peaks, at 3 and 8. As can be seen on the density traces graph, the variance of the 2D scores between 3 and 8 was greater, causing the curve to flatten prior to its peak.
Although the statistical analysis determined that the difference between the pass rates and means (by which the 3D group was higher than the 2D group) was not significant when taken as a whole, there is a clear visual difference between the graphs that deserves explanation. When considered within specific score ranks the outcome slightly favours the 3D group because:
#2D group participants were 8% more likely to score 4 or below,
#3D group participants were 6% more likely to score 8 or above, and
#3D group participants were 2% more likely to score 9 or above.
This analysis can be seen easily in the frequency table below (Table 13. Frequency Table: Post-Quiz Remember).
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|colspan="4" align="center" |'''Post-Quiz Remember'''
|-
|align=center|'''Score'''
|align=center bgcolor="#DDADAF" |'''2D'''
'''(Cumulative)'''
|align=center bgcolor="lightblue" |'''3D'''
'''(Cumulative)'''
|align=center bgcolor="lightgrey"|'''Difference'''
'''3D vs. 2D'''
|-
|align=right |0
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |1
|align=right | 0%
|align=right | 0%
|align=right | 0%
|-
|align=right |2
|align=right | 0%
|align=right | 0%
|align=right | 0%
|- bgcolor="lightgrey"
|align=right |3
|align=right | 4%
|align=right | 4%
|align=right | 0%
|-
|align=right |4
|align=right | 15%
|align=right | 7%
|align=right | -8%
|- bgcolor="lightgrey"
|align=right |5
|align=right | 25%
|align=right | 13%
|align=right | -12%
|-
|align=right |6
|align=right | 33%
|align=right | 27%
|align=right | -6%
|- bgcolor="lightgrey"
|align=right |7
|align=right | 47%
|align=right | 41%
|align=right | -6%
|-
|align=right |8
|align=right | 78%
|align=right | 80%
|align=right | 2%
|- bgcolor="lightgrey"
|align=right |9
|align=right | 98%
|align=right | 96%
|align=right | -2%
|-
|align=right |10
|align=right | 100%
|align=right | 100%
|align=right | 0%
|}
<p align="center" >'''''Table 13. Frequency Table: Post-Quiz Remember (Rounded)'''''</p>
The frequency table shows a cumulative analysis of each group at a particular score. As can be seen in the table, the 3D cumulative percentages were in general lower than the 2D percentages for each score below 8, i.e. fewer 3D participants were confined to the lower scores. The implication is therefore that the relative performance of 3D versus 2D ‘remember’ outcomes is slightly better at the higher rankings (80% and above), but slightly worse at the lower pass mark scores.
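A cumulative table like Table 13 can be derived mechanically from the raw scores. This is a generic sketch; the score lists below are placeholders, not the participants' data.

```python
from collections import Counter

def cumulative_percent(scores, max_score=10):
    """Percentage of participants scoring at or below each value,
    as in the cumulative columns of a frequency table like Table 13."""
    counts = Counter(scores)
    n, running, table = len(scores), 0, {}
    for s in range(max_score + 1):
        running += counts.get(s, 0)
        table[s] = round(100 * running / n)
    return table

# Placeholder scores; the difference column is 3D minus 2D at each score
cum_2d = cumulative_percent([4, 5, 6, 7, 8, 8, 8, 9], max_score=10)
cum_3d = cumulative_percent([5, 6, 7, 8, 8, 8, 9, 9], max_score=10)
diff = {s: cum_3d[s] - cum_2d[s] for s in cum_2d}
```

Reading the output the same way as the table: a negative difference at a score means fewer 3D participants were at or below that score, i.e. the 3D group was doing better up to that point.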
While the difference in the means may not be statistically significant, the results suggest that the outcomes at particular bands are potentially significant. To put this into context; if the desired group learning outcome is to achieve a pass or better, both methods of delivery were similar, but if the desired outcome is to maximise the potential scores, the 3D delivery might be indicated.
In general, the overall performance of both groups was better than for the score obtained in Bloom’s cognitive process of ‘understand’ which we will discuss in the next section.
====5.2.3.3 Post-Quiz Understand Scores====
Analysis of the results in the previous chapter for the post-quiz ‘understand’ scores between the 2D and 3D groups shows:
*The pass rates (a score of at least 5 out of 10) for the 2D and 3D groups were 35% and 36% respectively; 1% more of the 3D participants passed than 2D participants.
*Average scores for the 2D and 3D groups were 3.98 and 4.04. A 3D participant scored on average 0.05 higher than a 2D participant.
*Median and mode scores for the 2D and 3D group was 4 for both groups.
*The range of scores for the 2D group was more than the 3D group, 0-8 and 1-8 respectively.
*Standard deviation for the 2D group was slightly higher than the 3D group 1.48 and 1.46 respectively. A 0.02 difference between the groups shows very little difference in standard deviation.
*Skewness was positive for both groups, with 2D and 3D values of 0.068 and 0.332 respectively. As both values were close to 0, with a 0.27 difference between the two groups, the distribution of the results for both groups was almost symmetrical.
*Kurtosis was positive (leptokurtic) for both groups, with the 2D and 3D values being 0.558 and 0.010 respectively. The 0.55 difference shows that the two groups’ kurtosis values were quite different.
'''Summary & Interpretation: Post-Quiz Understand Scores'''
From the above analysis both groups scored almost the same for Bloom’s post-quiz ‘understand’ results. This is clear from a study of the histogram and Gaussian distribution curve in Figure 66: both the 2D and 3D data points are almost identical.
Further, the frequency distribution comparison of the two groups confirms that the scored results at each rating band of the 2D and 3D groups exhibit no considerable difference.
Bloom’s cognitive process of ‘understand’ is a higher level cognitive process than ‘remember’. Given the pass results and the mean, median and mode scores, both groups scored ‘badly’ (35% – 36%) in Bloom’s cognitive process of ‘understand’. On the face of it, the results suggest that both groups did not show a ‘high’ level of understanding of the subject matter after training; however, it should be remembered that the mean, median and mode results are a reflection of the difficulty relationship between the questions testing understanding and the lecture itself. The decision was made during the design stage to include some ‘very high’ difficulty questions in the understanding question set to ensure a real test of the achieved level of understanding. Some additional light is shed on these results in the Likert scale and qualitative analysis that follows.
This research is primarily interested in the comparative difference of the 2 delivery methods, rather than the absolute scores, and for this purpose the results suggest that there is no significant or effective difference between the 2D and 3D group testing (quiz) results for the ‘understand’ cognitive process, within the confines of this experimental process.
===5.2.4 Likert Scale Analysis===
The above analysis of the quiz results showed a positive result for Bloom’s cognitive process of ‘remember’, whereas for Bloom’s ‘understand’ there seemed to be fewer participants in both groups who understood the subject matter of ‘The Physics of Bridges’ to the same level that they remembered it. In order to understand this result we turn to the Likert scales, where we asked the participants to assess the quality of the delivery method. Questions 23 and 24 specifically addressed these questions.
*Question 23 asked whether “the subject matter was clear and informative”. The 2D and 3D groups’ responses were positive 98% and 100%, and neutral 2% and 0%, respectively. With the exception of the 2% neutral response, it would seem that the majority of people found the subject matter to be clear and informative. Of interest, the 2% neutral result was a single participant who actually performed better than the group’s average score in the post-quiz results for both cognitive processes of ‘remember’ and ‘understand’, with z-scores of 0.54 and 0.69 respectively. Given their actual results, it seems that within their group this participant understood the material better than they remembered it.
*Question 24 asked whether the lecture was detailed enough to understand the subject matter. The 2D and 3D groups’ responses were positive 100% and 93%, and neutral 0% and 7%, respectively. Of interest were the neutral responses, which all came from the 3D group. These were made up of 4 participants, all of whose post-quiz z-scores in both cognitive processes of ‘remember’ and ‘understand’ were below the group’s average, with the exception of one who scored better on their ‘understand’ post-quiz score than on their ‘remember’ score.
From the above results for questions 23 and 24, the majority of participants perceived that the lecture material was clear, informative and detailed enough for them to understand the subject matter. The few in the 3D group who were only neutrally satisfied that the level of information detail was sufficient to understand the topic achieved post-quiz z-scores that were below average for the total group, so their self-assessment seemed to be correct.
Question 29 asked if the topic was appropriate to virtual world learning. This question was asked in order to gain an understanding of a participant’s view on the choice of topic delivered for instruction. The majority response for both groups was positive, with the 2D and 3D groups’ responses positive 84% and 79% respectively and neutral 13% and 18% respectively. Within the 2D and 3D groups the neutral scores accounted for 7 and 10 participants respectively. For the neutral participants in the 2D group, the z-scores showed that 4 performed below average for the cognitive process of ‘remember’ and 2 for the cognitive process of ‘understand’. Within the 3D group, the z-scores showed that 5 performed below average for ‘remember’ and 7 for ‘understand’. It seems from these results that although the majority of participants were positive about the choice of topic, a few were neutral about the appropriateness of the material to the environment, and more so in the 3D group, in spite of the fact that the material was identical in both cases. Given their z-score results, the neutral responders in the 2D group still performed better for ‘understand’ than ‘remember’, while the neutral responders in the 3D group appeared not to ‘remember’ or ‘understand’ the topic well – suggesting their relative (to the group) self-assessment was consistent with their relative scored outcomes.
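The z-score comparisons above standardise a participant's raw quiz score against their own group's distribution, so that 'above average' and 'below average' are comparable across the 2D and 3D groups. A minimal sketch of that calculation (the score list is a hypothetical placeholder, not the study's data):

```python
from statistics import mean, stdev

def z_score(score, group_scores):
    # Standardised score: how many sample standard deviations a
    # participant sits above (positive) or below (negative) the
    # mean of their own group.
    return (score - mean(group_scores)) / stdev(group_scores)

# Hypothetical post-quiz scores for one group
group = [2, 3, 4, 4, 4, 5, 5, 6]
print(round(z_score(5, group), 2))  # positive: above the group average
```

A participant's 'remember' and 'understand' z-scores are each computed against the corresponding group distribution, which is how the analysis can say a responder was above average on one cognitive process and below on the other.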
Question 28 asked whether the in-world learning method offered a better learning experience than the participant’s usual (real world) learning methods. The responses for the 2D and 3D groups were positive 74% and 73%, neutral 13% and 18%, and negative 3% and 3%, respectively. Although the overall results were positive, there was more variance with respect to quiz scores in the responses to this question.
Question 26 asked participants if they experienced any technical difficulties. The majority of participants in both groups did not indicate that they had had any technical difficulties: the responses for the 2D and 3D groups were ‘No’ 91% and 93%, and ‘Yes’ 9% and 7%, respectively. For the participants that answered yes, the major problems were sound and picture loading delay (lag). All of these people commented that it was only for a short period and the problem was rectified quickly. Although only a small number of participants answered yes to this question, the open-format questions showed that slightly more had experienced some technical issues (although apparently not perceived as sufficient to warrant a “yes” in this question), which will be discussed in the next section.
This group of questions essentially assessed the participant’s perception of quality, appropriateness, purpose and “fit” to the medium of the experience. Necessarily the responses to these questions are likely to be coloured by the participant’s perception of the lecture delivery system experienced (i.e. 2D or 3D). Throughout this group of questions the responses were very strongly positive while the worst grade with a significant number of responders was neutral (excluding Q26). With the exception of the assessment of the clarity of the material, the Likert assessments slightly favoured the 2D delivery method.
The slight favouring of the 2D delivery could be either an absolute result, or a result coloured by raised expectations of one or other of the two delivery methods. We need to investigate, therefore, the qualitative analysis of the open questions to adequately interpret this slight bias in the results.
Question 26 was a check-question to allow explanation of the results in the other questions, should those results have proven dramatically negative.
==5.3 Qualitative Analysis==
From the qualitative analysis of the post-survey responses, many aspects of the participants’ learning experience emerged, as well as differences between the two groups in this study.
===5.3.1 Thematic Analysis Results===
As discussed in the previous chapter, the results of the post-survey open questions were grouped into themes and coded for qualitative analysis in order to provide further insight into the achievement results and the learning experience of participants. Four themes were found on analysis of the data, as follows:
*Virtual World Learning
*Virtual Learning Campus
*Lecture Delivery
*Survey Instrument
In this section we provide a thematic analysis of these themes that emerged from the post-survey.
====5.3.1.1 Virtual World Learning====
This theme was specifically related to the use of the virtual world platform as a learning tool, rather than to the delivery method of the presentation.
Convenience was the main factor mentioned by both groups. The conveniences identified included: doing it from home, in my own time, and not having to travel in order to learn. These sorts of comments are not specific to virtual world learning technology, as today many educational courses cater for students online. However, there was a sense of presence that participants felt from “being there with other people” and seeing others learn that seemed to make the experience more enjoyable to them than traditional or alternative learning methods. Quite a few commented on how the experience felt “personal like they were really sitting in a lecture room taking the course”; the atmosphere was relaxed and soothing, providing less pressure than traditional classroom methods of learning. These comments are interesting, partly because the lecture mirrored a real-world lecture in that it could not be “paused” by a participant and ran for a fixed time per slide, and a fixed time in total, so to some extent it was more rigid in delivery format than a real-world lecture, in which the lecture might be paused while a question is asked and answered.
Another theme that emerged was that this medium offered a new way of learning that was ‘on demand’, rather than a planned course one would have to prepare for in advance. Similar to searching the web to find out about a specific topic, participants felt that this medium offered them a way to learn new material when they wanted, and to experience that material rather than just read it on a webpage. The lectures ran on a continuous loop over the experimental period – so this perception is reasonable, in spite of the fact that the lectures were not actually ‘on demand’.
The technology seemed to offer a learning medium that could reach people who traditionally would not engage in formal learning, or who had never before used the virtual world for learning. It seemed to inspire people to want to learn more and do more learning exercises both in and out of Second Life. For many participants this was a new experience: they had never thought about using online virtual worlds as a learning platform, having only used the medium as a game rather than for taking a course. After experiencing this study, many were inspired to seek out more learning in Second Life or even in real life.
The overall impression from all the participants was that the virtual world learning experience was fun and enjoyable. Very few negative comments were made about the experience, other than observations that the format had the potential not to be taken seriously, or might make it possible to cheat. The experience seemed to open people’s minds to the opportunities for virtual world technology to be put to serious use rather than just as a gaming environment. One participant’s comment sums up the general impression of this technology being used as a learning tool:
<blockquote>
I'm still not convinced that virtual learning can replace learning in real world but now I think it might be possible.
</blockquote>
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.2 Virtual Learning Campus====
This theme included comments made about the virtual learning campus, the setup and operations of the entire virtual learning environment in which the experiment was conducted.
The majority of comments were that participants found it ‘user friendly’ and ‘easy to use’. The layout of the different rooms seemed to provide a fun way for them to learn. Only 2 people commented on having a problem with the signage: when they got to the post-survey room they missed the board that told them how to take the post-quiz.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
====5.3.1.3 Lecture Delivery====
This theme is where the majority of comments from participants were made. These comments related directly to a participant’s learning experience of the research project. The range of comments was coded into sub-categories: format, information content, learning, facets of 3D learning, instruction, focus, navigation and technology constraints.
=====5.3.1.3.1 Format=====
This theme included comments on the layout and format of the slide presentation. The comments from both groups were mostly positive. Participants could offer comments in the positive, negative or general sections of the survey. In total there were 11 clearly positive and 3 clearly negative comments in this theme from the 2D group, and 24 positive and 1 negative from the 3D group.
The positive comments praised the layout of the slides and the way the information was presented. A few more negative comments came from the 2D group: one participant wished they had the ability to interact with the pictures on the screen, another wanted annotation on the images (similar to the interaction question), and a third had problems with the colour differentiation of the tension and compression markings (tension and compression were shown in red and green respectively, suggesting either colour-blindness or graphics card faults). Only one person from the 3D group made a negative comment in this area, identifying a desire for more pictures on the slides (the slides in the 2D and 3D lectures were identical).
While the largest proportion of the responses to the general comments question was provided by the 3D group, a common suggestion received from both groups concerning the format was that they wished the presentation could be paused, or controlled by forwarding or rewinding. As a proportion of each group that actually provided a comment at all, this suggestion was marginally more frequent among the 2D participants.
With respect to the 3D group’s comments about presentation speed, it seemed that although they had been presented with a model and voice-over that mirrored the images and text of the slides, they still desired the opportunity to read the slides to view the information. The time per slide and the slides themselves were identical in both the 2D and 3D lectures, and set to allow sufficient time for reading the slide – in fact the voice-over effectively read the slide to the participant. In the 3D case the addition of the 3D models in the same time window meant that participants had an additional vector of information to absorb in the same amount of time as the 2D participants. The researcher’s impression from the comments in this respect is that in the 2D case the motivator was the desire to review and contemplate the information, while in the 3D case it was more to do with the ability to absorb multiple information vectors simultaneously.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.2 Information Content=====
This theme included comments to do with information content in the presentation. There were 56 comments from the 2D group and 33 from the 3D group.
For the most part people found the presentation very interesting and informative, but in this area the 2D group seemed to be more satisfied than the 3D group. Within the 3D group a number of people desired more information, or perceived the information as too technical to appreciate without additional enquiry or time – yet the information in both cases was identical.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.3 Learning=====
This theme included comments to do with people obtaining new information. Both groups’ comments here were very positive. All participants who commented in this group stated that they enjoyed the experience of learning and gaining the new knowledge. Most seemed to enjoy the topic and the new knowledge that they took away with them on bridges, and/or considered that the material was well thought out and presented. Some commented that they enjoyed the opportunity to obtain new knowledge in the virtual world/game space and were inspired to seek additional in-world learning.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.4 Facets of 3D Learning=====
The comments in this category were specific to the 3D lecture with the use of models. The participants in the 3D group were universally positive about the use of 3D models. Many seemed to believe that having a model of the presentation assisted them in the understanding of the subject matter. (Note, however, that the test scores did not reflect a significant advantage from the 3D models with respect to understanding, although there were indications of an advantage in remembering).
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.5 Instruction=====
The comments in this category had to do with the method by which the new knowledge was transferred to the participant. In this area a small but significant number of participants in both groups commented that they missed having a real person to whom they could put questions to clarify the information; this was more so in the 3D group, which seemed to want to find out more information about the topic than was presented. (Note, as mentioned, the information was identical in both cases.)
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.6 Focus=====
The comments in this category had to do with observations affecting attention and the temporal learning experience of a participant.
This theme emerged through the general comments throughout the survey. There seemed to be two broad sub-groups of comments in the focus theme: the presence of distractions during the learning experience, and the participant’s perception of the available time per slide for learning. Although both groups experienced the same general learning conditions and real-world times, there seemed to be opposing perceptions of the significance of sources of distraction, and of time, across the two groups during the presentation. We will break this category into these two sub-themes (distractions and time) to better understand the focus aspect of the participant groups.
'''Distractions'''
The sources of distractions seemed to come from either the outside world or the inside world.
:'''Inside world distractions'''
:Only 3 comments from the 2D group concerned inside-world distractions: distracting avatars, a participant’s outfit getting in the way of their view, and a participant distracted by their curiosity about the technology setup used to deliver and manage the lectures.
:In the 3D group, by contrast, quite a number of people complained about inside-world distractions, particularly being annoyed by other avatars disrupting their learning. As a group, the 3D participants were comparatively emotional/animated (with respect to the 2D group) in their response to these distractions, and in a number of cases complained that the other people were not taking education as seriously as they were.
:'''Outside world distractions'''
:A small number of the 2D group complained about outside-world distractions, or commented upon the advantages of staying in touch with the outside world: being able to answer the phone, using Yahoo messaging, doing things at their desk, and people in real life talking to them.
:In contrast, only one member of the 3D group commented upon outside-world distractions.
'''Time'''
The main theme that emerged from the 2D group was that a small number of participants commented that the presentation was a bit slow, and/or that their attention wandered, and/or that they “zoned out” during some slides. Contrast this with the 3D group, who tended to say that the presentation was fast; a reasonable number even complained that it went too fast. The 3D group commented that the material kept them engaged and the presentation held their attention. In both cases the real-world times were identical – so the observations are directly related to perception, and in the light of other comments made, the implication is that there was a difference in perceived ‘engagement’ that arose from the single variable of the presence of the 3D objects.
The 2D participants who observed that they occasionally ‘zoned out’ during some of the slides also commented that the voice-over was too smooth/calm. Nobody in the 3D group observed this problem; conversely, a number commented on how the voice-over was exactly right for the presentation and kept their attention throughout. Interestingly, the voice-over was identical in each case – but the presence of the 3D objects appearing around participants may have presented an additional level of stress that was properly countered by the voice-over.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.7 Navigation=====
Traditionally a significant problem in virtual-world training experiments, learning the appropriate method of avatar navigation has typically been compounded by the use of first-time virtual world participants unfamiliar with the control of their avatar. This researcher considered this a flaw in previous studies, one that distorted the results with a temporary experience that would be overcome with only a small amount of in-world experience. The participants in this study, therefore, were intentionally recruited from users already present in Second Life rather than brought into the virtual world specifically for the purpose of the experiment.
Consequently the negative comments on navigation were fewer than in previous studies, and not generally of the same fundamental ‘how do I operate my avatar?’ nature present in a number of the studies considered in the literature review. In any case the campus and lecture environment was specifically designed to minimise the likelihood of these types of problems, and required only minimal knowledge of avatar controls (sufficient for someone with about 30 minutes of experience, based on the packaged avatar training in the Second Life orientation islands).
The comments in this category had to do with how participants’ avatars viewed the presentation. These comments were complaints from the 2D and 3D participants about some viewing aspect of the presentation.
Three (3) of the 2D group complained that the chairs blocked their view of the presentation. It was obvious from this comment that these people lacked the knowledge to use mouse view, used third-person view instead, and did not understand how to control the third-person roaming camera effectively.
The 3D group’s complaints provided the most insight into how they viewed the presentation. A small, but significant, number of the participants complained that the 3D models of the bridges ‘got in the way’ of their reading of the slides (a function of navigation), or that they could not both read the slides and look at the models (a function of time). Although avatars were not seated once the 3D presentation began, and were free to wander around the space, with slides projected onto the walls around the models, some users clearly did not realise that this additional freedom allowed them to position their avatar for clear slide viewing at any time. Further, it seemed that, although presented with the 3D models and a voice-over that covered the entire slide content, a number of the 3D group still attempted to use the traditional method of viewing the slides whilst looking at the models.
Refer to Appendix M: Qualitative Analysis: A Sample of Participants Comments for a representative sample of comments from participants. Nonsense strings and repeated comments were excluded.
=====5.3.1.3.8 Technology Constraints=====
This category contained comments by participants about the technology constraints that they experienced during the lecture delivery. Although this question was also asked in the Likert questions provided in the previous section, where the 2D and 3D groups responded ‘Yes’ 9% and 7% respectively, more participants identified technical problems in their open comments.
From the 2D and 3D groups’ comments, 20% and 18% respectively identified at least one technology constraint. Not all of these participants had answered ‘Yes’ in the Likert question: a further 11% in both groups commented upon having technology-related problems despite not reporting them there. The technical difficulties were due to sound and lag/object rezzing, the same problems given by the participants in the Likert questions.
As discussed in the literature review, this technology is streamed in real time, therefore ‘lag’ is a common risk in using this technology and will vary with network connection speed (real lag) and individual computer problems (false lag – but possibly the single most common culprit). No one, however, commented that the lag affected their ability to learn. In most cases where it was reported, the lag caused only a slight delay in the slide show, with comments being that they experienced ‘some’ lag. As each slide, audio track and object was independently synched together, lag problems could not accumulate across the slides, and any synching problems were corrected with the next slide (or in some cases half way through a slide).
The sound constraints were only temporary in all cases. This problem was due to drop-outs of the presentation voice-over. The problem was picked up early in the testing phase, where occasionally the audio would stop and a re-log of the application was required in order to get the audio back. As this was picked up in testing, signs were placed around the lecture screens instructing the participant to re-log if they experienced audio drop-outs. In all cases where participants complained about the audio dropping, they also noted that a re-log solved their problem quickly. The impact of an immediate re-log on the learning would be, at most, the loss of half the content of a slide. As all slides were summarised at points during the presentation, the participant was unlikely to completely miss the associated material.
====5.3.1.4 Survey Instrument====
This category included comments that related to the pre or post survey instrument.
There were 6 participants across both groups who commented that the pictures in the diagrams of the post-quiz were too small. From their comments, they had trouble distinguishing some of the bridges in the pictures.
As the display size depends on a person’s monitor size, people with small monitors may have had problems distinguishing the details in the pictures. The survey viewed correctly on a 17 inch monitor at 96 dpi, but anyone with a smaller monitor than this, or unusual resolution settings, may (possibly) have had problems.
This problem was not realised until quite a number of participants had already completed the research. It was therefore decided that any change in the picture size in the survey would only corrupt the experiment conditions and may bias the results so no modification was made. Therefore all participants that undertook this research operated under the same picture constraints in the survey.
Of the 6 participants who complained, 3 were from the 2D group and 3 from the 3D group. Their post-quiz scores for ‘remember’ and ‘understand’ were 9, 7; 9, 4; 9, 4 for the 2D group and 8, 4; 8, 5; 8, 4 for the 3D group respectively. All of these participants passed both of Bloom’s cognitive process categories. Their z-scores for ‘remember’ were all above their group’s average, but for ‘understand’ these participants scored at or below the average.
There were 9 ‘remember’ questions and 8 ‘understand’ questions in the survey that required the participant to use pictures in answering the question. Bloom’s cognitive process of ‘understand’ would have been more affected by the picture constraints: the questions for the ‘understand’ cognitive process were substantially more difficult, involving material that was not presented during the lecture, therefore the participant had to use the picture to recognise and assimilate information in order to answer the question.
The researcher notes that this problem may have contributed to some of the low score results, especially within Bloom’s cognitive process of ‘understand’. Although from the comments only 6 out of 111 people complained about this problem, there is no way to know how much of a problem it presented; from the lack of comments we can only assume that this was not a constraint for most participants – or, at least, not one they realised they were experiencing.
===5.3.2 Qualitative Analysis of Thematic Results===
====5.3.2.1 Introduction====
The survey comment questions were not compulsory, yet fewer than 4% of responses were nonsense or non-responses. Participants wrote an average of 100 words each, with 3D participants providing approximately 12% more comment volume than the 2D participants.
Interpreting the collected thematic responses was aided by the consistency of the emotion and approval expressed by participants, by the surprising number of instant messages sent directly to the researcher in thanks for the experience, and by the range of supportive comments and recommendations provided in the open comments. To that end the researcher offers the following generalised collation of the qualitative opinions expressed by participants.
The general lack of negative observations reflects the same proportion in the underlying data. Three positive and three negative observations were requested, as well as open/general comments. Overwhelmingly, the positive question was populated, while the negative question was generally underpopulated or filled with comments like ‘I have none’. The most frequent negative comments were an expressed desire to control the delivery speed, to acquire additional information in some way, or concern at the opportunity for distraction; in some cases these were also identified as positives. The lack of colour in the negative comments was contrasted by the diversity of positive comments: different participants chose to comment on different positive aspects of the experience, and an individual participant tended to concentrate their comments within a theme.
To aid interpretation of the analysis while avoiding the implication of hard statistical interpretation (some degree of researcher subjectivity and ‘translation’ is involved), the researcher has used the following terms, with some degree of overlap at the margins:
*Few – 5% or less of comments
*A number – 5% to 15% of comments
*A significant number – 15% to 25% of comments
*Many – More than 25% of comments
*A majority – More than 50% of comments
*Most – More than 60% of comments
Outside of these terms, the researcher has provided absolute percentage counts where the numbers fall at the extremes.
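As a minimal sketch, the mapping above can be expressed as a classification function. Note this is an illustration only: the band names and thresholds come from the list above, but the author states the bands overlap at the margins, so resolving boundary ties upward here is an assumption.

```python
def frequency_term(pct: float) -> str:
    """Map a comment percentage to the thesis's qualitative frequency term.

    The bands overlap at the margins in the original scheme; here ties
    resolve to the stronger term, an assumption made for illustration.
    """
    if pct > 60:
        return "most"
    if pct > 50:
        return "a majority"
    if pct > 25:
        return "many"
    if pct >= 15:
        return "a significant number"
    if pct >= 5:
        return "a number"
    return "few"

print(frequency_term(3))   # few
print(frequency_term(30))  # many
```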
====5.3.2.2 The Virtual Learning Experience: Both Groups====
The two words participants most used to describe their experience were ‘fun’ and ‘interesting’. The frequency and strength of these positive comments, which came from over 60% of participants, surprised the researcher.
The virtual world seemed to offer participants a fun way to learn, with the convenience of learning online in their own time. Further, at least as the experimental campus and lecture rooms were constructed in this experiment, it offered participants a sense of presence that gave them the perception of an experience similar to learning in a real-world environment. Seeing others in the environment while attending a lecture as their avatar in a simulated theatre gave participants more of a connection to the learning process than one might expect from a purely HTML-page-based traditional distance education course. To the majority of participants the experience felt personal and the atmosphere relaxed, and many found it more pleasurable than the traditional method of attending a lecture class in the real world.
The environment seemed to promote a favourable attitude to learning. Not only did the majority of participants say it was “fun”, but a number commented that they felt inspired to learn more about the topic, wanted to ask further questions, or sought more details, and a significant number expressed surprise that, although they clearly had experience of the topic in real life, they had never really considered how exciting a bridge could be. Only one participant expressed an unfavourable attitude to this form of learning and/or the topic.
Based on the comments, the average participant was clearly immersed in this aspect of virtual learning, as reflected by many comments expressing varying degrees of ownership over the experience, and even, in some cases, resentment where others or extraneous circumstances had interfered with their learning.
To many this was a new experience in a virtual world, and although they initially saw the offer of ‘linden’ as an easy way to make fast money, by the end of their experience they thanked the researcher not for the money but for the learning experience. Some comments expressed surprise that the game they had known before was no longer ‘just’ a game to them. Participation had opened the possibility of a whole new world of learning, inside and outside of Second Life.
The virtual learning campus provided participants with a seamless way to learn. Many liked the staged approach reflected by the testing and learning process (necessary as part of the automated control regime for the experiment), finding it a novel approach to the learning experience. Going from room to room to complete each stage in the learning process possibly made this more fun than an alternative virtual world learning approach utilising a single classroom in which all stages might occur. Not knowing where the teleports would lead them in the next stage of their journey gave the environment an exploratory feel. Most participants found the environment very easy to use and welcoming.
The format and the information provided in the slide presentation received, for the most part, positive feedback. Requests for more control over the slide show (to pause, fast-forward and rewind) came from both groups. Enabling such user control was not an option in this experiment, as information delivery for both groups had to be placed under strict experimental conditions so that only one independent variable changed: the presence or absence of the 3D models.
Even so, had this or a similar lecture not been under experimental conditions, the researcher cannot help but wonder whether such an addition would have lessened the participant’s overall experience. Sharing in the learning process within a set time frame, and the pressure of the quiz after completion, may also have added to the positive experience felt by participants. Allowing participants to walk away with additional material may have given them the convenience to learn more than just the information presented, and a live lecturer, as some participants would have liked to have seen, may also have satisfied requirements for more controlled information.
Technology constraints certainly presented themselves in this experiment, with approximately 20% of participants from both groups commenting on technology issues to varying degrees. The major problems related to network latency (lag) and audio dropouts. In a streamed world such as Second Life, lag is a typical problem, especially when there are many avatars in a SIM. Audio, although not as bad or as frequent a problem as visual lag, is occasionally lost in Second Life, and the only way to fix the problem is to re-log the application. Judging from participants’ comments, neither problem seemed to affect their learning experience, and for only 7-9% did it warrant rating as having an impact. In the experience of this researcher, the majority of lag-class problems are in fact not network lag but recipient computer performance issues. The entire SIM and the various lecture rooms were monitored continually during the experiment: true (network) lag was not observed on the researcher’s computers, nor did the SIM performance statistics demonstrate any significant decrease in performance during the period.
Approximately 5% of participants from both groups complained that some of the pictures in the survey instrument were too small, potentially obscuring details of the bridges displayed. This could have constrained a participant’s ability to answer the Bloom’s cognitive process ‘understand’ questions more than the ‘remember’ questions, and may therefore have contributed to perceptions of difficulty in the ‘understand’ portion of the post-quiz.
====5.3.2.3 The Participants: Differences Between Groups====
Whilst the 3D participants were presented with 3D models to aid learning, a number still seemed to be reading the slide show presentation. This effectively provided the 3D participants with four channels of learning (slide show pictures, slide show text, audio and models), whereas the 2D participants had only three of these channels.
There were 24 slides, 20 of which were learning slides, delivered within a 20-minute lecture session for both groups. This meant a participant had approximately one minute per slide in which they were presented with something new. There were 11 3D models covering 4 bridge types, so a new model was presented approximately every 2 minutes. Combining the models with the slides in the same time frame as the 2D participants may have disadvantaged the 3D participants.
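The pacing arithmetic described above can be checked directly (the figures are taken from the text; nothing new is assumed):

```python
# Pacing of the lecture content, using the figures stated in the text.
lecture_minutes = 20
learning_slides = 20   # 24 slides in total, of which 20 were learning slides
models_3d = 11         # 11 3D models covering 4 bridge types (3D group only)

minutes_per_slide = lecture_minutes / learning_slides  # 1.0 minute per slide
minutes_per_model = lecture_minutes / models_3d        # ~1.8, i.e. roughly every 2 minutes

print(minutes_per_slide, round(minutes_per_model, 1))  # 1.0 1.8
```

The 3D group therefore had to absorb both streams concurrently in the same 20-minute window.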
The information content delivered to both groups was the same: no more or less technical, and providing nothing new with the exception of the 3D models for the 3D group. Yet from the 3D group’s comments, some participants seemed to want more information or simpler explanations. Within the 2D group, many commented that the material was easy to follow, not too technical and easy to comprehend; none commented that it was complex. Possibly the difference is not that the 3D group needed more information, but rather that with four information channels there was too much information in the time allocated. Alternatively, the difference might reflect a case of ‘not knowing what you don’t know’ in the 2D group, while the addition of accurately constructed 3D models raised additional questions in the minds of participants, or improved their general level of attentiveness.
The 3D group found the addition of 3D models to be a useful learning tool. From their comments, the 3D models of the bridges were perceived to have helped them understand the subject matter better than they believed they would have without the models. (Note, however, that in this case the perception is not supported by the test results.) Many participants perceived that the 3D models also made the entire lecture experience more engaging than whatever assumed alternative against which they were measuring the experience.
The focus of the 3D participants was more strongly inside the world than on their outside world. Furthermore, the extent to which distractions intruded on their in-world focus brought about a more emotional response than the distractions noted by the 2D participants: the former tended to use repetition, descriptive adjectives and emphatic declamations concerning distractions, while the latter tended merely to note, or even comment favourably on, the ability to be distracted. This seems to suggest that the 3D participants experienced a greater feeling of presence, and possibly immersion, in their virtual world learning experience.
To appreciate these comments, the reader is referred to the literature review where the difference between immersion and presence is discussed (see page 39). Immersion, or ‘system immersion’, is an objective measure: the extent to which a person becomes removed from their outside world to operate within the virtual world space. Presence, by contrast, is a subjective measure: the extent to which a person feels connected inside the virtual world, the feeling of ‘being there’, and their ‘willingness to suspend disbelief’ that they are a part of, and inside, the virtual world.
In the classification model presented by Benford (see Figure 9. Shared Space Technology According to Artificiality and Transportation), virtual reality environments are placed on a scale of artificiality and transportation. Transportation, the degree to which a participant becomes removed from their local space to operate in a remote space, is in Benford’s model based purely upon the physical aspects of the virtual environment.
In this study, the strong difference in the emotion and terms consistently used by participants in the 2D versus 3D lectures seemed to suggest that, given the same virtual reality technology (desktop CVE), a greater transportation occurred for the 3D participants. The 3D participants became removed from their local world distractions and were transported into the virtual remote world, which in turn led to a higher degree of presence within the virtual environment. The 2D comments of distraction compare with the results obtained by Martinez, Martinez, & Warkentin (2007), reviewed in Chapter Two Literature Review: they found that when participants were presented with a 2D lecture in world, participants reported distractions or a ‘disconnect’ from the lecture (see p. 86).
The degree of presence in the environment is often linked with desktop virtual worlds based around social interaction. As discussed in the literature review, Schroeder defines presence in terms of presence, copresence and connected presence (see Figure 10), which can be described respectively as ‘being there’, ‘being there together’ and ‘being connected together’. As also discussed in the literature review, the level of presence in a social virtual world is greater than in a game virtual world, due to the social connective aspects that occur within the virtual world. Heeter likewise holds that the presence of an individual is increased when social relationships are formed within the environment. In this study, however, both groups were given the same social interactive aspects, yet the introduction of 3D models seems to have produced a higher level of presence for the 3D participants, who clearly displayed more ‘ownership’ over their learning experience than the 2D group.
Of interest, this higher level of engagement by the 3D group carried over to the volume of survey responses. The 3D group provided more descriptive and richer comments than the 2D group: rather than the short dot points often used by 2D participants, the 3D participants tended to use sentences in their open comments. The researcher was left with the subjective impression that the 3D participants, as a group, brought greater detail and consideration to their comments than was typical of the 2D group. Although not specifically measured, it is possible that the 3D group were still engaged with the experience even after they had left the lecture environment.
A further noticeable difference between the two groups was their relative concept of time. The 2D group made more comments that the slide show was a bit slow, whereas the 3D group made more comments that the lecture was too fast (note that the actual timing and content were identical). This differing perception of time is most likely due to a combination of the extra channel of information delivered to the 3D participants (the 3D models), which had to be absorbed in the same time span as the 2D participants had, and the higher level of engagement the 3D participants expressed about their learning experience. One cannot rule out the effects of a possible unmeasured elevation of participant stress from the more “intense” learning experience vectored on the addition of the extra information channel.
==5.4 Discussion of Results==
This research sought to find the difference in learning outcomes between participants presented with two different delivery methods: a 2D slide show, and the same 2D slide show augmented with 3D models and simulations.
For the quantitative analysis, the level of learning outcomes was measured as the difference in achievement scores between the 2D group and the 3D group.
Did they learn more after being presented with a 2D slide show or a 3D simulation model? From the results of both groups there was a slight, not statistically significant, lean towards the 3D group on the total post-quiz scores. When analysed within each of Bloom’s cognitive processes of ‘remember’ and ‘understand’, the 3D group performed slightly better than the 2D group (most notably at the upper score ranges) in the ‘remember’ dimension, but there was no appreciable difference in the ‘understand’ dimension. The subjective interpretation might be that, with respect to the ‘remember’ outcome, the 3D approach may assist ‘stronger’ students to do better than they otherwise would under the 2D approach, but that there was little impact on the ‘average’ student. The study measured only the ‘instantaneous’ ‘remember’ outcome, not the ‘remember’ outcome over an extended period, which might reveal greater differences.
Regardless of any anecdotal differences that may have been found, and the foregoing comments, the statistical analysis of the post-quiz score across both groups revealed no statistically significant difference between the two groups’ learning outcomes within the confines of this experimental model. Thus the hypotheses defined for the quantitative analyses of this experiment remain unconfirmed.
Learning outcomes for a student are traditionally measured by achievement scores. Although an important measure, this provides no insight into the learning experience of the student: a high learning outcome by quantitative methods is not, of itself, a measure of success from a qualitative perspective. Quantitative methods focus on outcomes; qualitative methods focus upon the journey that leads the student to their end results.
While both the 2D and 3D groups were strongly positive about the learning experience, the qualitative analysis of both groups’ open comments revealed noticeable differences in the two groups’ journeys to their end results. The 3D group tended towards greater ‘ownership’ of their learning experience, and while the 2D group tended merely to observe the opportunity for distraction (in some cases as a benefit), the 3D group almost universally expressed resentment, or even anger, about the same distractions.
The experimental constraint of ‘same time’ may have adversely impacted the 3D group’s scored outcome, due to the delivery of an additional information channel over the same time frame, even though at least two of the channels were effectively redundant. As the two groups performed the same, and if anything the 3D group did slightly better, such a conclusion is by no means certain. The effect may rather have been to induce greater involvement by raising the stress factor for the 3D group, forcing greater participation in order to ‘keep up’ with the information flow.
The presence of the 3D models was widely perceived by the participants to enhance their understanding of the subject matter – although the scoring suggests that they assisted with remembering rather than understanding.
From the literature review of previous research it was found that virtual world learning does take longer than traditional methods (Arreguin, 2007; Joseph, 2007). In this experiment we provided both groups with a 20-minute lecture and a post-quiz of 20 questions. Although the 2D participants’ comments did not indicate a problem with the time allocated to the lecture, given the results of the post-quiz, particularly in Bloom’s ‘understand’ dimension, possibly both groups needed more time in which to understand the material, and particularly the 3D group, who were presented with an extra, interactively explorable channel of information by which to learn.
Of the Likert scale questions, questions 28 and 29 showed the most variation across participants. These questions were specific to a participant’s learning experience. Question 28 asked if they found the learning experience better than their usual methods of learning; the vast majority of both groups agreed.
When asked in the Likert scale whether the information provided was enough to understand the topic, the 2D group was slightly more satisfied than the 3D group. The open questions shed some light on this issue, with more 3D group participants expressing a desire for more time to assimilate what was provided, and for more opportunity for self-driven information collection, questioning and investigation, rather than merely more information per se. This difference might also reflect the greater level of participation, immersion, presence or transportation evidenced in the 3D group.
==5.5 Conclusion==
In answering the research question (“How effective is it to learn in a virtual world using a traditional 2D slide show method compared to that of a 3D interactive simulation?”), the conclusions from this research are clear, and not necessarily as the researcher expected at the commencement of the study:
#Transportation of a 2D real world lecture presentation into a virtual world setting is an acceptable use of virtual world technology, producing no statistically different outcome at the mean for Bloom’s ‘remember’, ‘understand’ and combined cognitive processes, although there are some indicators that the ‘remember’ outcome might be enhanced at the upper and lower deciles of participant ability through augmentation of the 2D presentation with 3D representation and simulation.
#Adoption of 3D visual aids is not a pre-requisite for successful learning in a virtual space.
#The presence of 3D visual aids assisted participants’ perceptions of enjoyment, engagement, presence, immersion and/or transportation, and may therefore have a longer term effect on participation rates where participation in learning is purely voluntary.
Projecting these conclusions into a practical teaching scenario, where outcomes are the same and only instantaneous outcome measures are considered (the researcher did not examine long-term outcomes), after taking account of the input costs of material preparation it is clearly more cost effective to use the 2D presentation strategy for delivering virtual world courses. This conclusion is sustained where cost is measured in terms of time required for input preparation regardless of sourcing (i.e. where the 3D models are acquired for no input hours and no financial cost, the cost measure would void the observation), and outcomes are measured in terms of test scores taken within a short period of the learning.
Where the outcome measure includes participant perception of the experience, the 3D augmented learning approach is indicated, but in this scenario, grading the relative ‘worth’ of the greater experiential outcome is more difficult and it is less clear how it can be factored absolutely into a cost benefit analysis.
==5.6 Opportunities for Further Research==
Experimental research, as the name suggests, applies scientific methods and analysis to gain new insights, so that other researchers can pick up from the experiment to reproduce, reform and critique it. In this section the researcher proposes some opportunities for further research based upon the analysis of the results of this study.
===5.6.1 Improving Instrument Reliability===
One limitation that is difficult to avoid emerged in the formal (statistical) testing of instrument reliability. Essentially, this experiment contained too few questions within each of the two Bloom’s cognitive process test sets to provide a conclusive reliability measure for the instrument. Increasing the number of questions within each group would certainly provide more data points against which to measure achievement results and, as a consequence of how the reliability statistic works, would improve measured instrument reliability. The first obvious problem with the pre-quiz and post-quiz design for this type of experiment is that, as the number of test questions (data points) increases, there is a point at which the testing might materially affect the training experience, and therefore the outcomes, as the participants would eventually start learning from the quiz questions themselves.
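The relationship between test length and reliability appealed to above can be illustrated with the standard Spearman-Brown prophecy formula. Note this is an illustration under an assumption: the thesis does not name the specific reliability statistic used here, though coefficients such as Cronbach’s alpha behave in this way, and the example figures are hypothetical.

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability after multiplying test length by length_factor,
    assuming the added items are parallel to the originals (Spearman-Brown
    prophecy formula)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Hypothetical example: a 10-question subscale with reliability 0.5,
# doubled to 20 parallel questions:
print(round(spearman_brown(0.5, 2), 3))  # 0.667
```

This shows why simply lengthening each Bloom’s test set would raise the measured reliability, independent of the participants’ behaviour.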
If the number of questions were increased, the range of information presented to the participant would also have to increase. Increasing the range of information provided would require additional time to be allocated to the lecture, and possibly to each topic therein. There is a point at which the length of time required to complete the lecture and the combined quiz/survey would affect the quality of the results, as the voluntary participants might judge the exercise was taking too much time and rush the final testing/survey stages.
===5.6.2 Course versus Lecture===
The experiment focussed on a single lecture. Measuring the affordances over a sequence of lectures, using a similar experimental model, would provide additional depth of analysis and would neutralise any initial ‘wow’ factor that might have influenced participation and attentiveness in this single-event experiment. It is possible that differences in outcomes between the two groups might be more apparent over a course than over a single lecture. There are other factors that might influence such an experimental design, such as motivation for attending the course in the first place.
===5.6.3 Introducing a Real and Robot Presenter to the Experience===
The 3D group displayed a higher level of presence in this research study, and the contributing factor in this observed difference between the two groups was, prima facie, the 3D models. The opportunity for further research lies in introducing a presenter (even an automated robot presenter) into the lecture experience, to see whether the increased level of presence displayed by the 3D group would occur for both groups given a live or virtually-live lecturer. As presence is generally shown to be increased by relationships with other people within a virtual world, the introduction of a lecturer may add further insight into why the 3D group displayed a higher level of presence when they differed only in the addition of 3D models.
===5.6.4 Testing Other Bloom’s Cognitive Processes===
The 3D group seemed to believe that the models contributed to their understanding of the subject matter. Testing the higher levels of Bloom’s cognitive processes, such as Apply, Analyse, Evaluate and Create, may reveal whether this perceived increase in understanding produces differences between the two groups at those higher levels.
===5.6.5 Outcome Measurement Over Time===
In this experiment the post-quiz was given directly after the lecture. Re-testing the participants over a number of periods would assess which group retained the information better for longer, and the extent to which the two approaches impacted understanding outcomes over time. The experiment would probably require a vastly greater number of initial participants, so that each time-lagged group could be tested once at a different interval, rather than re-tested, so that the testing itself did not colour the results. The researcher suspects that the greater level of post-lecture engagement demonstrated by 3D participants might result in both slower degradation of the ‘remember’ outcome and a post-lecture improvement in the ‘understand’ outcome over time.
===5.6.6 Comparison to Real-World Training===
Perhaps the most obvious inquiry presenting itself for further research is to include another experimental group. As the virtual world 2D lecture was effectively a real world lecture delivered in a virtual world, adding a real world participant group operating under the same constraints as the virtual world groups would provide an interesting control reference for virtual-to-real world comparison of outcomes. Providing the 2D presentation to real-life participants may give further insight into the differences of the virtual learning experience, in addition to providing a control group based around more traditional learning methods.
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
VirtualWorldLearningReferences
=Introduction=
The RiskWiki book and Thesis: "Real Learning in Virtual Worlds" by Dianne Bishop (2008) references an extensive list of works which is reproduced in its entirety here. The reference list also provides an outstanding bibliography about Virtual Worlds and the Virtual World Learning space. Students of these two areas are commended to explore the work of the authors listed below.
=References and Bibliography=
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
Anderson Research Group (n.d.). The Revised Bloom’s Taxonomy. Accessed: Jun, 2008 Retrieved from: www.andersonresearchgroup.com/reports/TPP2.ppt
Annetta, L. A., Murray, M. R., Laird, S. G., Bohr, S. C., & Park, J. C. (2006). Serious Games: Incorporating Video Games in the Classroom. EDUCAUSE Quarterly 29(3), Accessed: Jun, 2008 Retrieved from: http://connect.educause.edu/Library/EDUCAUSE+Quarterly/SeriousGamesIncorporating/39986
Arreguin, C. (2007). Reports from the Field: Second Life Community Convention 2007 Education Track Summary. Best Practices from the Second Life Community Convention Education Track 2007, Accessed: Jun, 2008 Retrieved from: http://www.holymeatballs.org/pdfs/VirtualWorldsforLearningRoadmap_012008.pdf
Axon, S. (2008). Massively's Visual History of MMORPGs, Part I. Massively, Accessed: Jun, 2008 Retrieved from: http://www.massively.com/2008/03/31/massivelys-visual-history-of-mmorpgs-part-i/
Bailenson, J. N., Yee, N., Blascovich, J., Beall, A. C., Lundblad, N., & Jin, M. (2007). The use of immersive virtual reality in the learning sciences: Digital transformations of teachers, students, and social context. The Journal of the Learning Sciences.
Bainbridge, W. S. (2007). The Scientific Research Potential of Virtual Worlds. Science, 317(5837), 472 - 476.
Bartle, R. (1990). Interactive Multi-User Computer Games. Accessed: Jun, 2008 Retrieved from: http://www.mud.co.uk/richard/imucg0.htm
Bartle, R. (2003). Designing Virtual Worlds. Indianapolis, USA: New Riders.
Beedle, J. B., & Wright, V. H. (2007). Perspectives from Multiplayer Video Gamers. In D. Gibson (Ed.), Games and Simulations in Online Learning: Research & Development Frameworks. Hershey PA, USA: Idea Group Inc.
Bell, L. (2006). Dobbit Do program at Second Life Library. Second Life Library 2.0, Retrieved from: http://secondlifelibrary.blogspot.com/2006/06/dobbit-do-program-at-second-life.html
Bellman, K., & Landauer, C. (2000). Playing In The Mud: Virtual Worlds Are Real Places. Applied Artificial Intelligence, 14(1), 93-123.
Benford, S., Greenhalgh, C., Reynard, G., Brown, C., & Koleva, B. (1998). Understanding and constructing shared spaces with mixed-reality boundaries. ACM Transactions on Computer-Human Interaction 5(3), 185-223 Accessed: Jun, 2008 Retrieved from: http://www.crg.cs.nott.ac.uk/research/publications/papers/TOCHI98.pdf
Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving Seamlessly between Reality and Virtuality. IEEE Computer Graphics and Applications, 21(3), 6-8.
Biocca, F., & Delaney, B. (1995). Immersive virtual reality technology. In Communication in the age of virtual reality (pp. 57-124): Lawrence Erlbaum Associates, Inc. Accessed: May, 2008 Retrieved from: http://www.mindlab.org/images/d/DOC713.pdf
Blizzard Entertainment Inc (2008). World of Warcraft Surpasses 11 million Subscribers Worldwide. Retrieved from: http://www.blizzard.com/us/press/081028.html
Bloom, B. S., Englehart, M. D., Furst, M., Hill, E. J., & Krathwohl, D. R. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook 1: Cognitive Domain. New York: David McKay Company, Inc.
Bowery, J. (2001). Spasim (1974) The First First-Person-Shooter 3D Multiplayer Networked Game. Accessed: April, 2008 Retrieved from: http://www.geocities.com/jim_bowery/spasim.html
Briggs, J. C. (1996). The Promise of Virtual Reality. The Futurist 30(5), Accessed: May, 2008 Retrieved from: http://project.cyberpunk.ru/idb/virtualreality_promise.html
Brookhaven National Laboratory (n.d.). The First Video Game. Accessed: Jun, 2008 Retrieved from: http://www.bnl.gov/bnlweb/history/higinbotham.asp; also see article http://gamersquarter.com/tennisfortwo/
Brooks, F. P., Jr. (1999). What's real about virtual reality? Computer Graphics and Applications, IEEE, 19(6), 16-27.
Brown, J. D. (1997). Skewness and Kurtosis. Shiken: JALT Testing & Evaluation SIG Newsletter 1(1), 1-20 Accessed: Jan, 2009 Retrieved from: http://jalt.org/test/bro_1.htm
Budge, L. D., Strini, R. A., Dehncke, R. W., & Hunt, J. A. (1998). Synthetic Theater of War (STOW) 97 Overview (98S-SIW-086). Paper presented at the Spring Simulation Interoperability Workshop, Orlando, FL. Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/index.php?tg=articles&idx=More&topics=46&article=199
Bulkley, K. (2007). Today Second Life, tomorrow the world. Interview: Philip Rosedale. The Guardian, Accessed: Jun, 2008 Retrieved from: http://www.guardian.co.uk/technology/2007/may/17/media.newmedia2
Burdea, G. C., & Coiffet, P. (2003). Virtual Reality Technology (2nd ed.): Wiley-IEEE Press.
Burns, R. B. (2000). Introduction to Research Methods (4th ed.). Frenchs Forest, NSW, Australia: Longman.
Bye, C. (2008). Legends of the Industry: An Interview with Randy Farmer and Chip Morningstar. March 25th, 2008, Accessed: Jun, 2008 Retrieved from: http://www.tentonhammer.com/node/29292
Carless, S. (2006). Australian Defence Force Licenses Virtual Battlespace. Serious Games Source April 18, Accessed: Jun, 2008 Retrieved from: http://www.seriousgamessource.com/item.php?story=8955
Carlson, W. (2003). Section 17: Virtual Reality and Artificial Environments. In A Critical History of Computer Graphics and Animation: The Ohio State University. Accessed: May, 2008 Retrieved from: http://design.osu.edu/carlson/history/lessons.html
Carroll, L. (1865). Alice's Adventures in Wonderland. London: Macmillan.
Carroll, L. (1871). Through the Looking-Glass. London: Macmillan.
Castronova, E. (2001). Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier. CESifo Working Paper Series No. 618, Accessed: May, 2008 Retrieved from: http://ssrn.com/paper=294828
Cavazza, F. (2007). Virtual Universes Landscape. Accessed: May, 2008 Retrieved from: http://www.fredcavazza.net/2007/10/04/virtual-universes-landscape/
Chesher, C. (1994). Colonizing Virtual Reality. Construction of the Discourse of Virtual Reality, 1984-1992. Cultronix (1), Retrieved from: http://cultronix.eserver.org/chesher/
Churches, A. (2008). Bloom's Taxonomy Blooms Digitally. Educators' eZine, Accessed: Jun, 2008 Retrieved from: http://www.techlearning.com/showArticle.php?articleID=196605124; or wiki http://edorigami.wikispaces.com/
Clark, R. E. (1983). Reconsidering Research on Learning from Media. Review of Educational Research, 53(4), 445-459.
Clark, R. E. (1994). Media Will Never Influence Learning. Educational Technology Research and Development, 42(2), 21-29.
Clark, S., & Maher, M. L. (2006). Collaborative Learning in A 3D Virtual Place: Investigating the Role of Place in a Virtual Learning Environment. Advanced Technology for Learning 3(4), Accessed: Jun, 2008 Retrieved from: http://web.arch.usyd.edu.au/~mary/Pubs/2006pdf/ATL_MLM_SC.pdf
Clarke, R. (2000). Robert Gagné's Nine Steps of Instruction. ISD - Development, Accessed: Jun, 2008 Retrieved from: http://www.nwlink.com/~donclark/hrd/learning/development.htm
Coleridge, S. T. (1817). Biographia Literaria (2nd ed.): Sara Coleridge.
Colley, S. (n.d.). Stories from the Maze War 30 Year Retrospective. Accessed: Jun, 2008 Retrieved from: http://www.digibarn.com/history/04-VCF7-MazeWar/stories/colley.html
Combs, N. (2004). A virtual world by any other name? Accessed: Apr, 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2004/06/a_virtual_world.html
CompuServe. (2007). MUD1. Accessed: Oct, 2007 Retrieved from: http://www.british-legends.com/
Computer History Museum. (n.d.). Spacewar! Accessed: Mar, 2008 Retrieved from: http://www.computerhistory.org/pdp-1/play_spacewar.html; Also see: http://www.wheels.org/spacewar/index.html
Corbit, M. (2002). Building Virtual Worlds for Informal Science Learning (SciCentr and SciFair) in the Active Worlds Educational Universe (AWEDU). Presence: Teleoperators & Virtual Environments, 11(1), 55-67.
Corry, M. (1996). Gagne's Theory of Instruction. Dr. Donald Cunningham Spring, 540 Accessed: Jun, 2008 Retrieved from: http://home.gwu.edu/~mcorry/corry1.htm
Cosby, L. N. (1999). SIMNET: An Insider's Perspective. SISO News 2(1g), Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/webletter/siso/Iss_39/art_202.htm
Dabbagh, N. (2006). The Instructional Design Knowledge Base. Instructional Technology Program, Accessed: Jun, 2008 Retrieved from: http://classweb.gmu.edu/ndabbagh/Resources/IDKB/models_theories.htm
Dalgarno, B. J. (2004). Characteristics of 3D Environments and Potential Contributions to Spatial Learning. University of Wollongong
Damer, B. (2004). First experimental post for March guest column. Accessed: Jun, 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2007/03/march_topics.html
Damer, B. (2007). Meeting in the ether. Interactions, 14(5), 16-18.
Dave, R. H. (1967). Psychomotor Domain. Paper presented at the International Conference of Educational Testing.
Dave, R. H. (1970). Psychomotor levels. In R. J. Armstrong (Ed.), Developing and Writing Behavioural Objectives. Tucson, AZ: Educational Innovators Press.
Dede, C. (1995). The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds. Educational Technology, Research and Development, 35(5), 46-52.
Dede, C. (2004). Enabling Distributed Learning Communities Via Emerging Technologies -- Part Two. T H E Journal, 32(3), 16-26.
Denzin, N. (1978). Sociological Methods: A Sourcebook.
Department of the Army (2008). America's Army: The Making Of. Accessed: Jun, 2008 Retrieved from: http://www.americasarmy.com/intel/makingof.php
Deuchar, S., & Nodder, C. (2003). The Impact of Avatars and 3D Virtual World Creation on Learning. Paper presented at the Proceedings of the 16th Annual NACCQ, Palmerston North New Zealand. Retrieved from: www.naccq.ac.nz
Dickey, M. D. (1999). 3D Virtual Worlds and Learning: An analysis of the impact of design affordances and limitations in Active Worlds, Blaxxun Interactive, and Onlive! Traveler; and a study of the implementation of Active Worlds for formal and informal education. Dissertation; The Ohio State University, from http://mchel.com/Research.htm
Dickey, M. D. (2003). Teaching in 3D: Pedagogical Affordances and Constraints of 3D Virtual Worlds for Synchronous Distance Learning. Distance Education, 24(1), 105-122.
Dickey, M. D. (2005). Three-dimensional virtual worlds and distance learning: Two case studies of Active Worlds as a medium for distance education. British Journal of Educational Technology, 36(2), 439.
DONCIO, OPNAV N79, CNET, Naval Postgraduate School, Marine Corps Training and Education Command, & Marine Corps Distance Learning Center (2008). Learning in a Virtual World, Accessed: Jun, 2008 Retrieved from: http://wiki.nasa.gov/cm/wiki/?id=2731
Edutech Wiki. (2009). The Media Debate. Accessed: Jan, 2009 Retrieved from: http://edutechwiki.unige.ch/en/The_media_debate
Electronic Arts (2007). Ultima Online: Kingdom Reborn FAQ. Accessed: May, 2008 Retrieved from: http://www.uo.com/uokr/UOKR/uokr_faq.shtml
Farmer, F. R. (1992). Social Dimensions of Habitat's Citizenry. In C. E. Loeffler & T. Anderson (Eds.), The Virtual Reality Casebook. New York: Van Nostrand Reinhold
Fielding, N. G., & Fielding, J. L. (1986). Linking Data: Qualitative and Quantitative Methods in Social Research.
Fife-Schaw, C. (2007). How do I test the normality of a variable’s distribution? Accessed: Jan, 2009 Retrieved from: http://www.psy.surrey.ac.uk/cfs/p8.htm
Foley, P., & Gifford, T. (2002). An Introduction to SEDRIS. Paper presented at the SEDRIS Technology Conference. Retrieved from: http://www.sedris.org/stc/2002/tu/intro/sld001.htm
Frary, R. B. (2008). Testing Memo 8: Reliability of Test Scores. Virginia Polytechnic Institute and State University (Jan, 2009), Retrieved from: http://www.testscoring.vt.edu/memo08.html
Friedl, M. (2002). Chapter One: Learning and Inspiration. In Online Game Interactivity Theory. Hingham, Massachusetts: Charles River Media, Inc.
Gabrisch, C., & Burgess, G. (2005). The COA-Sim JSAF Environment in Support of Joint Military Training and Exercises. Paper presented at the SimTecT. Accessed: Jun, 2008 Retrieved from: http://www.siaa.asn.au/library_simtect_2005.html
Gagne, R. M. (1985). The Conditions of Learning and the Theory of Instruction (4 ed.). New York: Holt, Rinehart, and Winston.
Garson, G. D. (2000). The role of information technology in quality education. In Social dimensions of information technology: issues for the new millennium (pp. 177-197): IGI Publishing
Gartner (2007). Media Relations. Accessed: Oct, 2007 Retrieved from: http://www.gartner.com/it/page.jsp?id=503861
Gehorsam, R. (2003). The coming revolution in massively multiuser persistent worlds. Computer, 36(4), 93-95.
Gibson, W. (1984). Neuromancer. Canada: Ace Books.
Gikas, J., & Van Eck, R. (2004). Integrating video games in the classroom: Where to begin? Paper presented at the National Learning Infrastructure Initiative 2004 Annual Meeting, San Diego, CA. Accessed: May, 2008 Retrieved from: http://www.educause.edu/ir/library/pdf/NLI0431a.pdf
Goldberg, M. (2002). The History of Computer Gaming Part 5 - PLATO Ain't Just Greek. Classic Gaming, Accessed: April, 2008 Retrieved from: http://classicgaming.gamespy.com/View.php?view=Articles.Detail&id=324
Gonzalez, D. (2007). Second Life for Digital Entertainment Technology Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Graphpad (2009). How useful are normality tests? Accessed: Jan, 2009 Retrieved from: http://www.graphpad.com/library/BiostatsSpecial/article_197.htm
Grau, O. (1999). Into the Belly of the Image: Historical Aspects of Virtual Reality. Leonardo, 32(5), 365-371.
Grøstad, O. F. (2007). Define: virtual world. Accessed: Apr, 2008 Retrieved from: http://worldtheory.blogspot.com/2007/06/define-virtual-world.html
Hardy, D. R., Allen, E. C., Adams, K. P., Peters, C. B., Peterson, L. J., Cannon, M. A., et al. (2001). Advanced Distributed Simulation: Decade in Review and Future Challenges. Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=A434191&Location=U2&doc=GetTRDoc.pdf
Harrow, A. J. (1972). A taxonomy of the psychomotor domain. New York: David McKay Company, Inc.
Harvard's Berkman Center for Internet and Society. (2007). Cyber One: Law in the Court of Public Opinion. Accessed: Oct, 2007 Retrieved from: http://sleducation.wikispaces.com/educationaluses#distance
Heeter, C. (1992). Being there: The subjective experience of presence. Presence: Teleoperators & Virtual Environments, 1(2), 262-271.
Heeter, C. (2003). Reflections on Real Presence. Presence: Teleoperators & Virtual Environments 12(4), 335-345 Accessed: Jun, 2008 Retrieved from: http://commtechlab.msu.edu/publications/files/presence2003.pdf
Heilig, M. (1955). The Cinema of the Future, reprinted. In R. Packer & K. Jordan (Eds.), Multimedia: From Wagner to Virtual Reality (expanded edition), 2002 (pp. 239-251). New York/London: W. W. Norton and Company
Holmberg, J. (2003). Ideals of Immersion in Early Cinema. Cinémas 14(1), 129-147 Retrieved from: http://www.erudit.org/revue/cine/2003/v14/n1/008961ar.pdf
Howard, R. E. (1932). The Phoenix on the Sword. In Weird Tales (Vol. December). Chicago: Popular Fiction Publishing Co
Hu, S.-Y., & Liao, G.-M. (2004). Network and System Support for Games: Scalable Peer-to-Peer Networked Virtual Environment. Paper presented at the 3rd ACM SIGCOMM workshop on Network and system support for games, Portland, Oregon, USA Accessed: Jun, 2008 Retrieved from: http://www.phys.sinica.edu.tw/~statphys/publications/2004_full_text/S_Y_Hu_Proc_ACM_SIGCOMM_2004_on_NetGame_p129-133(2004).pdf
Jacoby, J., & Matell, M. S. (1971). Three-Point Likert Scales Are Good Enough. Journal of Marketing Research, 8(4), 495-500.
Jamison, J. (2007). Two Years of Introducing Educators to Second Life in 60 Minutes, or: Tips for Dinosaur Wrangling. Paper presented at the Second Life Best Practices in Education: Teaching, Learning, and Research 2007 International Conference
Jennings, S. (2007). Virtually a World. Accessed: Apr, 2008 Retrieved from: http://brokentoys.org/2007/06/15/virtually-a-world/
Jones, G., & Hicks, J. (2004). 3D Online Learning Environments for Emergency Preparedness and Homeland Security Training. Paper presented at the World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education, Washington, D.C. Retrieved from: http://courseweb.unt.edu/gjones/pdf/Jones_elearn04.pdf
Joseph, B. (2007). Global Kids, Inc.’s Best Practices in Using Virtual Worlds For Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Kearsley, G. (2008). Conditions of Learning (R. Gagne). Explorations in Learning & Instruction: The Theory Into Practice Database Accessed: Jun, 2008 Retrieved from: http://tip.psychology.org/gagne.html
Keegan, M. (1997). A Classification of MUDs. The Journal of Virtual Environments 2(2), Accessed: Mar, 2008 Retrieved from: http://www.brandeis.edu/pubs/jove/HTML/v2/keegan.html
Kelly, K. (1995). Singular Visionary. Wired June(3.06), Retrieved from: http://www.wired.com/wired/archive/3.06/vinge.html
King, B. (2003). Educators Turn to Games for Help. Wired, Accessed: Jun, 2008 Retrieved from: http://www.wired.com/gaming/gamingreviews/news/2003/08/59855
Kingdom of Drakkar. (1992-Current). Further Reading. Accessed: May, 2008 Retrieved from: Official: http://www.kingdomofdrakkar.com/; Historical: http://www.kingdomofdrakkar.com/forums/viewtopic.php?f=38&t=6197
Kish, S. (2007). Second Life: Virtual Worlds and the Enterprise. Accessed: May, 2008 Retrieved from: http://www.susankish.com/susan_kish/vw_secondlife.html
Klein, H. K., & Myers, M. D. (1999). A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly, 23(1), 67-93.
Klich, R. (2007). Multimedia Theatre in the Virtual Age. University of New South Wales, Sydney, from http://www.library.unsw.edu.au/~thesis/adt-NUN/uploads/approved/adt-NUN20080304.114128/public/02whole.pdf
Kofi, B. A., Svihla, V., Gawel, D., & Bransford, D. J. (2007). Learning about Adaptive Expertise in a Multi-User Virtual Environment Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Koster, R. (2002). Online World Timeline. Accessed: Jun, 2008 Retrieved from: http://www.raphkoster.com/gaming/mudtimeline.shtml
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.
Krathwohl, D. R. (2002). A Revision of Bloom's Taxonomy: An Overview - Benjamin S. Bloom, University of Chicago. Theory Into Practice 41(4), 212-218 Accessed: Jun, 2008 Retrieved from: http://findarticles.com/p/articles/mi_m0NQM/is_4_41/ai_94872707
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of Educational Objectives. The Classification of Educational Goals, Handbook II: Affective Domain. New York: David McKay Company, Inc.
Kribble, M. (2007). Getting a Second Life: Virtual Harvard. Law Library E-Newsletter January, Accessed: 20/10/2007 Retrieved from: http://www.nsulaw.nova.edu/library_tech/library/publications/bookdocket/2007/Jan2007.pdf
Squire, K., Barnett, M., Grant, J. M., & Higginbotham, T. (2004). Electromagnetism supercharged! Learning Physics with Digital Simulation Games, Proceedings of the 6th international conference on Learning sciences (pp. 513-520). Santa Monica, California: International Society of the Learning Sciences.
KZERO Research (2007). There.com vs Second Life: demographics. Accessed: Jun, 2008 Retrieved from: http://www.kzero.co.uk/blog/?p=961
Lang, T., Maclntyre, B., & Zugaza, I. J. (2008). Massively Multiplayer Online Worlds as a Platform for Augmented Reality Experiences. Paper presented at the Virtual Reality Conference, 2008. VR '08. IEEE.
Laurel, B. (1991). Computers as theatre. New York: Addison-Wesley.
Lee, A. (1991). Integrating Positivist And Interpretive Approaches To Organizational Research. Organization Science, 2(4), 342.
Lee, S.-Y., Kim, I.-J., Ahn, S. C., Lim, M.-T., & Kim, H.-G. (2005). Intelligent 3D Video Avatar for Immersive Telecommunication. In S. Zhang & R. Jarvis (Eds.), AI 2005 (pp. 726-735). Berlin Heidelberg: Springer-Verlag. Accessed: Jun, 2008 Retrieved from: http://www.imrc.kist.re.kr/~kij/LNCS_2005.pdf
Lenke, J. M., Wellens, B., & Oswald, J. (1977). Differences Between Kuder-Richardson Formula 20 and Formula 21 Reliability Coefficients for Short Tests with Different Item Variabilities. Paper presented at the Annual Meeting of the American Educational Research Association, New York, USA. Accessed: Jan, 2009 Retrieved from: http://eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED141411&ERICExtSearch_SearchType_0=no&accno=ED141411
Lenoir, T. (2003). Programming Theatres of War: Gamemakers as Soldiers. In R. Latham (Ed.), Bombs and Bandwidth: The Emerging Relationship Between IT and Security (pp. 175-198). New York: The New Press. Accessed: Jun, 2008 Retrieved from: http://www.stanford.edu/dept/HPS/TimLenoir/Publications/Lenoir_TheatresOfWar.pdf
Leonard, B. (Director), S. King, B. Leonard & G. Everett (Writer), G. Everett (Producer), (1992). The Lawnmower Man [Motion Picture]: New Line Cinema.
Levine, A. (2007). Avatars and Appearance: What’s your ‘dress code’? NMC Teachers Buzz. NMC Campus Observer, Retrieved from: http://sl.nmc.org/2007/07/19/dress-code/
Lewis, D. (2001). Objectivism vs. Constructivism: The Origins of this Debate and the Implications for Instructional Designers. EME 6613 Development of Technology-Based Instruction, Accessed: Jun, 2008 Retrieved from: http://www.coedu.usf.edu/agents/dlewis/publications/Objectivism_vs_Constructivism.htm
Linden, C., & Linden, P. (2008). Discussion on Education in Second Life, What’s Going On and How To Get Involved. “Inside the Lab” Podcast, a Discussion on Education in Second Life", Accessed: Jun, 2008 Retrieved from: http://blog.secondlife.com/2008/06/02/inside-the-lab-podcast-a-discussion-on-education-in-second-life/
Linden Lab (2008a). Economic Statistics: Graphs. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/whatis/economy-graphs.php
Linden Lab (2008b). Second Life: Economic Statistics. Accessed: Dec, 2008 Retrieved from: http://secondlife.com/whatis/economy_stats.php
Linden Lab (2008c). Second Life: System Requirements. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/support/sysreqs.php
Lisberger, S. (Director), S. Lisberger & B. MacBird (Writer), D. Kushner (Producer), (1982). Tron [Motion Picture]: Buena Vista Pictures.
Lord of the Rings Online (2007). Online Virtual World. Accessed: Jun, 2008 Retrieved from: http://www.lotro.com/
Lowood, H. E. (2008). Virtual Reality. Encyclopaedia Britannica Online, Accessed: Jun, 2008 Retrieved from: http://search.eb.com/eb/article-9001382
Macmillan, I. (Director), (Writer), (Producer), (2008). The Worlds of Fantasy: The Epic Imagination. England: Blast! Films Production for BBC.
Mania, K., & Chalmers, A. (2001). The Effects of Levels of Immersion on Memory and Presence in Virtual Environments: A Reality Centered Approach. CyberPsychology & Behavior, 4(2), 247-264.
Markowitz, M. (2000). Spacewar: The first computer video game. Really! , Retrieved from: http://www3.sympatico.ca/maury/games/space/spacewar.html
Martinez, L. M., Martinez, P., & Warkentin, G. (2007). A First Experience on Implementing a Lecture on Second Life Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Mazuryk, T., & Gervautz, M. (1996). Virtual Reality History, Applications, Technology and Future. Retrieved from: http://www.cg.tuwien.ac.at/research/publications/1996/mazuryk-1996-VRH
McLellan, H. (2004). Virtual realities. In D. H. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (2nd ed., pp. 745-784). Mahwah, NJ: Lawrence Erlbaum Associates. Accessed: Jun, 2008 Retrieved from: http://www.aect.org/edtech/17.pdf
Mergel, B. (1998). Instructional Design & Learning Theory. Accessed: Jun, 2008 Retrieved from: http://www.usask.ca/education/coursework/802papers/mergel/mergel.pdf
Meridian 59. (1996-2000 & 2002-Current). Further Reading. Retrieved from: Official Site: http://meridian59.neardeathstudios.com/; General: http://en.wikipedia.org/wiki/Meridian_59; http://www.massively.com/photos/massivelys-visual-history-of-mmorpgs-part-i/727035/
MetaMersion. You're in the Game. Retrieved from: http://www.metamersion.com/index.html
Milgram, P., & Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. E77-D(12), Accessed: May, 2008 Retrieved from: http://vered.rose.utoronto.ca/people/paul_dir/IEICE94/ieice.html
Miller, D. C., & Thorpe, J. A. (1995). SIMNET: The Advent Of Simulator Networking. Proceedings of the IEEE, 83(8), 1114-1123.
Monash University. (2008). Preparing Educational Objectives. Accessed: Jun, 2008 Retrieved from: http://www.calt.monash.edu.au/staff-teaching/support/objectives.html
Moriarty, D. (2008). StatCat (version 3.6). Accessed: Jan, 2009 Retrieved from: http://www.csupomona.edu/~djmoriarty/b211/index.html#statcat
Morningstar, C., & Farmer, R. (1990). The Lessons of Lucasfilm's Habitat. Paper presented at the The First International Conference on Cyberspace, University of Texas at Austin. Accessed: Oct, 2007 Retrieved from: http://www.fudco.com/chip/lessons.html
Mulligan, J. (2000). History of Online Games Part III. Imaginary Realities (April), Retrieved from: http://www.tharsis-gate.org/articles/imaginary/HISTOR~1.HTM
Mulligan, J. (2002). Talkin’ ‘bout My… Generation. Biting The Hand 17(22-JAN), Accessed: Jun, 2008 Retrieved from: http://www.skotos.net/articles/BTH_17.shtml
Nash, S. S. (2007). Behaviorism vs. Constructivism as Applied to Online Learning. XplanaZine, Accessed: Jun, 2008 Retrieved from: http://www.xplanazine.com/2007/09/behaviorism-vs-constructivism-as-applied-to-online-learning
Neuman, W. L. (2006). Social research methods (6th ed.). Boston: Pearson Education, Inc.
NeverWinter Nights (AOL). (1991-1997). Further Reading. Accessed: May, 2008 Retrieved from: http://en.wikipedia.org/wiki/Neverwinter_Nights_(AOL_game); http://www.bladekeep.com/nwn/index2.htm,
NIST (2006). e-Handbook of Statistical Methods: Levene Test for Equality of Variances. Accessed: Jan, 2009 Retrieved from: http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm
O'Donnell, D. (2003). The Annotated NetHack File. Retrieved from: http://www.spod-central.org/~psmith/nh/anhftime.html
Olsen, W. (2004). Triangulation in Social Research: Qualitative and Quantitative Methods Can Really Be Mixed. In Developments in Sociology: Causeway Press Retrieved from: http://www.ccsr.ac.uk/staff/triangulation.pdf
Onwuegbuzie, A. J. (2002). Why can't we all get along? Towards a framework for unifying research paradigms. Education, 122(3), 518-530.
Orlikowski, W. J., & Baroudi, J. J. (1991). Studying Information Technology in Organizations: Research Approaches and Assumptions. Information Systems Research, 2(1), 1-28.
Oxford Dictionary (Ed.) (1989) (2nd ed.). Oxford University Press.
Oxford Dictionary (Ed.) (1997) (Vols. 3). Oxford University Press.
Packer, R., & Jordan, K. (2002). Multimedia: From Wagner to Virtual Reality (Expanded ed.). New York: W. W. Norton and Company.
Patel, K., Bailenson, J., Jung, S., Diankov, R., & Bajcsy, R. (2006). The effects of fully immersive virtual reality on the learning of physical tasks. Paper presented at the International Workshop on Presence, Cleveland, Ohio, USA. Accessed: Jun, 2008 Retrieved from: http://www.cs.washington.edu/homes/kayur/papers/ispr06.pdf
Pearson, J. L. (2002). Shamanism and the Ancient Mind: A Cognitive Approach to Archaeology: AltaMira Press.
Pellett, D. (n.d.). Open letter to "Classic Gaming .com": Re: your web page, titled "The History of Computer Gaming". The Game of Dungeons (dnd): Gary Whisenhunt, Ray Wood, Dirk Pellett, and Flint Pellett's DND, Retrieved from: http://www.armory.com/~dlp/dnd1.html
Petrich, L. (n.d.). Real-Time-3D Game-Engine Taxonomy. Accessed: Jun, 2008 Retrieved from: http://homepage.mac.com/lpetrich/www/games/GET.html
Pimentel, K. K., & Teixeira, K. K. (1994). Virtual reality. New York, USA: Windcrest Books.
Purbrick, J., & Greenhalgh, C. (2002). An extensible event-based infrastructure for networked virtual worlds. Paper presented at the Virtual Reality, 2002. Proceedings. IEEE.
Ray, J. (2008). Backwards Compatible - How We Got Connected. ABC Good Games Stories, Accessed: April 2008 Retrieved from: http://www.abc.net.au/tv/goodgame/stories/s2171457.htm
Reynolds, R. (2008). VW Taxonomy Q1 ‘08. Accessed: Apr, 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2008/03/vw-taxonomy-q1.html
Rheingold, H. (1992). Virtual reality. London: Mandarin Paperback.
Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. New York: Harper Collins.
Richardson, H. (2005). Postmodernism: A Hobbit’s View of Information Systems Research Methodology. Paper presented at the 4th International Critical Management Studies Conference, University of Cambridge, Cambridge, UK. Accessed: Sep, 2007 Retrieved from: http://www.mngt.waikato.ac.nz/ejrot/cmsconference/2005/
Robson, S. (2008). US Army to Invest $50M in Combat Training Games. Stars and Stripes (Nov 2008), Retrieved from: http://www.stripes.com/article.asp?section=104&article=59009
Rolland, J., & Hua, H. (2005). Head-Mounted Display Systems. Encyclopedia of Optical Engineering, 1 - 14.
Rosenblum, L. J. (1995). Alice: rapid prototyping for virtual reality. Computer Graphics and Applications, IEEE, 15(3), 8-11.
Russell, T. L. (2001). No Significant Difference Phenomenon (5 ed.). North Carolina State University: IDECC.
Schmidt, M., Kinzer, C., & Greenbaum, I. (2007). Exploring Virtual Education: First Hand Account of 3 Second Life Classes Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Schroeder, R. (1997). Networked Worlds: Social Aspects of Multi-User Virtual Reality Technology. Sociological Research Online, 2(4).
Schroeder, R. (2006). Being There Together and the Future of Connected Presence. Presence: Teleoperators & Virtual Environments, 15(4), 438-454.
Schuemie, M. J., Straaten, P. V. D., Krijn, M., & Mast, C. A. P. G. V. D. (2001). Research on Presence in Virtual Reality: A Survey. CyberPsychology & Behavior 4(2), Accessed: May, 2008 Retrieved from: http://graphics.tudelft.nl/~vrphobia/surveypub.pdf
Shadow of Yserbius. (1992-1996). Further Reading. Accessed: May, 2008 Retrieved from: http://www.syntax2000.co.uk/issues/; http://www.oldgames.nu/PC/Shadow_of_Yserbius/2085/; http://en.wikipedia.org/wiki/The_Shadow_of_Yserbius;
Sheridan, T. B. (1992). Musings on telepresence and virtual presence. Presence: Teleoperators & Virtual Environments, 1(1), 120-126.
Sheth, R. (2003). Avatar Technology: Giving a Face to the e-Learning Interface. [The eLearning Guild]. The eLearning Developers' Journal, August.
Siegle, D. (2008). The Principles and Methods of Educational Research. Accessed: Jan, 2009 Retrieved from: http://www.gifted.uconn.edu/Siegle/research/Instrument%20Reliability%20and%20Validity/Reliability.htm
Simpson, E. J. (1972). The classification of educational objectives in the psychomotor domain. The Psychomotor Domain (Vol. 3). Washington, DC: Gryphon House.
SimTeach. (2008). Universities, Colleges & Schools in Second Life. Accessed: Jun, 2008 Retrieved from: http://www.simteach.com/wiki/index.php?title=Institutions_and_Organizations_in_SL
Slater III, W. F. (2002). Internet History and Growth. Chicago Chapter of the Internet Society, Accessed: April, 2008 Retrieved from: http://www.isoc.org/internet/history/
Slater, M. (1999). Measuring Presence: A Response to the Witmer and Singer Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 8(5), 560-565.
Slater, M., & Usoh, M. (1993). Presence in immersive virtual environments. Paper presented at the Virtual Reality Annual International Symposium, 1993., 1993 IEEE.
Slater, M., & Usoh, M. (1994). Representation Systems, Perceptual Position and Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 2(3), 221–233.
Slater, M., & Wilbur, S. (1997). A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 6(6).
Slator, B. M., Borchert, O., Brandt, L., Chaput, H., Erickson, K., Groesbeck, G., et al. (2007). From Dungeons to Classrooms: The Evolution of MUDs as Learning Environments. In The Evolution of Teaching and Learning Paradigms (pp. 119-160): Springer-Verlag
Small, D., & Small, S. (1984). PLATO RISING. Online learning for Atarians 3(3), 36-87 Retrieved from: http://www.atarimagazines.com/v3n3/platorising.html
Smith, A. (1999). COLLABORATION: A Global Survey of Institutions and Programs in Virtual World Cyberspace. Retrieved from: http://www.ccon.org/vlearn/collab.htm
STATGRAPHICS Centurion (2009). Analysis Software. Retrieved from: http://www.statgraphics.com/
Stephenson, N. (1992). Snow Crash. New York: Bantam Spectra Book.
Steuer, J. (1992). Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communications, 42(4), 73-93.
Sun Microsystems (2008). Current Reality and Future Vision Open Virtual Worlds (White Paper). January, Accessed: 13 March 2008 Retrieved from: http://www.sun.com/service/applicationserversubscriptions/OpenVirtualWorld.pdf
Sutherland, I. (1965). The Ultimate Display. Paper presented at the International Federation of Information Processing. Retrieved from: http://www.cs.utah.edu/classes/cs6360/Readings/UltimateDisplay.pdf
Sutherland, I. (1968). A Head-Mounted Three-Dimensional Display. Paper presented at the Proceedings of the AFIPS Fall Joint Computer Conference, Washington, D.C.
Terdiman, D. (2007). Tech titans seek virtual-world interoperability. CNET News.com, Accessed: Jun, 2008 Retrieved from: http://news.cnet.com/Tech-titans-seek-virtual-world-interoperability/2100-1043_3-6213148.html
The New Media Consortium, & EDUCAUSE (2007). The Horizon Report. Accessed: Nov, 2007 Retrieved from: http://www.nmc.org/pdf/2007_Horizon_Report.pdf
Tiernan, T. R. (1996). Synthetic Theater of War (STOW) Engineering Demonstration-1A (ED-1A) Analysis Report (ADA315093). Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA315093
Tolkien, J. R. R. (1937). The Hobbit. United Kingdom: Allen and Unwin.
Tolkien, J. R. R. (1954, 1955). The Lord of the Rings. United Kingdom: Allen and Unwin.
Ultima Online. (1997-Current). Further Reading. Accessed: Jun, 2008 Retrieved from: http://www.uoherald.com/news/;
Unger, J. M. (1979). Kanamajiri Editing and the Plato Computer-Based Education System. The Journal of the Association of Teachers of Japanese, 14(2), 141-156.
University of Washington (2008). Instructional Design Approaches. Accessed: Jun, 2008 Retrieved from: http://depts.washington.edu/eproject/Instructional%20Design%20Approaches.htm
US Joint Forces Command. (2008). Joint Semi-Automated Forces (JSAF). Accessed: Jun, 2008 Retrieved from: http://www.jfcom.mil/about/fact_jsaf.html
Van Dam, A., Forsberg, A. S., Laidlaw, D. H., LaViola, J. J. J., & Simpson, R. M. (2000). Immersive VR for scientific visualization: a progress report. Computer Graphics and Applications, IEEE, 20(6), 26-52.
VCampus Corporation. (2008). cyber1.org. Accessed: April, 2008 Retrieved from: http://www.cyber1.org/
Vinge, V. (1981). True Names. Binary Star Number 5: Dell. Reprinted in True Names and Other Dangers, Vernor Vinge, Baen Books, 1987.
Vivekananda Centre (2008). Hinduism for Schools. Retrieved from: http://www.vivekananda.btinternet.co.uk/secondaryschoolspage1.htm
Wachowski, A., & Wachowski, L. (Director), (Writer), J. Silver (Producer), (1999). The Matrix [Motion Picture]: Warner Bros, Village Roadshow Pictures.
Wagner, R. (1849). The Artwork of the Future (Das Kunstwerk der Zukunft), Accessed: April, 2008 Retrieved from: http://users.belgacom.net/wagnerlibrary/prose/wagartfut.htm
Walker, J. (1990). Through the Looking Glass. In L. Brenda (Ed.), The Art of Human-Computer Interface Design: Addison-Wesley
Walsham, G. (1995). The Emergence of Interpretivism in IS Research. Information Systems Research, 6(4), 376-394.
Wang, C.-S., & Tzeng, Y.-R. (2007). Framework for Bloom's Knowledge Placement in Computer Games. Paper presented at the Digital Game and Intelligent Toy Enhanced Learning, 2007. DIGITEL '07. The First IEEE International Workshop.
Weber, R. (2004). The Rhetoric of Positivism Versus Interpretivism: A Personal View. MIS Quarterly, 28(1), iii-xii.
West Virginia University. (2008). The Looking Glass Project. Accessed: April, 2008 Retrieved from: http://clc.as.wvu.edu:8080/clc/projects/alice/document_view?month:int=5&year:int=2008
Wikipedia. (2008a). The Manhole. Accessed: Jun, 2008 Retrieved from: http://en.wikipedia.org/wiki/The_Manhole
Wikipedia. (2008b). PLATO (computer system). Retrieved from: http://en.wikipedia.org/wiki/PLATO
Wikipedia Doom. (2008). Doom Engine. Accessed: Jun, 2008 Retrieved from: http://doom.wikia.com/wiki/Vanilla_Doom#Fan_community_variants
Wikipedia Ultima (2008). Ultima Online: Third Dawn. Accessed: May, 2008 Retrieved from: http://ultima.wikia.com/wiki/Ultima_Online:_Third_Dawn
Wilson, N. (2007). The Problem with Virtual Worlds. Accessed: 1 April 2008 Retrieved from: http://metaversed.com/23-oct-2007/problem-virtual-words
Witmer, B. G., & Singer, M. J. (1998). Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 7(3), 225-240.
Woodcock, B. S. (2008, May, 2008). An Analysis of MMOG Subscription Growth. MMOGCHART.COM Retrieved from: http://www.mmogchart.com
Woolley, D. R. (1994). PLATO: The Emergence of On-Line Community. Computer-Mediated Communication Magazine, 1(3), 5.
Yee, N. (2006). The Demographics, Motivations, and Derived Experiences of Users of Massively Multi-User Online Graphical Environments. Presence: Teleoperators & Virtual Environments, 15(3), 309-329.
Youngblut, C. (1998). Education Uses of Virtual Reality Technology (pp. 131). Alexandria, VA: Institute for Defense Analyses.
Yount, W. R. (2006). Research Design & Statistical Analysis in Christian Ministry, Accessed: Dec, 2008 Retrieved from: http://www.napce.org/yount.html
Zakon, R. H. (2006). Hobbes' Internet Timeline v8.2. Accessed: April, 2008 Retrieved from: http://www.zakon.org/robert/internet/timeline/
Zyda, M. (2005). From Visual Simulation to Virtual Reality to Games. Computer, 38(9), 25-32.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
=Introduction=
The RiskWiki book and thesis "Real Learning in Virtual Worlds" by Dianne Bishop (2008) references an extensive list of works, which is reproduced here in its entirety. The reference list also provides an outstanding bibliography of the virtual worlds and virtual world learning space. Students of these two areas are encouraged to explore the work of the authors listed below.
=References and Bibliography=
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
Anderson Research Group (n.d.). The Revised Bloom’s Taxonomy. Accessed: Jun, 2008 Retrieved from: www.andersonresearchgroup.com/reports/TPP2.ppt
Annetta, L. A., Murray, M. R., Laird, S. G., Bohr, S. C., & Park, J. C. (2006). Serious Games: Incorporating Video Games in the Classroom. EDUCAUSE Quarterly 29(3), Accessed: Jun, 2008 Retrieved from: http://connect.educause.edu/Library/EDUCAUSE+Quarterly/SeriousGamesIncorporating/39986
Arreguin, C. (2007). Reports from the Field: Second Life Community Convention 2007 Education Track Summary. Best Practices from the Second Life Community Convention Education Track 2007, Accessed: Jun, 2008 Retrieved from: http://www.holymeatballs.org/pdfs/VirtualWorldsforLearningRoadmap_012008.pdf
Axon, S. (2008). Massively's Visual History of MMORPGs, Part I. Massively, Accessed: Jun, 2008 Retrieved from: http://www.massively.com/2008/03/31/massivelys-visual-history-of-mmorpgs-part-i/
Bailenson, J. N., Yee, N., Blascovich, J., Beall, A. C., Lundblad, N., & Jin, M. (2007). The use of immersive virtual reality in the learning sciences: Digital transformations of teachers, students, and social context. The Journal of the Learning Sciences.
Bainbridge, W. S. (2007). The Scientific Research Potential of Virtual Worlds. Science, 317(5837), 472 - 476.
Bartle, R. (1990). Interactive Multi-User Computer Games. Accessed: Jun, 2008 Retrieved from: http://www.mud.co.uk/richard/imucg0.htm
Bartle, R. (2003). Designing Virtual Worlds. Indianapolis, USA: New Riders.
Beedle, J. B., & Wright, V. H. (2007). Perspectives from Multiplayer Video Gamers. In D. Gibson (Ed.), Games and Simulations in Online Learning: Research & Development Frameworks. Hershey PA, USA: Idea Group Inc
Bell, L. (2006). Dobbit Do program at Second Life Library. Second Life Library 2.0, Retrieved from: http://secondlifelibrary.blogspot.com/2006/06/dobbit-do-program-at-second-life.html
Bellman, K., & Landauer, C. (2000). Playing In The Mud: Virtual Worlds Are Real Places. Applied Artificial Intelligence, 14(1), 93-123.
Benford, S., Greenhalgh, C., Reynard, G., Brown, C., & Koleva, B. (1998). Understanding and constructing shared spaces with mixed-reality boundaries. ACM Transactions on Computer-Human Interaction 5(3), 185-223 Accessed: Jun, 2008 Retrieved from: http://www.crg.cs.nott.ac.uk/research/publications/papers/TOCHI98.pdf
Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving Seamlessly between Reality and Virtuality. IEEE Computer Graphics and Applications, 21(3), 6-8.
Biocca, F., & Delaney, B. (1995). Immersive virtual reality technology. In Communication in the age of virtual reality (pp. 57-124): Lawrence Erlbaum Associates, Inc.Accessed: May, 2008 Retrieved from: http://www.mindlab.org/images/d/DOC713.pdf
Blizzard Entertainment Inc (2008). World of Warcraft Surpasses 11 million Subscribers Worldwide. Retrieved from: http://www.blizzard.com/us/press/081028.html
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook 1: Cognitive Domain. New York: David McKay Company, Inc.
Bowery, J. (2001). Spasim (1974) The First First-Person-Shooter 3D Multiplayer Networked Game. Accessed: April, 2008 Retrieved from: http://www.geocities.com/jim_bowery/spasim.html
Briggs, J. C. (1996). The Promise of Virtual Reality. The Futurist 30(5), Accessed: May, 2008 Retrieved from: http://project.cyberpunk.ru/idb/virtualreality_promise.html
Brookhaven National Laboratory (n.d.). The First Video Game. Accessed: Jun, 2008 Retrieved from: http://www.bnl.gov/bnlweb/history/higinbotham.asp; also see article http://gamersquarter.com/tennisfortwo/
Brooks, F. P., Jr. (1999). What's real about virtual reality? Computer Graphics and Applications, IEEE, 19(6), 16-27.
Brown, J. D. (1997). Skewness and Kurtosis. Shiken: JALT Testing & Evaluation SIG Newsletter 1(1), 1-20 Accessed: Jan, 2009 Retrieved from: http://jalt.org/test/bro_1.htm
Budge, L. D., Strini, R. A., Dehncke, R. W., & Hunt, J. A. (1998). Synthetic Theater of War (STOW) 97 Overview (98S-SIW-086). Paper presented at the Spring Simulation Interoperability Workshop, Orlando, FL.Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/index.php?tg=articles&idx=More&topics=46&article=199
Bulkley, K. (2007). Today Second Life, tomorrow the world. Interview: Philip Rosedale. The Guardian, Accessed: Jun, 2008 Retrieved from: http://www.guardian.co.uk/technology/2007/may/17/media.newmedia2
Burdea, G. C., & Coiffet, P. (2003). Virtual Reality Technology (2nd ed.): Wiley-IEEE Press.
Burns, R. B. (2000). Introduction to Research Methods (4th ed.). Frenchs Forest, NSW, Australia: Longman.
Bye, C. (2008). Legends of the Industry: An Interview with Randy Farmer and Chip Morningstar. March 25th, 2008, Accessed: Jun, 2008 Retrieved from: http://www.tentonhammer.com/node/29292
Carless, S. (2006). Australian Defence Force Licenses Virtual Battlespace. Serious Games Source April 18, Accessed: Jun, 2008 Retrieved from: http://www.seriousgamessource.com/item.php?story=8955
Carlson, W. (2003). Section 17: Virtual Reality and Artificial Environments. In A Critical History of Computer Graphics and Animation: The Ohio State University.Accessed: May, 2008 Retrieved from: http://design.osu.edu/carlson/history/lessons.html
Carroll, L. (1865). Alice's Adventures in Wonderland. London: Macmillan.
Carroll, L. (1871). Through the Looking-Glass. London: Macmillan.
Castronova, E. (2001). Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier. CESifo Working Paper Series No. 618, Accessed: May, 2008 Retrieved from: http://ssrn.com/paper=294828
Cavazza, F. (2007). Virtual Universes Landscape. Accessed: May, 2008 Retrieved from: http://www.fredcavazza.net/2007/10/04/virtual-universes-landscape/
Chesher, C. (1994). Colonizing Virtual Reality. Construction of the Discourse of Virtual Reality, 1984-1992. Cultronix (1), Retrieved from: http://cultronix.eserver.org/chesher/
Churches, A. (2008). Bloom's Taxonomy Blooms Digitally. Educators' eZine, Accessed: Jun, 2008 Retrieved from: http://www.techlearning.com/showArticle.php?articleID=196605124; or wiki http://edorigami.wikispaces.com/
Clark, R. E. (1983). Reconsidering Research on Learning from Media. Review of Educational Research 53(4),
Clark, R. E. (1994). Media Will Never Influence Learning. Educational Technology Research and Development, 42(2), 21-29.
Clark, S., & Maher, M. L. (2006). Collaborative Learning in A 3D Virtual Place: Investigating the Role of Place in a Virtual Learning Environment. Advanced Technology for Learning 3(4), Accessed: Jun, 2008 Retrieved from: http://web.arch.usyd.edu.au/~mary/Pubs/2006pdf/ATL_MLM_SC.pdf
Clarke, R. (2000). Robert Gagné's Nine Steps of Instruction. ISD - Development, Accessed: Jun, 2008 Retrieved from: http://www.nwlink.com/~donclark/hrd/learning/development.htm
Coleridge, S. T. (1817). Biographia Literaria (2nd ed.): Sara Coleridge.
Colley, S. (n.d.). Stories from the Maze War 30 Year Retrospective. Accessed: Jun, 2008 Retrieved from: http://www.digibarn.com/history/04-VCF7-MazeWar/stories/colley.html
Combs, N. (2004). A virtual world by any other name? , Accessed: 1 April 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2004/06/a_virtual_world.html
CompuServe. (2007). MUD1. Accessed: Oct, 2007 Retrieved from: http://www.british-legends.com/
Computer History Museum. (n.d.). Spacewar! Accessed: Mar, 2008 Retrieved from: http://www.computerhistory.org/pdp-1/play_spacewar.html; Also see: http://www.wheels.org/spacewar/index.html
Corbit, M. (2002). Building Virtual Worlds for Informal Science Learning (SciCentr and SciFair) in the Active Worlds Educational Universe (AWEDU). Presence: Teleoperators & Virtual Environments, 11(1), 55-67.
Corry, M. (1996). Gagne's Theory of Instruction. Dr. Donald Cunningham Spring, 540 Accessed: Jun, 2008 Retrieved from: http://home.gwu.edu/~mcorry/corry1.htm
Cosby, L. N. (1999). SIMNET: An Insider's Perspective. SISO News 2(1g), Accessed: Jun, 2008 Retrieved from: http://www.sisostds.org/webletter/siso/Iss_39/art_202.htm
Dabbagh, N. (2006). The Instructional Design Knowledge Base. Instructional Technology Program, Accessed: Jun, 2008 Retrieved from: http://classweb.gmu.edu/ndabbagh/Resources/IDKB/models_theories.htm
Dalgarno, B. J. (2004). Characteristics of 3D Environments and Potential Contributions to Spatial Learning. University of Wollongong
Damer, B. (2004). First experimental post for March guest column. Accessed: Jun, 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2007/03/march_topics.html
Damer, B. (2007). Meeting in the ether. Interactions, 14(5), 16-18.
Dave, R. H. (1967). Psychomotor Domain. Paper presented at the International Conference of Educational Testing.
Dave, R. H. (1970). Psychomotor levels. In R. J. Armstrong (Ed.), Developing and Writing Behavioural Objectives. Tucson AZ: Educational Innovators Press
Dede, C. (1995). The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds. Educational Technology, Research and Development, 35(5), 46-52.
Dede, C. (2004). Enabling Distributed Learning Communities Via Emerging Technologies -- Part Two. T H E Journal, 32(3), 16-26.
Denzin, N. (1978). Sociological Methods: A Sourcebook.
Department of the Army (2008). America's Army: The Making Of. Accessed: Jun, 2008 Retrieved from: http://www.americasarmy.com/intel/makingof.php
Deuchar, S., & Nodder, C. (2003). The Impact of Avatars and 3D Virtual World Creation on Learning. Paper presented at the Proceedings of the 16th Annual NACCQ, Palmerston North New Zealand. Retrieved from: www.naccq.ac.nz
Dickey, M. D. (1999). 3D Virtual Worlds and Learning: An analysis of the impact of design affordances and limitations in Active Worlds, Blaxxun Interactive, and Onlive! Traveler; and a study of the implementation of Active Worlds for formal and informal education. Dissertation; The Ohio State University, from http://mchel.com/Research.htm
Dickey, M. D. (2003). Teaching in 3D: Pedagogical Affordances and Constraints of 3D Virtual Worlds for Synchronous Distance Learning. Distance Education, 24(1), 105-122.
Dickey, M. D. (2005). Three-dimensional virtual worlds and distance learning: Two case studies of Active Worlds as a medium for distance education. British Journal of Educational Technology, 36(2), 439.
DONCIO, OPNAV N79, CNET, Naval Postgraduate School, Marine Corps Training and Education Command, & Marine Corps Distance Learning Center (2008). Learning in a Virtual World, Accessed: Jun, 2008 Retrieved from: http://wiki.nasa.gov/cm/wiki/?id=2731
Edutech Wiki. (2009). The Media Debate. Accessed: Jan, 2009 Retrieved from: http://edutechwiki.unige.ch/en/The_media_debate
Electronic Arts (2007). Ultima Online: Kingdom Reborn FAQ. Accessed: May, 2008 Retrieved from: http://www.uo.com/uokr/UOKR/uokr_faq.shtml
Farmer, F. R. (1992). Social Dimensions of Habitat's Citizenry. In C. E. Loeffler & T. Anderson (Eds.), The Virtual Reality Casebook. New York: Van Nostrand Reinhold
Fielding, N. G., & Fielding, J. L. (1986). Linking Data: Qualitative and Quantitative Methods in Social Research.
Fife-Schaw, C. (2007). How do I test the normality of a variable’s distribution? , Accessed: Jan, 2009 Retrieved from: http://www.psy.surrey.ac.uk/cfs/p8.htm
Foley, P., & Gifford, T. (2002). An Introduction to SEDRIS. Paper presented at the SEDRIS Technology Conference. Retrieved from: http://www.sedris.org/stc/2002/tu/intro/sld001.htm
Frary, R. B. (2008). Testing Memo 8: Reliability of Test Scores. Virginia Polytechnic Institute and State University (Jan, 2009), Retrieved from: http://www.testscoring.vt.edu/memo08.html
Friedl, M. (2002). Chapter One: Learning and Inspiration. In C. R. M. Inc (Ed.), Online Game Interactivity Theory. Hingham, Massachusetts
Gabrisch, C., & Burgess, G. (2005). The COA-Sim JSAF Environment in Support of Joint Military Training and Exercises. Paper presented at the SimTecT. Accessed: Jun, 2008 Retrieved from: http://www.siaa.asn.au/library_simtect_2005.html
Gagne, R. M. (1985). The Conditions of Learning and the Theory of Instruction (4 ed.). New York: Holt, Rinehart, and Winston.
Garson, G. D. (2000). The role of information technology in quality education. In Social dimensions of information technology: issues for the new millennium (pp. 177-197): IGI Publishing
Gartner (2007). Media Relations. Accessed: Oct, 2007 Retrieved from: http://www.gartner.com/it/page.jsp?id=503861
Gehorsam, R. (2003). The coming revolution in massively multiuser persistent worlds. Computer, 36(4), 93-95.
Gibson, W. (1984). Neuromancer. Canada: Ace Books.
Gikas, J., & Van Eck, R. (2004). Integrating video games in the classroom: Where to begin? Paper presented at the National Learning Infrastructure Initiative 2004 Annual Meeting, San Diego, CA.Accessed: May, 2008 Retrieved from: http://www.educause.edu/ir/library/pdf/NLI0431a.pdf
Goldberg, M. (2002). The History of Computer Gaming Part 5 - PLATO Ain't Just Greek. Classic Gaming, Accessed: April, 2008 Retrieved from: http://classicgaming.gamespy.com/View.php?view=Articles.Detail&id=324
Gonzalez, D. (2007). Second Life for Digital Entertainment Technology Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Graphpad (2009). How useful are normality tests? , Accessed: Jan, 2009 Retrieved from: http://www.graphpad.com/library/BiostatsSpecial/article_197.htm
Grau, O. (1999). Into the Belly of the Image: Historical Aspects of Virtual Reality. Leonardo, 32(5), 365-371.
Grøstad, O. F. (2007). Define: virtual world. Accessed: Apr, 2008 Retrieved from: http://worldtheory.blogspot.com/2007/06/define-virtual-world.html
Hardy, D. R., Allen, E. C., Adams, K. P., Peters, C. B., Peterson, L. J., Cannon, M. A., et al. (2001). Advanced Distributed Simulation: Decade in Review and Future Challenges. Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=A434191&Location=U2&doc=GetTRDoc.pdf
Harrow, A. J. (1972). A taxonomy of the psychomotor domain. New York: David McKay Company, Inc.
Harvard's Berkman Center for Internet and Society. (2007). Cyber One: Law in the Court of Public Opinion. Accessed: 20/10/2007 Retrieved from: http://sleducation.wikispaces.com/educationaluses#distance
Heeter, C. (1992). Being there: The subjective experience of presence. Presence: Teleoperators & Virtual Environments, 1(2), 262– 271.
Heeter, C. (2003). Reflections on Real Presence. Presence: Teleoperators & Virtual Environments 12(4), 335-345 Accessed: Jun, 2008 Retrieved from: http://commtechlab.msu.edu/publications/files/presence2003.pdf
Heilig, M. (1955). The Cinema of the Future, reprinted. In R. Packer & K. Jordan (Eds.), Multimedia: From Wagner to Virtual Reality (expanded edition), 2002 (pp. 239-251). New York/London: W. W. Norton and Company
Holmberg, J. (2003). Ideals of Immersion in Early Cinema. Cinémas 14(1), 129-147 Retrieved from: http://www.erudit.org/revue/cine/2003/v14/n1/008961ar.pdf
Howard, R. E. (1932). The Phoenix on the Sword. In Weird Tales (Vol. December). Chicago: Popular Fiction Publishing Co
Hu, S.-Y., & Liao, G.-M. (2004). Network and System Support for Games: Scalable Peer-to-Peer Networked Virtual Environment. Paper presented at the 3rd ACM SIGCOMM workshop on Network and system support for games, Portland, Oregon, USA Accessed: Jun, 2008 Retrieved from: http://www.phys.sinica.edu.tw/~statphys/publications/2004_full_text/S_Y_Hu_Proc_ACM_SIGCOMM_2004_on_NetGame_p129-133(2004).pdf
Jacoby, J., & Matell, M. S. (1971). Three-Point Likert Scales Are Good Enough. Journal of Marketing Research, 8(4), 495-500.
Jamison, J. (2007). Two Years of Introducing Educators to Second Life in 60 Minutes, or: Tips for Dinosaur Wrangling. Paper presented at the Second Life Best Practices in Education: Teaching, Learning, and Research 2007 International Conference
Jennings, S. (2007). Virtually a World. Accessed: 1 April 2008 Retrieved from: http://brokentoys.org/2007/06/15/virtually-a-world/
Jones, G., & Hicks, J. (2004). 3D Online Learning Environments for Emergency Preparedness and Homeland Security Training. Paper presented at the World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education, Washington, D.C. Retrieved from: http://courseweb.unt.edu/gjones/pdf/Jones_elearn04.pdf
Joseph, B. (2007). Global Kids, Inc.’s Best Practices in Using Virtual Worlds For Education. Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Kearsley, G. (2008). Conditions of Learning (R. Gagne). Explorations in Learning & Instruction: The Theory Into Practice Database Accessed: Jun, 2008 Retrieved from: http://tip.psychology.org/gagne.html
Keegan, M. (1997). A Classification of MUDs. The Journal of Virtual Environments 2(2), Accessed: Mar, 2008 Retrieved from: http://www.brandeis.edu/pubs/jove/HTML/v2/keegan.html
Kelly, K. (1995). Singular Visionary. Wired June(3.06), Retrieved from: http://www.wired.com/wired/archive/3.06/vinge.html
King, B. (2003). Educators Turn to Games for Help. Wired, Accessed: Jun, 2008 Retrieved from: http://www.wired.com/gaming/gamingreviews/news/2003/08/59855
Kingdom of Drakkar. (1992-Current). Further Reading. Accessed: May, 2008 Retrieved from: Official: http://www.kingdomofdrakkar.com/; Historical: http://www.kingdomofdrakkar.com/forums/viewtopic.php?f=38&t=6197
Kish, S. (2007). Second Life: Virtual Worlds and the Enterprise. Accessed: May, 2008 Retrieved from: http://www.susankish.com/susan_kish/vw_secondlife.html
Klein, H. K., & Myers, M. D. (1999). A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly, 23(1), 67-93.
Klich, R. (2007). Multimedia Theatre in the Virtual Age. University of New South Wales, Sydney, from http://www.library.unsw.edu.au/~thesis/adt-NUN/uploads/approved/adt-NUN20080304.114128/public/02whole.pdf
Kofi, B. A., Svihla, V., Gawel, D., & Bransford, D. J. (2007). Learning about Adaptive Expertise in a Multi-User Virtual Environment Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Koster, R. (2002). Online World Timeline. Accessed: Jun, 2008 Retrieved from: http://www.raphkoster.com/gaming/mudtimeline.shtml
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development 42(2), 7-9
Krathwohl, D. R. (2002). A Revision of Bloom's Taxonomy: An Overview - Benjamin S. Bloom, University of Chicago. Theory Into Practice 41(4), 212-218 Accessed: Jun, 2008 Retrieved from: http://findarticles.com/p/articles/mi_m0NQM/is_4_41/ai_94872707
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of Educational Objectives. The Classification of Educational Goals, Handbook II: Affective Domain. New York: David McKay Company, Inc.
Kribble, M. (2007). Getting a Second Life: Virtual Harvard. Law Library E-Newsletter January, Accessed: 20/10/2007 Retrieved from: http://www.nsulaw.nova.edu/library_tech/library/publications/bookdocket/2007/Jan2007.pdf
Squire, K., Barnett, M., Grant, J. M., & Higginbotham, T. (2004). Electromagnetism supercharged! Learning Physics with Digital Simulation Games, Proceedings of the 6th international conference on Learning sciences (pp. 513-520). Santa Monica, California: International Society of the Learning Sciences.
KZERO Research (2007). There.com vs Second Life: demographics. Accessed: Jun, 2008 Retrieved from: http://www.kzero.co.uk/blog/?p=961
Lang, T., Maclntyre, B., & Zugaza, I. J. (2008). Massively Multiplayer Online Worlds as a Platform for Augmented Reality Experiences. Paper presented at the Virtual Reality Conference, 2008. VR '08. IEEE.
Laurel, B. (1991). Computers as theatre. New York: Addison-Wesley.
Lee, A. (1991). Integrating Positivist And Interpretive Approaches To Organizational Research. Organization Science, 2(4), 342.
Lee, S.-Y., Kim, I.-J., Ahn, S. C., Lim, M.-T., & Kim, H.-G. (2005). Intelligent 3D Video Avatar for Immersive Telecommunication. In S. Zhang & R. Jarvis (Eds.), AI 2005 (pp. 726-735). Berlin Heidelberg: Springer-Verlag.Accessed: Jun, 2008 Retrieved from: http://www.imrc.kist.re.kr/~kij/LNCS_2005.pdf
Lenke, J. M., Wellens, B., & Oswald, J. (1977, Jan, 2009). Differences Between Kuder-Richardson Formula 20 and Formula 21 Reliability Coefficients for Short Tests with Different Item Variabilities. Paper presented at the Annual Meeting of the American Educational Research Association, New York, USA. Retrieved from: http://eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED141411&ERICExtSearch_SearchType_0=no&accno=ED141411
Lenoir, T. (2003). Programming Theatres of War: Gamemakers as Soldiers. In R. Latham (Ed.), In Bombs and Bandwidth: The Emerging Relationship Between IT and Security (pp. 175-198). New York: The New Press.Accessed: Jun, 2008 Retrieved from: http://www.stanford.edu/dept/HPS/TimLenoir/Publications/Lenoir_TheatresOfWar.pdf
Leonard, B. (Director), S. King, B. Leonard & G. Everett (Writer), G. Everett (Producer), (1992). The Lawnmower Man [Motion Picture]: New Line Cinema.
Levine, A. (2007). Avatars and Appearance: What’s your ‘dress code’? NMC Teachers Buzz. NMC Campus Observer, Retrieved from: http://sl.nmc.org/2007/07/19/dress-code/
Lewis, D. (2001). Objectivism vs. Constructivism: The Origins of this Debate and the Implications for Instructional Designers. EME 6613 Development of Technology-Based Instruction, Accessed: Jun, 2008 Retrieved from: http://www.coedu.usf.edu/agents/dlewis/publications/Objectivism_vs_Constructivism.htm
Linden, C., & Linden, P. (2008). Discussion on Education in Second Life, What’s Going On and How To Get Involved. “Inside the Lab” Podcast, a Discussion on Education in Second Life", Accessed: Jun, 2008 Retrieved from: http://blog.secondlife.com/2008/06/02/inside-the-lab-podcast-a-discussion-on-education-in-second-life/
Linden Lab (2008a). Economic Statistics: Graphs. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/whatis/economy-graphs.php
Linden Lab (2008b). Second Life: Economic Statistics. Accessed: Dec, 2008 Retrieved from: http://secondlife.com/whatis/economy_stats.php
Linden Lab (2008c). Second Life: System Requirements. Accessed: Jun, 2008 Retrieved from: http://secondlife.com/support/sysreqs.php
Lisberger, S. (Director), S. Lisberger & B. MacBird (Writer), D. Kushner (Producer), (1982). Tron [Motion Picture]: Buena Vista Pictures.
Lord of the Rings Online (2007). Online Virtual World. Accessed: Jun, 2008 Retrieved from: http://www.lotro.com/
Lowood, H. E. (2008). Virtual Reality. Encyclopaedia Britannica Online, Accessed: Jun, 2008 Retrieved from: http://search.eb.com/eb/article-9001382
Macmillan, I. (Director, Writer, Producer), (2008). The Worlds of Fantasy: The Epic Imagination. England: Blast! Films Production for BBC.
Mania, K., & Chalmers, A. (2001). The Effects of Levels of Immersion on Memory and Presence in Virtual Environments: A Reality Centered Approach. CyberPsychology & Behavior, 4(2), 247-264.
Markowitz, M. (2000). Spacewar: The first computer video game. Really! , Retrieved from: http://www3.sympatico.ca/maury/games/space/spacewar.html
Martinez, L. M., Martinez, P., & Warkentin, G. (2007). A First Experience on Implementing a Lecture on Second Life Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Mazuryk, T., & Gervautz, M. (1996). Virtual Reality History, Applications, Technology and Future. Retrieved from: http://www.cg.tuwien.ac.at/research/publications/1996/mazuryk-1996-VRH
McLellan, H. (2004). Virtual realities. In D. H. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (2nd ed., pp. 745-784). Mahwah, NJ: Lawrence Erlbaum Associates.Accessed: Jun, 2008 Retrieved from: http://www.aect.org/edtech/17.pdf
Mergel, B. (1998). Instructional Design & Learning Theory. Accessed: Jun, 2008 Retrieved from: http://www.usask.ca/education/coursework/802papers/mergel/mergel.pdf
Meridian 59. (1996-2000 & 2002-Current). Further Reading Retrieved from: Official Site: http://meridian59.neardeathstudios.com/; General: http://en.wikipedia.org/wiki/Meridian_59; http://www.massively.com/photos/massivelys-visual-history-of-mmorpgs-part-i/727035/
MetaMersion. (n.d.). You're in the Game. Retrieved from: http://www.metamersion.com/index.html
Milgram, P., & Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. E77-D(12), Accessed: May, 2008 Retrieved from: http://vered.rose.utoronto.ca/people/paul_dir/IEICE94/ieice.html
Miller, D. C., & Thorpe, J. A. (1995). SIMNET: The Advent Of Simulator Networking. Proceedings of the IEEE, 83(8), 1114-1123.
Monash University. (2008). Preparing Educational Objectives. Accessed: Jun, 2008 Retrieved from: http://www.calt.monash.edu.au/staff-teaching/support/objectives.html
Moriarty, D. (2008). StatCat (version 3.6). Accessed: Jan, 2009 Retrieved from: http://www.csupomona.edu/~djmoriarty/b211/index.html#statcat
Morningstar, C., & Farmer, R. (1990). The Lessons of Lucasfilm's Habitat. Paper presented at the The First International Conference on Cyberspace, University of Texas at Austin.Accessed: Oct, 2007 Retrieved from: http://www.fudco.com/chip/lessons.html
Mulligan, J. (2000). History of Online Games Part III. Imaginary Realities (April), Retrieved from: http://www.tharsis-gate.org/articles/imaginary/HISTOR~1.HTM
Mulligan, J. (2002). Talkin’ ‘bout My… Generation. Biting The Hand 17(22-JAN), Accessed: Jun, 2008 Retrieved from: http://www.skotos.net/articles/BTH_17.shtml
Nash, S. S. (2007). Behaviorism vs. Constructivism as Applied to Online Learning. XplanaZine, Accessed: Jun, 2008 Retrieved from: http://www.xplanazine.com/2007/09/behaviorism-vs-constructivism-as-applied-to-online-learning
Neuman, W. L. (2006). Social research methods (6th ed.). Boston: Pearson Education, Inc
NeverWinter Nights (AOL). (1991-1997). Further Reading. Accessed: May, 2008 Retrieved from: http://en.wikipedia.org/wiki/Neverwinter_Nights_(AOL_game); http://www.bladekeep.com/nwn/index2.htm
NIST (2006). e-Handbook of Statistical Methods: Levene Test for Equality of Variances. Accessed: Jan, 2009 Retrieved from: http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm
O'Donnell, D. (2003). The Annotated NetHack File. Retrieved from: http://www.spod-central.org/~psmith/nh/anhftime.html
Olsen, W. (2004). Triangulation in Social Research: Qualitative and Quantitative Methods Can Really Be Mixed. In Developments in Sociology: Causeway Press Retrieved from: http://www.ccsr.ac.uk/staff/triangulation.pdf
Onwuegbuzie, A. J. (2002). Why can't we all get along? Towards a framework for unifying research paradigms. Education, 122(3), 518-530.
Orlikowski, W. J., & Baroudi, J. J. (1991). Studying Information Technology in Organizations: Research Approaches and Assumptions. Information Systems Research, 2(1), 1-28.
Oxford Dictionary (Ed.) (1989) (2nd ed.). Oxford University Press.
Oxford Dictionary (Ed.) (1997) (Vol. 3). Oxford University Press.
Packer, R., & Jordan, K. (2002). Multimedia: From Wagner to Virtual Reality (Expanded ed.). New York: W. W. Norton and Company.
Patel, K., Bailenson, J., Jung, S., Diankov, R., & Bajcsy, R. (2006). The effects of fully immersive virtual reality on the learning of physical tasks. Paper presented at the International Workshop on Presence, Cleveland, Ohio, USA.Accessed: Jun, 2008 Retrieved from: http://www.cs.washington.edu/homes/kayur/papers/ispr06.pdf
Pearson, J. L. (2002). Shamanism and the Ancient Mind: A Cognitive Approach to Archaeology: AltaMira Press.
Pellett, D. (n.d.). Open letter to "Classic Gaming .com": Re: your web page, titled "The History of Computer Gaming". The Game of Dungeons (dnd): Gary Whisenhunt, Ray Wood, Dirk Pellett, and Flint Pellett's DND, Retrieved from: http://www.armory.com/~dlp/dnd1.html
Petrich, L. (n.d.). Real-Time-3D Game-Engine Taxonomy. Accessed: Jun, 2008 Retrieved from: http://homepage.mac.com/lpetrich/www/games/GET.html
Pimentel, K. K., & Teixeira, K. K. (1994). Virtual reality. New York, USA: Windcrest Books.
Purbrick, J., & Greenhalgh, C. (2002). An extensible event-based infrastructure for networked virtual worlds. Paper presented at the Virtual Reality, 2002. Proceedings. IEEE.
Ray, J. (2008). Backwards Compatible - How We Got Connected. ABC Good Games Stories, Accessed: April 2008 Retrieved from: http://www.abc.net.au/tv/goodgame/stories/s2171457.htm
Reynolds, R. (2008). VW Taxonomy Q1 ‘08. Accessed: 1 April 2008 Retrieved from: http://terranova.blogs.com/terra_nova/2008/03/vw-taxonomy-q1.html.
Rheingold, H. (1992). Virtual reality. London: Mandarin Paperback.
Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier. New York: Harper Collins.
Richardson, H. (2005). Postmodernism: A Hobbit’s View of Information Systems Research Methodology. Paper presented at the 4th International Critical Management Studies Conference, University of Cambridge, Cambridge, UK.Accessed: 10/9/2007 Retrieved from: http://www.mngt.waikato.ac.nz/ejrot/cmsconference/2005/
Robson, S. (2008). US Army to Invest $50M in Combat Training Games. Stars and Stripes (Nov 2008), Retrieved from: http://www.stripes.com/article.asp?section=104&article=59009
Rolland, J., & Hua, H. (2005). Head-Mounted Display Systems. Encyclopedia of Optical Engineering, 1 - 14.
Rosenblum, L. J. (1995). Alice: rapid prototyping for virtual reality. Computer Graphics and Applications, IEEE, 15(3), 8-11.
Russell, T. L. (2001). No Significant Difference Phenomenon (5 ed.). North Carolina State University: IDECC.
Schmidt, M., Kinzer, C., & Greenbaum, I. (2007). Exploring Virtual Education: First Hand Account of 3 Second Life Classes Paper presented at the Proceedings Second Life Community Convention Workshop at the SL Community Convention Boston.
Schroeder, R. (1997). Networked Worlds: Social Aspects of Multi-User Virtual Reality Technology. In S. R. Online (Ed.) (Vol. 2).
Schroeder, R. (2006). Being There Together and the Future of Connected Presence. Presence: Teleoperators & Virtual Environments, 15(4), 438-454.
Schuemie, M. J., Straaten, P. V. D., Krijn, M., & Mast, C. A. P. G. V. D. (2001). Research on Presence in Virtual Reality: A Survey. CyberPsychology & Behavior 4(2), Accessed: May, 2008 Retrieved from: http://graphics.tudelft.nl/~vrphobia/surveypub.pdf
Shadow of Yserbius. (1992-1996). Further Reading. Accessed: May, 2008 Retrieved from: http://www.syntax2000.co.uk/issues/; http://www.oldgames.nu/PC/Shadow_of_Yserbius/2085/; http://en.wikipedia.org/wiki/The_Shadow_of_Yserbius;
Sheridan, T. B. (1992). Musings on telepresence and virtual presence. Presence: Teleoperators & Virtual Environments, 1(1), 120-126.
Sheth, R. (2003). Avatar Technology: Giving a Face to the e-Learning Interface. [The eLearning Guild]. The eLearning Developers' Journal, August.
Siegle, D. (2008). The Principles and Methods of Educational Research. Accessed: Jan, 2009 Retrieved from: http://www.gifted.uconn.edu/Siegle/research/Instrument%20Reliability%20and%20Validity/Reliability.htm
Simpson, E. J. (1972). The classification of educational objectives in the psychomotor domain. The Psychomotor Domain (Vol. 3). Washington, DC: Gryphon House.
SimTeach. (2008). Universities, Colleges & Schools in Second Life. Accessed: Jun, 2008 Retrieved from: http://www.simteach.com/wiki/index.php?title=Institutions_and_Organizations_in_SL
Slater III, W. F. (2002). Internet History and Growth. Chicago Chapter of the Internet Society, Accessed: April, 2008 Retrieved from: http://www.isoc.org/internet/history/
Slater, M. (1999). Measuring Presence: A Response to the Witmer and Singer Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 8(5), 560-565.
Slater, M., & Usoh, M. (1993). Presence in immersive virtual environments. Paper presented at the Virtual Reality Annual International Symposium, 1993., 1993 IEEE.
Slater, M., & Usoh, M. (1994). Representation Systems, Perceptual Position and Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 2(3), 221–233.
Slater, M., & Wilbur, S. (1997). A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 6(6).
Slator, B. M., Borchert, O., Brandt, L., Chaput, H., Erickson, K., Groesbeck, G., et al. (2007). From Dungeons to Classrooms: The Evolution of MUDs as Learning Environments. In The Evolution of Teaching and Learning Paradigms (pp. 119-160): Springer-Verlag
Small, D., & Small, S. (1984). PLATO RISING. Online learning for Atarians 3(3), 36-87 Retrieved from: http://www.atarimagazines.com/v3n3/platorising.html
Smith, A. (1999). COLLABORATION: A Global Survey of Institutions and Programs in Virtual World Cyberspace. Retrieved from: http://www.ccon.org/vlearn/collab.htm
STATGRAPHICS Centurion (2009). Analysis Software. Retrieved from: http://www.statgraphics.com/
Stephenson, N. (1992). Snow Crash. New York: Bantam Spectra Book.
Steuer, J. (1992). Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communications, 42(4), 73-93.
Sun Microsystems (2008). Current Reality and Future Vision Open Virtual Worlds (White Paper). January, Accessed: 13 March 2008 Retrieved from: http://www.sun.com/service/applicationserversubscriptions/OpenVirtualWorld.pdf
Sutherland, I. (1965). The Ultimate Display. Paper presented at the International Federation of Information Processing. Retrieved from: http://www.cs.utah.edu/classes/cs6360/Readings/UltimateDisplay.pdf
Sutherland, I. (1968). A Head-Mounted Three-Dimensional Display. Paper presented at the Proceedings of the AFIPS Fall Joint Computer Conference, Washington, D.C.
Terdiman, D. (2007). Tech titans seek virtual-world interoperability. CNET News.com, Accessed: Jun, 2008 Retrieved from: http://news.cnet.com/Tech-titans-seek-virtual-world-interoperability/2100-1043_3-6213148.html
The New Media Consortium, & EDUCAUSE (2007). The Horizon Report. Accessed: Nov, 2007 Retrieved from: http://www.nmc.org/pdf/2007_Horizon_Report.pdf
Tiernan, T. R. (1996). Synthetic Theater of War (STOW) Engineering Demonstration-1A (ED-1A) Analysis Report (ADA315093). Accessed: Jun, 2008 Retrieved from: http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA315093
Tolkien, J. R. R. (1937). The Hobbit. United Kingdom: Allen and Unwin.
Tolkien, J. R. R. (1954, 1955). The Lord of the Rings. United Kingdom: Allen and Unwin.
Ultima Online. (1997-Current). Further Reading. Accessed: Jun, 2008 Retrieved from: http://www.uoherald.com/news/;
Unger, J. M. (1979). Kanamajiri Editing and the Plato Computer-Based Education System. The Journal of the Association of Teachers of Japanese, 14(2), 141-156.
University of Washington (2008). Instructional Design Approaches. Accessed: Jun, 2008 Retrieved from: http://depts.washington.edu/eproject/Instructional%20Design%20Approaches.htm
US Joint Forces Command. (2008). Joint Semi-Automated Forces (JSAF). Accessed: Jun, 2008 Retrieved from: http://www.jfcom.mil/about/fact_jsaf.html
Van Dam, A., Forsberg, A. S., Laidlaw, D. H., LaViola, J. J. J., & Simpson, R. M. (2000). Immersive VR for scientific visualization: a progress report. Computer Graphics and Applications, IEEE, 20(6), 26-52.
VCampus Corporation. (2008). cyber1.org. Accessed: April, 2008 Retrieved from: http://www.cyber1.org/
Vinge, V. (1981). True Names: Binary Star Number 5, Dell Reprinted in True Names and Other Dangers, Vernor Vinge, Baen Books, 1987.
Vivekananda Centre (2008). Hinduism for Schools. Retrieved from: http://www.vivekananda.btinternet.co.uk/secondaryschoolspage1.htm
Wachowski, A., & Wachowski, L. (Directors/Writers), & Silver, J. (Producer). (1999). The Matrix [Motion Picture]: Warner Bros, Village Roadshow Pictures.
Wagner, R. (1849). The Artwork of the Future (Das Kunstwerk der Zukunft), Accessed: April, 2008 Retrieved from: http://users.belgacom.net/wagnerlibrary/prose/wagartfut.htm
Walker, J. (1990). Through the Looking Glass. In L. Brenda (Ed.), The Art of Human-Computer Interface Design: Addison-Wesley
Walsham, G. (1995). The Emergence of Interpretivism in IS Research. Information Systems Research, 6(4), 376-394.
Wang, C.-S., & Tzeng, Y.-R. (2007). Framework for Bloom's Knowledge Placement in Computer Games. Paper presented at the Digital Game and Intelligent Toy Enhanced Learning, 2007. DIGITEL '07. The First IEEE International Workshop.
Weber, R. (2004). The Rhetoric of Positivism Versus Interpretivism: A Personal View. MIS Quarterly, 28(1), 3-xiii.
West Virginia University. (2008). The Looking Glass Project. Accessed: April, 2008 Retrieved from: http://clc.as.wvu.edu:8080/clc/projects/alice/document_view?month:int=5&year:int=2008
Wikipedia. (2008a). The Manhole. Accessed: Jun, 2008 Retrieved from: http://en.wikipedia.org/wiki/The_Manhole
Wikipedia. (2008b). PLATO (computer system). Retrieved from: http://en.wikipedia.org/wiki/PLATO
Wikipedia Doom. (2008). Doom Engine. Accessed: Jun, 2008 Retrieved from: http://doom.wikia.com/wiki/Vanilla_Doom#Fan_community_variants
Wikipedia Ultima (2008). Ultima Online: Third Dawn. Accessed: May, 2008 Retrieved from: http://ultima.wikia.com/wiki/Ultima_Online:_Third_Dawn
Wilson, N. (2007). The Problem with Virtual Worlds. Accessed: 1 April 2008 Retrieved from: http://metaversed.com/23-oct-2007/problem-virtual-words
Witmer, B. G., & Singer, M. J. (1998). Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 7(3), 225-240.
Woodcock, B. S. (2008, May, 2008). An Analysis of MMOG Subscription Growth. MMOGCHART.COM Retrieved from: http://www.mmogchart.com
Woolley, D. R. (1994). PLATO: The Emergence of On-Line Community. Computer-Mediated Communication Magazine, 1(3), 5.
Yee, N. (2006). The Demographics, Motivations, and Derived Experiences of Users of Massively Multi-User Online Graphical Environments. Presence: Teleoperators & Virtual Environments, 15(3), 309-329.
Youngblut, C. (1998). Education Uses of Virtual Reality Technology (pp. 131). Alexandria, VA: Institute for Defence Analyses.
Yount, W. R. (2006). Research Design & Statistical Analysis in Christian Ministry, Accessed: Dec, 2008 Retrieved from: http://www.napce.org/yount.html
Zakon, R. H. (2006). Hobbes' Internet Timeline v8.2. Accessed: April, 2008 Retrieved from: http://www.zakon.org/robert/internet/timeline/
Zyda, M. (2005). From Visual Simulation to Virtual Reality to Games. Computer, 38(9), 25-32.
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
ab2b67bad99f0d003b33f38e83cdb66c45970a6a
Real Learning in Virtual Worlds - Selected Appendices
0
285
318
317
2018-10-29T11:40:38Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<div class="nonumtoc">
=Appendices=
==Appendix A: Terminology==
{| width="15%"
|-
|'''Term'''
|'''Description'''
|}
;Virtual World:
:An artificial environment into which a person projects themself. In the context in which this term is mainly used in this paper (unless otherwise stated), it is an environment built from software programs.
;In World:
:In the artificial environment, where the person operates within the virtual world.
;Real World:
:In reality, where the person operates in their physical world.
;Avatar:
:The digital representation of the person in the virtual world.
;Teleport:
:A method of transport in the virtual world that moves the person from one location to another without having to walk their avatar there.
;Presence:
:A subjective measure: the feeling of being in the virtual world that disconnects the person from the physical world around them.
;Immersion:
:An objective measure: the interface between the virtual world and the user that places the person in the virtual world.
;MMORPG:
:Massively Multiplayer Online Role Playing Game. Often shortened to abbreviations such as RPG or MMO. This term is often used to describe the latest generation of online virtual world technology; many other terms, such as MUVE and CVE, are also used.
;MUD:
:Multi User Dungeon. Early text-based networked virtual worlds.
<p align="left">'''''Table 14. Terminology'''''</p>
==Appendix B: MMOG Analysis==
Bruce Woodcock (2008) is an independent writer and long-time player of MMOGs who has dedicated his research to tracking the subscription numbers of online MMOGs. His figures are obtained from publicly available material, e.g. company financial reports, company media releases and media publications, and in some cases an educated guess. Although not precise, these figures allow a comparison of MMOGs that would otherwise not be possible unless one undertook the same type of analysis he has performed over the years. If anything, these figures are under-reported, as they are based only on user subscriptions and therefore do not include the numbers of users that have free access to the listed environments. The figures are current as at April 2008; for more information see http://www.mmogchart.com/.
Breakdown of the MMOGs listed in the chart:
{|border="1" width="40%" align=center
|-
|align=center |'''Name'''
|align=center |'''Current Active Subscriptions'''
|-
|align="right"|World of Warcraft
|align="right"|10,000,000
|-
|align="right"|RuneScape
|align="right"|1,200,000
|-
|align="right"|Lineage
|align="right"|1,056,177
|-
|align="right" |Lineage II
|align="right" |1,006,556
|-
|align="right" |Final Fantasy XI
|align="right" |500,000
|-
|align="right" |Dofus
|align="right" |452,000
|-
|align="right" |EVE Online
|align="right" |236,510
|-
|align="right" |EverQuest II
|align="right" |200,000
|-
|align="right" |EverQuest
|align="right" |175,000
|-
|align="right" |The Lord of the Rings Online
|align="right" |150,000
|-
|align="right" |City of Heroes / Villains
|align="right" |136,250
|-
|align="right" |Tibia
|align="right" |104,338
|-
|align="right" |Star Wars Galaxies
|align="right" |100,000
|-
|align="right" |Toontown Online
|align="right" |100,000
|-
|align="right" |Second Life
|align="right" |91,531
|-
|align="right" |Tabula Rasa
|align="right" |75,000
|-
|align="right" |Ultima Online
|align="right" |75,000
|-
|align="right" |Pirates of the Burning Sea
|align="right" |65,000
|-
|align="right" |Dark Age of Camelot
|align="right" |45,000
|-
|align="right" |Dungeons & Dragons Online
|align="right" |45,000
|-
|align="right" |Vanguard: Saga of Heroes
|align="right" |40,000
|-
|align="right" |Yohoho! Puzzle Pirates
|align="right" |34,000
|-
|align="right" |EverQuest Online Adventures
|align="right" |30,000
|-
|align="right" |The Matrix Online
|align="right" |30,000
|-
|align="right" |Era of Eidolon
|align="right" |27,000
|-
|align="right" |PlanetSide
|align="right" |20,000
|-
|align="right" |Asheron's Call
|align="right" |15,000
|-
|align="right" |Sphere
|align="right" |15,000
|-
|align="right" |Anarchy Online
|align="right" |12,000
|-
|align="right" |The Realm Online
|align="right" |12,000
|-
|align="right" |World War II Online
|align="right" |12,000
|-
|align="right" |Pirates of the Caribbean Online
|align="right" |10,000
|-
|align="right" |Neocron 2
|align="right" |6,000
|-
|align="right" |Horizons
|align="right" |5,000
|-
|align="right" |Mankind
|align="right" |5,000
|-
|align="right" |A Tale in the Desert
|align="right" |1,054
|}
==Appendix I: Second Life Demographics==
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Top 20 Countries by Active User Hours'''
|-bgcolor="wheat"
|align=center |'''Country'''
|align=center |'''Total Hours'''
|align=center |'''% of Total Hrs'''
|-
|align=right |United States
|align=right |14,451,180.28
|align=right |39.38%
|-
|align=right |Germany
|align=right | 3,505,103.93
|align=right | 9.55%
|-
|align=right | United Kingdom
|align=right | 2,424,987.88
|align=right | 6.61%
|-
|align=right | Japan
|align=right | 2,014,299.45
|align=right | 5.49%
|-
|align=right | France
|align=right | 1,972,875.00
|align=right | 5.38%
|-
|align=right |Netherlands
|align=right | 1,406,652.90
|align=right | 3.83%
|-
|align=right |Italy
|align=right | 1,397,571.12
|align=right | 3.81%
|-
|align=right |Brazil
|align=right | 1,361,741.72
|align=right | 3.71%
|-
|align=right |Canada
|align=right | 1,336,706.03
|align=right | 3.64%
|-
|align=right |Spain
|align=right | 1,083,716.70
|align=right | 2.95%
|-
|align=right |Australia
|align=right | 747,158.40
|align=right | 2.04%
|-
|align=right |Belgium
|align=right | 349,070.48
|align=right | 0.95%
|-
|align=right |Portugal
|align=right | 332,468.60
|align=right | 0.91%
|-
|align=right |Switzerland
|align=right | 277,448.60
|align=right | 0.76%
|-
|align=right |Poland
|align=right | 234,785.58
|align=right | 0.64%
|-
|align=right |Argentina
|align=right | 196,719.35
|align=right | 0.54%
|-
|align=right |Denmark
|align=right | 193,975.72
|align=right | 0.53%
|-
|align=right |Sweden
|align=right | 191,424.80
|align=right | 0.52%
|-
|align=right |Mexico
|align=right | 177,130.73
|align=right | 0.48%
|-
|align=right |Turkey
|align=right | 176,759.05
|align=right | 0.48%
|-
|align=right |Others
|align=right | 2,866,931.23
|align=right | 7.81%
|-
|align=center |'''Total'''
|align=right | '''36,698,707.57'''
|
|}
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Age Band'''
|-bgcolor="wheat"
|align=center |'''Age'''
|align=center |'''% of Total Hrs'''
|-
|align=right |13-17 (Teen Grid)
|align=right |0.32%
|-
|align=right |18-24
|align=right | 15.07%
|-
|align=right |25-34
|align=right | 34.51%
|-
|align=right |35-44
|align=right | 28.51%
|-
|align=right |45 plus
|align=right | 21.14%
|-
|align=right |Unknown
|align=right | 0.45%
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Gender'''
|-
|align=right |Male
|align=right | 58.72%
|-
|align=right |Female
|align=right | 41.28%
|}
<p align="center" >'''''Source: (Linden Lab, 2008b)'''''</p >
==Appendix J: Pre-Quiz Score Results==
This section discusses the pre-quiz scores significance test results.
===J.1 Remember Scores===
Figure 68 provides the pre-quiz results for Bloom’s ‘remember’ cognitive process.
Figure 68. Results: Pre-Quiz Remember - Histogram & Bell Curve
The pre-quiz ‘remember’ scores were tested using the parametric individual t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = -0.417, sek = -1.105, K2 p = 0.26747 and 3D: ses = -0.595 and sek = -1.54, K2 p = 0.2675) and the variance between the groups was not significant (F = 0.668, 2 tailed p = 0.140, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = 1.665, df = 109, two-tailed p = 0.0987, α = 0.05) between the results of the 2D (x1 = 2.44, s1 = 1.032) and 3D (x2 = 2.071, s2 = 1.263) pre-quiz ‘remember’ scores.
When tested using a one-tailed test where µ1 – µ2 > 0.5, the results show a significant difference (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05); thus the 2D pre-quiz scores were significantly higher than the 3D scores for Bloom’s cognitive process of ‘remember’.
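The testing sequence just described (a normality check, a variance check, then an equal-variance independent t-test) can be sketched with SciPy. This is a minimal sketch only: the score arrays are randomly generated stand-ins, not the study's raw data, and Levene's test stands in for the F-test reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores_2d = rng.normal(2.44, 1.03, 55)  # hypothetical 2D-group scores
scores_3d = rng.normal(2.07, 1.26, 56)  # hypothetical 3D-group scores

# 1. D'Agostino-Pearson K2 omnibus test for normality, per group
_, p_norm_2d = stats.normaltest(scores_2d)
_, p_norm_3d = stats.normaltest(scores_3d)

# 2. Check homogeneity of variance (Levene's test is a robust choice)
_, p_var = stats.levene(scores_2d, scores_3d)

# 3. Independent t-test assuming equal variance (two-tailed)
t, p_two = stats.ttest_ind(scores_2d, scores_3d, equal_var=True)

# One-tailed p in the hypothesised direction is half the two-tailed
# value when the sign of t agrees with that direction
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
```

If either normality p-value fell below α, the analysis would fall back to a non-parametric test, as Appendix K does for the post-quiz ‘remember’ scores.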
===J.2 Understand Scores===
Figure 69 provides the pre-quiz results for Bloom’s understand cognitive process.
Figure 69. Results: Pre-Quiz Understand - Histogram & Bell Curve
The pre-quiz ‘understand’ scores were tested using the parametric individual t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.790, sek = -0.227, K2 p = 0.63248 and 3D: ses = 1.072, sek = 0.0563, K2 p = 0.50798) and the variance between the groups was not significant (F = 0.799, 2 tailed p = 0.410, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found a significant difference (t = -2.257, df = 109, two-tailed p = 0.0260, α = 0.05) between the results of the 2D (x1 = 1.254, s1 = 0.775) and 3D (x2 = 1.607, s2 = 0.867) pre-quiz ‘understand’ scores. The 3D pre-quiz scores were significantly greater than the 2D pre-quiz scores for the Bloom’s cognitive process of ‘understand’ (µ1 – µ2 < 0.5; t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
===J.3 Summary Pre-Quiz Remember and Understand===
Figure 70 provides an inverse cumulative normal distribution graph of the pre-quiz scores for Bloom’s cognitive processes ‘remember’ and ‘understand’. This graph displays what percentage of participants scored under a nominated score.
Figure 70. Results: Pre-Quiz Rem & Und - Inverse Cumulative Normal Distribution Graph
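A curve of this kind can be derived from a fitted normal distribution. The sketch below uses SciPy's normal CDF with the 2D ‘remember’ mean and standard deviation reported in J.1; the choice of nominated scores to evaluate is illustrative.

```python
from scipy import stats

mean_2d, sd_2d = 2.44, 1.032  # 2D pre-quiz 'remember' parameters from J.1

# Percentage of participants expected to score under each nominated score
pct_under = {s: stats.norm.cdf(s, loc=mean_2d, scale=sd_2d) * 100
             for s in (1, 2, 3, 4)}
```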
===J.4 Total Scores===
A graph of the results for the total score was provided in the main document in Chapter 4, Results, Pre-Quiz Results.
The pre-quiz total scores were tested using the parametric individual t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D, ses = 0.0218, sek = -1.087, K2 p = 0.49248 and 3D, ses = -0.574, sek = -0.425, K2 p = 0.671739) and the variance between the groups was not significant (F = 0.862, 2 tailed p = 0.586, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = 0.0455, df = 109, two-tailed p = 0.964, α = 0.05) between the results of the 2D (x1 = 3.690, s1 = 1.372) and 3D (x2 = 3.679, s2 = 1.479) pre-quiz total scores.
==Appendix K: Post-Quiz Score Results==
A graph of the results for the post-quiz score was provided in the main document in Chapter 4, Results; Post-Quiz Results, Hypothesis One and Two sections.
===K.1 Remember Scores===
The post-quiz ‘remember’ scores (H01) were tested using the non-parametric Mann-Whitney U test, as the results did not meet the assumptions for parametric testing, which require the scores to be normally distributed (2D: ses = -1.94259, sek = -1.10294, K2 p = 0.06976 and 3D: ses = -2.87371, sek = 1.02617, K2 p = 0.01161). The 3D scores failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05); therefore the scores from this group deviate significantly from a normal distribution.
When applied, the Mann-Whitney U test found no significant difference between the 2D and 3D post-quiz ‘remember’ scores; the average ranked scores (2D = 53.9364, 3D = 58.0268) gave U = 1653.5, W = 113.5, two-tailed p = 0.493107, α = 0.05.
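The non-parametric fallback described above can be sketched with SciPy's Mann-Whitney U implementation; the score lists below are illustrative, not the study data.

```python
from scipy import stats

post_2d = [3, 4, 5, 5, 2, 4, 3, 5, 4, 4]  # hypothetical post-quiz scores
post_3d = [5, 4, 5, 3, 5, 4, 5, 5, 3, 4]

# Two-tailed Mann-Whitney U test on the ranked scores
u, p = stats.mannwhitneyu(post_2d, post_3d, alternative="two-sided")
significant = p < 0.05
```

Because the test operates on ranks rather than raw scores, it makes no normality assumption, which is why it is the appropriate choice when the K2 test fails.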
===K.2 Understand Scores===
The post-quiz ‘understand’ scores (H02) were tested using the parametric independent t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.204408, sek = - 0.8453, K2 p = and 3D: ses = 1.016, sek = 0.016, K2 p = ) and the variance between the groups was not significant (F = 1.028, 2 tailed p = 0.920, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
===K.3 Total Scores===
The post-quiz total scores were tested using the parametric individual t-test as results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.158427, sek = -0.230644, K2 p = 0.8865884 and 3D: ses = -0.700083, sek = 0.404913, K2 p = 0.62133) and the variance between the groups was not significant (F = 1.10638, 2 tailed p = 0.70972, α = 0.05), therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05) between the results of the 2D (x1 = 10.9818, s1 = 2.46825) and 3D (x2 = 11.3571, s2 = 2.34659) post-quiz total scores.
==Appendix L: Instrument Reliability Results==
Table 15 provides the results of the instrument reliability tests performed on the achievement quiz results. The pre-quiz had 4 questions each for Bloom’s cognitive processes of ‘remember’ (rem) and ‘understand’ (und), a combined total of 8; the post-quiz had 10 questions each, a combined total of 20. The 2D group consisted of 55 participants and the 3D group of 56.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="5" align="center" |'''Achievement Quiz'''
|-
|align=center|
|align=center bgcolor="#DDADAF" colspan=2 |'''2D'''
|align=center bgcolor="lightblue" colspan=2 |'''3D'''
|-bgcolor="lightgrey"
|align=center|
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|-
|align=right |'''Pre-Quiz KR20'''
|align=right | 0.14
|align=right | -0.46
|align=right | 0.48
|align=right | -0.01
|- bgcolor="lightgrey"
|align=right |'''Post-Quiz KR20'''
|align=right | 0.53
|align=right | -0.01
|align=right | 0.54
|align=right | 0.10
|}
<p align="center">'''''Table 15. Instrument Reliability: Achievement Quiz'''''</p>
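The KR-20 coefficients in Table 15 follow the standard Kuder-Richardson formula for dichotomously scored items. A minimal sketch, assuming a 0/1 examinee-by-item response matrix (the matrix here is an illustrative toy, not the quiz data, and the sample-variance convention is an assumption):

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson 20 for an examinees x items matrix of 0/1 scores."""
    k = items.shape[1]
    p = items.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of totals
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])
coefficient = kr20(responses)  # ≈ 0.431 for this toy matrix
```

With so few items the coefficient is unstable, which is exactly the "grain of salt" caveat Frary raises in the discussion below the table.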
Table 16 provides the results of the instrument reliability tests performed on the post survey Likert scale results for questions 23, 24, 28 and 29.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="3" align="center" |'''Survey Likert Scales'''
|-
|align=center|
|align=center bgcolor="#DDADAF" |'''2D'''
|align=center bgcolor="lightblue" |'''3D'''
|-bgcolor="lightgrey"
|align=right |'''Cronbach's Alpha:'''
|align=right |0.73
|align=right |0.72
|}
<p align="center" >'''''Table 16. Instrument Reliability: Survey Likert Scales'''''</p>
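Cronbach's alpha for the Likert items can be computed analogously to KR-20; a minimal sketch, with an illustrative respondents-by-items matrix standing in for the actual survey responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(ratings)  # ≈ 0.93 for this toy matrix
```

Alpha generalises KR-20 to polytomous items, which is why it is the appropriate statistic for the Likert scales in Table 16.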
Frary (2008) provides the following definitions for interpreting these reliability (r) results:
*r = .90 or higher - High reliability. Suitable for making a decision about an examinee based on a single test score.
*r = .80 to .89 - Good reliability. Suitable for use in evaluating individual examinees if averaged with a small number of other scores of similar reliability.
*r = .60 to .79 - Low to moderate reliability. Suitable for evaluating individuals only if averaged with several other scores of similar reliability.
*r = .40 to .59 - Doubtful reliability. Should be used only with caution in the evaluation of individual examinees. May be satisfactory for determination of average score differences between groups.
'''Discussion'''
Instrument reliability tests the correlation of answers within a data set. The assumptions of the KR-20 test are that test items are of equal, or near equal, difficulty and intercorrelation (Lenke, Wellens, & Oswald, 1977). Consistent with these assumptions, the tests performed were split into Bloom’s cognitive processes of ‘remember’ and ‘understand’. Furthermore, as we were measuring the difference between the achievement results of two groups that received distinctly different treatment methods, the reliability tests were also divided into the 2D and 3D participant groups. These repeated divisions caused a problem for the application of the instrument reliability test, as in each division the total number of tested items is 10 or below. If the number of questions (or subjects) is too low within a group, then the results of the test, as Frary puts it, ‘should be taken with a grain of salt’. Frary (2008) provides further insight as to why:
“All reliability estimates are subject to considerable error when there are small numbers of examinees or test items. If there are fewer than, say, 25 examinees or 10 items, the reliability estimate must be "taken with a grain of salt." This phenomenon is especially noticeable when there are several scrambled forms of the test, each administered to a relatively small number of examinees. Then the KR20 coefficients may fluctuate considerably from one form to another.”
As we can see from the above results, there was considerable fluctuation in the reliability test results between the two groups: with the exception of the post-quiz ‘remember’ results, the figures varied considerably. These results appear to correlate with the findings discussed in Chapter 5, Discussion and Conclusion, where participants in both groups performed well for Bloom’s ‘remember’ but did not for ‘understand’. As Frary asserts, however, the test reliability measures are inconclusive indicators under this research’s circumstances.
==Appendix M: Qualitative Analysis: A Sample of Participants Comments==
===Virtual World Learning Experience===
*I found learning in world is a great way to find out about things you don’t normally think about finding out about
*You’re more likely to learn things in world than go to places to find out about things
*Things I usually don't take time to learn about, I can learn about them here
*I really felt as if I was sitting in a Room of Such listening to a lecturer
*It kind of felt personal.
*Kind of soothing but not putting me to sleep kind
*The lack of pressure that comes from a more traditional classroom atmosphere
*You can see if others are in the class with you
*Feel this way is better experience then the normal online way of taking classes
*I prefer learning alone and I would definitely prefer this type of learning to going to a classroom with other students.
*Seemed better than the typical classroom experience
*This is a fantastic experiment and I believe the potential to reach people with anything that will help them become better educated is a wonderful thing.
*Top idea to get people to learn about several topics
*I liked the idea; please invite me for more lessons
*By being part of this Survey Study, I have opened a door to seeking out further Studies, as well as Classes with SL
===Campus Experience===
*It was very easy to use
*It was very well laid out
*Easier to navigate through
*I liked the way different stages
*The environment was well set out
*Very user friendly
===Format===
*2D: Like liked the layout... it showed you a picture of the different types of bridges as well as giving you plenty of information on the subject then had a summary of all of it at the end
*2D: I wish that the pictures had been interactive so I could've clicked on the different sections of the bridges and gotten an individual description
*2D: Easy to follow slides
*2D: The presentation was actually enjoyable, however I believe that for this to be a truly effective learning tool the presentation speed must be made adjustable as people may find certain topics boring and just skip through them but may wish to spend longer periods of time on other material and wish to slow down to be more attentive.
*2D: the possibility to go back or control the slideshow
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
1916f50b633d914556a986b7bcf3de50d707f15d
372
318
2018-10-29T12:02:36Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<div class="nonumtoc">
=Appendices=
==Appendix A: Terminology==
{| width="15%"
|-
|'''Term'''
|'''Description'''
|}
;Virtual World:
:An artificial environment into which a person projects themselves. In the context in which this term is used in this paper (unless otherwise stated), it is an environment built from software programs.
;In World:
:The artificial environment, where the person operates within the virtual world.
;Real World:
:Reality, where the person operates in their physical world.
;Avatar:
:The digital representation of the person in the virtual world.
;Teleport:
:A method of transport used in the virtual world that moves the user from one location to another without their avatar having to walk there.
;Presence:
:A subjective measure. The feeling of being in the virtual world that disconnects the user from the physical world around them.
;Immersion:
:An objective measure. The interface between the virtual world and the user that places the person in the virtual world.
;MMORPG:
:Massively Multiplayer Online Role Playing Game. Often shortened to abbreviations such as RPG or MMO.
:This term is often used to describe the latest generation of online virtual world technology; many other terms are also used, such as MUVE and CVE.
;MUD:
:Multi User Dungeon. Early text based networked virtual worlds.
<p align="left">'''''Table 14 Terminology'''''</p>
==Appendix B: MMOG Analysis==
Bruce Woodcock (2008) is an independent writer and long-time player of MMOGs who has dedicated his research to tracking subscription numbers for online MMOGs. These figures are obtained from source and publicly available material, e.g. company financial reports, company media releases, media publications and, in some cases, an educated guess. Although not precise, these figures allow us to make a comparison of MMOGs that would otherwise not be available unless one were to undertake the same type of analysis that he has performed over the years. If anything these figures would be underreported, as they are based only upon user subscriptions and therefore do not include the numbers of users that have free access to the environments listed. These figures are current as at April 2008; for more information see http://www.mmogchart.com/.
Breakdown of the MMOGs listed in the chart:
{|border="1" width="40%" align=center
|-
|align=center |'''Name'''
|align=center |'''Current Active Subscriptions'''
|-
|align="right"|World of Warcraft
|align="right"|10,000,000
|-
|align="right"|RuneScape
|align="right"|1,200,000
|-
|align="right"|Lineage
|align="right"|1,056,177
|-
|align="right" |Lineage II
|align="right" |1,006,556
|-
|align="right" |Final Fantasy XI
|align="right" |500,000
|-
|align="right" |Dofus
|align="right" |452,000
|-
|align="right" |EVE Online
|align="right" |236,510
|-
|align="right" |EverQuest II
|align="right" |200,000
|-
|align="right" |EverQuest
|align="right" |175,000
|-
|align="right" |The Lord of the Rings Online
|align="right" |150,000
|-
|align="right" |City of Heroes / Villains
|align="right" |136,250
|-
|align="right" |Tibia
|align="right" |104,338
|-
|align="right" |Star Wars Galaxies
|align="right" |100,000
|-
|align="right" |Toontown Online
|align="right" |100,000
|-
|align="right" |Second Life
|align="right" |91,531
|-
|align="right" |Tabula Rasa
|align="right" |75,000
|-
|align="right" |Ultima Online
|align="right" |75,000
|-
|align="right" |Pirates of the Burning Sea
|align="right" |65,000
|-
|align="right" |Dark Age of Camelot
|align="right" |45,000
|-
|align="right" |Dungeons & Dragons Online
|align="right" |45,000
|-
|align="right" |Vanguard: Saga of Heroes
|align="right" |40,000
|-
|align="right" |Yohoho! Puzzle Pirates
|align="right" |34,000
|-
|align="right" |EverQuest Online Adventures
|align="right" |30,000
|-
|align="right" |The Matrix Online
|align="right" |30,000
|-
|align="right" |Era of Eidolon
|align="right" |27,000
|-
|align="right" |PlanetSide
|align="right" |20,000
|-
|align="right" |Asheron's Call
|align="right" |15,000
|-
|align="right" |Sphere
|align="right" |15,000
|-
|align="right" |Anarchy Online
|align="right" |12,000
|-
|align="right" |The Realm Online
|align="right" |12,000
|-
|align="right" |World War II Online
|align="right" |12,000
|-
|align="right" |Pirates of the Caribbean Online
|align="right" |10,000
|-
|align="right" |Neocron 2
|align="right" |6,000
|-
|align="right" |Horizons
|align="right" |5,000
|-
|align="right" |Mankind
|align="right" |5,000
|-
|align="right" |A Tale in the Desert
|align="right" |1,054
|}
==Appendix I: Second Life Demographics==
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Top 20 Countries by Active User Hours'''
|-bgcolor="wheat"
|align=center |'''Country'''
|align=center |'''Total Hours'''
|align=center |'''% of Total Hrs'''
|-
|align=right |United States
|align=right |14,451,180.28
|align=right |39.38%
|-
|align=right |Germany
|align=right | 3,505,103.93
|align=right | 9.55%
|-
|align=right | United Kingdom
|align=right | 2,424,987.88
|align=right | 6.61%
|-
|align=right | Japan
|align=right | 2,014,299.45
|align=right | 5.49%
|-
|align=right | France
|align=right | 1,972,875.00
|align=right | 5.38%
|-
|align=right |Netherlands
|align=right | 1,406,652.90
|align=right | 3.83%
|-
|align=right |Italy
|align=right | 1,397,571.12
|align=right | 3.81%
|-
|align=right |Brazil
|align=right | 1,361,741.72
|align=right | 3.71%
|-
|align=right |Canada
|align=right | 1,336,706.03
|align=right | 3.64%
|-
|align=right |Spain
|align=right | 1,083,716.70
|align=right | 2.95%
|-
|align=right |Australia
|align=right | 747,158.40
|align=right | 2.04%
|-
|align=right |Belgium
|align=right | 349,070.48
|align=right | 0.95%
|-
|align=right |Portugal
|align=right | 332,468.60
|align=right | 0.91%
|-
|align=right |Switzerland
|align=right | 277,448.60
|align=right | 0.76%
|-
|align=right |Poland
|align=right | 234,785.58
|align=right | 0.64%
|-
|align=right |Argentina
|align=right | 196,719.35
|align=right | 0.54%
|-
|align=right |Denmark
|align=right | 193,975.72
|align=right | 0.53%
|-
|align=right |Sweden
|align=right | 191,424.80
|align=right | 0.52%
|-
|align=right |Mexico
|align=right | 177,130.73
|align=right | 0.48%
|-
|align=right |Turkey
|align=right | 176,759.05
|align=right | 0.48%
|-
|align=right |Others
|align=right | 2,866,931.23
|align=right | 7.81%
|-
|align=center |'''Total'''
|align=right | '''36,698,707.57'''
|
|}
{| align="center" width="50%" style="background-color:#ffffcc; "
|-
|align="left" colspan="3" |'''Second Life Virtual Economy<br />
Demographic Summary Information<br />
Through November 2008'''
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Age Band'''
|-bgcolor="wheat"
|align=center |'''Age'''
|align=center |'''% of Total Hrs'''
|-
|align=right |13-17 (Teen Grid)
|align=right |0.32%
|-
|align=right |18-24
|align=right | 15.07%
|-
|align=right |25-34
|align=right | 34.51%
|-
|align=right |35-44
|align=right | 28.51%
|-
|align=right |45 plus
|align=right | 21.14%
|-
|align=right |Unknown
|align=right | 0.45%
|-bgcolor="lightblue"
|align="center" colspan="3" |'''Usage hours by Gender'''
|-
|align=right |Male
|align=right | 58.72%
|-
|align=right |Female
|align=right | 41.28%
|}
<p align="center">'''''Source: (Linden Lab, 2008b)'''''</p>
==Appendix J: Pre-Quiz Score Results==
This section discusses the pre-quiz scores significance test results.
===J.1 Remember Scores===
Figure 68 provides the pre-quiz results for Bloom’s ‘remember’ cognitive process.
Figure 68. Results: Pre-Quiz Remember - Histogram & Bell Curve
The pre-quiz ‘remember’ scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = -0.417, sek = -1.105, K2 p = 0.26747; 3D: ses = -0.595, sek = -1.54, K2 p = 0.2675) and the variance between the groups was not significantly different (F = 0.668, two-tailed p = 0.140, α = 0.05); therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = 1.665, df = 109, two-tailed p = 0.0987, α = 0.05) between the results of the 2D (x1 = 2.44, s1 = 1.032) and 3D (x2 = 2.071, s2 = 1.263) pre-quiz ‘remember’ scores.
When tested using a one-tailed test where µ1 – µ2 > 0.5, the results show a significant difference (t = 1.665, df = 109, one-tailed p = 0.0494, α = 0.05); thus the 2D pre-quiz scores were significantly higher than the 3D scores for Bloom’s cognitive process of ‘remember’.
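The equal-variance independent t-test used throughout these appendices can be reconstructed from the reported summary statistics alone. The sketch below (plain Python; the function name `pooled_t` is an assumption for illustration, not part of the study) recomputes the pre-quiz ‘remember’ comparison; the small gap from the reported t = 1.665 comes from using the rounded means and standard deviations:

```python
import math

def pooled_t(x1, s1, n1, x2, s2, n2):
    """Independent t-test statistic (equal variances) from summary statistics."""
    df = n1 + n2 - 2
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df
    # Standard error of the difference in means
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (x1 - x2) / se, df

# Reported pre-quiz 'remember' summaries (group sizes taken from Appendix L:
# 2D n = 55, 3D n = 56)
t, df = pooled_t(2.44, 1.032, 55, 2.071, 1.263, 56)
# t comes out near 1.68; with df = 109 the two-tailed 5% critical value is
# roughly 1.98, so the two-tailed result is not significant, matching the text.
```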
===J.2 Understand Scores===
Figure 69 provides the pre-quiz results for Bloom’s understand cognitive process.
Figure 69. Results: Pre-Quiz Understand - Histogram & Bell Curve
The pre-quiz ‘understand’ scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.790, sek = -0.227, K2 p = 0.63248; 3D: ses = 1.072, sek = 0.0563, K2 p = 0.50798) and the variance between the groups was not significantly different (F = 0.799, two-tailed p = 0.410, α = 0.05); therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found a significant difference (t = -2.257, df = 109, two-tailed p = 0.0260, α = 0.05) between the results of the 2D (x1 = 1.254, s1 = 0.775) and 3D (x2 = 1.607, s2 = 0.867) pre-quiz ‘understand’ scores. The 3D pre-quiz scores were significantly greater than the 2D pre-quiz scores for Bloom’s cognitive process of ‘understand’ (µ1 – µ2 < 0.5; t = -3.03167, df = 109, one-tailed p = 0.00138, α = 0.05).
===J.3 Summary Pre-Quiz Remember and Understand===
Figure 70 provides an inverse cumulative normal distribution graph of the pre-quiz scores for Bloom’s cognitive processes ‘remember’ and ‘understand’. This graph displays what percentage of participants scored under a nominated score.
Figure 70. Results: Pre-Quiz Rem & Und - Inverse Cumulative Normal Distribution Graph
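An inverse cumulative normal distribution graph of this kind can be read directly off the normal CDF. As a sketch (assuming the scores are well approximated by a normal distribution, and reusing the 2D pre-quiz ‘remember’ mean and standard deviation reported in J.1; the function name is illustrative):

```python
import math

def pct_under(score, mu, sigma):
    """Percentage of a normal distribution falling below `score` (normal CDF x 100)."""
    z = (score - mu) / sigma
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# 2D pre-quiz 'remember': mean 2.44, sd 1.032 (from Appendix J.1).
# Expected share of participants scoring under 2 is roughly a third.
p = pct_under(2.0, 2.44, 1.032)
```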
===J.4 Total Scores===
A graph of the results for the total score was provided in the main document in Chapter 4 Results, Pre-Quiz Results.
The pre-quiz total scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.0218, sek = -1.087, K2 p = 0.49248; 3D: ses = -0.574, sek = -0.425, K2 p = 0.671739) and the variance between the groups was not significantly different (F = 0.862, two-tailed p = 0.586, α = 0.05); therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = 0.0455, df = 109, two-tailed p = 0.964, α = 0.05) between the results of the 2D (x1 = 3.690, s1 = 1.372) and 3D (x2 = 3.679, s2 = 1.479) pre-quiz total scores.
==Appendix K: Post-Quiz Score Results==
A graph of the results for the post-quiz score was provided in the main document in Chapter 4 Results, Post-Quiz Results, Hypothesis One and Two sections.
===K.1 Remember Scores===
The post-quiz ‘remember’ scores (H01) were tested using the non-parametric Mann-Whitney U test, as the results did not meet the assumptions for parametric testing, which require the scores to be normally distributed (2D: ses = -1.94259, sek = -1.10294, K2 p = 0.06976; 3D: ses = -2.87371, sek = 1.02617, K2 p = 0.01161). The 3D scores failed the D’Agostino-Pearson (K2) normal distribution test (p = 0.01161, i.e. < 0.05); therefore the scores from this group deviate significantly from a normal distribution.
The Mann-Whitney U test found no significant difference between the 2D and 3D post-quiz ‘remember’ scores (mean ranked scores: 2D = 53.9364, 3D = 58.0268; U = 1653.5, W = 113.5, two-tailed p = 0.493107, α = 0.05).
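The non-parametric procedure used for H01 can be sketched from first principles: pool the scores, assign midranks to tied values, and derive U from the rank sum of one group. This is an illustrative implementation only (it returns the statistic, not a p-value, and the function name is an assumption):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic with midranks for ties (statistic only)."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    values = [v for v, _ in combined]
    n = len(values)
    rank_sum_a = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and values[j] == values[i]:
            j += 1
        midrank = (i + 1 + j) / 2          # average of the tied ranks i+1 .. j
        rank_sum_a += midrank * sum(1 for k in range(i, j) if combined[k][1] == 0)
        i = j
    n1, n2 = len(a), len(b)
    u1 = rank_sum_a - n1 * (n1 + 1) / 2    # U for group a
    return min(u1, n1 * n2 - u1)           # conventionally report the smaller U
```

For completely separated groups the statistic is 0, and it grows toward n1·n2/2 as the two score distributions overlap.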
===K.2 Understand Scores===
The post-quiz ‘understand’ scores (H02) were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.204408, sek = -0.8453, K2 p = and 3D: ses = 1.016, sek = 0.016, K2 p = ) and the variance between the groups was not significantly different (F = 1.028, two-tailed p = 0.920, α = 0.05); therefore the parametric independent t-test of equal variance was used to test for significance.
===K.3 Total Scores===
The post-quiz total scores were tested using the parametric independent t-test, as the results met the assumptions for parametric testing. Both groups’ results were normally distributed (2D: ses = 0.158427, sek = -0.230644, K2 p = 0.8865884; 3D: ses = -0.700083, sek = 0.404913, K2 p = 0.62133) and the variance between the groups was not significantly different (F = 1.10638, two-tailed p = 0.70972, α = 0.05); therefore the parametric independent t-test of equal variance was used to test for significance.
The results of an independent t-test found no significant difference (t = -0.8212, df = 119, two-tailed p = 0.4133, α = 0.05) between the results of the 2D (x1 = 10.9818, s1 = 2.46825) and 3D (x2 = 11.3571, s2 = 2.34659) post-quiz total scores.
==Appendix L: Instrument Reliability Results==
Table 15 provides the results of the instrument reliability tests performed on the achievement quiz results. In the pre-quiz there were 4 questions each for Bloom’s cognitive processes of ‘remember’ (rem) and ‘understand’ (und), for a combined total of 8; in the post-quiz there were 10 questions each, for a combined total of 20. The 2D group consisted of 55 participants and the 3D group of 56.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="5" align="center" |'''Achievement Quiz'''
|-
|align=center|
|align=center bgcolor="#DDADAF" colspan=2 |'''2D'''
|align=center bgcolor="lightblue" colspan=2 |'''3D'''
|-bgcolor="lightgrey"
|align=center|
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|align=center bgcolor="lightgrey" |'''Rem'''
|align=center bgcolor="lightgrey" |'''Und'''
|-
|align=right |'''Pre-Quiz KR20'''
|align=right | 0.14
|align=right | -0.46
|align=right | 0.48
|align=right | -0.01
|- bgcolor="lightgrey"
|align=right |'''Post-Quiz KR20'''
|align=right | 0.53
|align=right | -0.01
|align=right | 0.54
|align=right | 0.10
|}
<p align="center">'''''Table 15. Instrument Reliability: Achievement Quiz'''''</p>
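KR-20 values such as those in Table 15 can be reproduced from the raw 0/1 item responses. The following is a minimal sketch (the function name is illustrative, and the population-variance convention for the total score is an assumption; some texts use the sample variance instead):

```python
def kr20(responses):
    """Kuder-Richardson 20 for dichotomous (0/1) items.
    `responses`: one row per examinee, one 0/1 entry per item."""
    n = len(responses)
    k = len(responses[0])
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n  # population variance
    # Sum of item-level p*q terms (p = proportion answering item j correctly)
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)
```

Perfectly consistent items give 1.0, while uncorrelated items drive the coefficient toward 0 or below, which is why negative values such as the -0.46 in Table 15 can occur.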
Table 16 provides the results of the instrument reliability tests performed on the post-survey Likert scale results for questions 23, 24, 28 and 29.
{| align="center" width="60%" style="background-color:#ffffcc; "
|-
|colspan="3" align="center" |'''Survey Likert Scales'''
|-
|align=center|
|align=center bgcolor="#DDADAF" |'''2D'''
|align=center bgcolor="lightblue" |'''3D'''
|-bgcolor="lightgrey"
|align=right |'''Cronbach's Alpha:'''
|align=right |0.73
|align=right |0.72
|}
<p align="center">'''''Table 16. Instrument Reliability: Survey Likert Scales'''''</p>
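Cronbach’s alpha, reported in Table 16 for the Likert-scale items, generalises KR-20 to non-dichotomous responses. A minimal sketch under the same assumptions (illustrative function name, population-variance convention):

```python
def cronbach_alpha(responses):
    """Cronbach's alpha; `responses` is one row per respondent, one column per item."""
    n = len(responses)
    k = len(responses[0])

    def pvar(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)  # population variance

    item_var_sum = sum(pvar([row[j] for row in responses]) for j in range(k))
    total_var = pvar([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

For 0/1 data this reduces exactly to KR-20; values in the low 0.7s, as in Table 16, indicate low-to-moderate reliability on Frary’s scale above.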
Frary (2008) provides the following definitions for interpreting these reliability (r) results:
*r = .90 or higher - High reliability. Suitable for making a decision about an examinee based on a single test score.
*r = .80 to .89 - Good reliability. Suitable for use in evaluating individual examinees if averaged with a small number of other scores of similar reliability.
*r = .60 to .79 - Low to moderate reliability. Suitable for evaluating individuals only if averaged with several other scores of similar reliability.
*r = .40 to .59 - Doubtful reliability. Should be used only with caution in the evaluation of individual examinees. May be satisfactory for determination of average score differences between groups.
'''Discussion'''
Instrument reliability tests the correlation of answers within a data set. The assumption for the KR-20 test is that test items are of equal, or near equal, difficulty and intercorrelation (Lenke, Wellens, & Oswald, 1977). Consistent with this assumption, the tests performed were split into Bloom’s cognitive processes of ‘remember’ and ‘understand’. Furthermore, as we were measuring the difference between the achievement results of two groups that received distinctly different treatment methods, the reliability tests were also divided into the 2D and 3D participant groups. These repeated divisions caused a problem for the application of the instrument reliability test, as in each division the total number of tested items is 10 or below. If the number of questions (or subjects) within each group is too low, then the results of the test, as Frary puts it, ‘should be taken with a grain of salt’. Frary (2008) provides further insight as to why:
“All reliability estimates are subject to considerable error when there are small numbers of examinees or test items. If there are fewer than, say, 25 examinees or 10 items, the reliability estimate must be "taken with a grain of salt." This phenomenon is especially noticeable when there are several scrambled forms of the test, each administered to a relatively small number of examinees. Then the KR20 coefficients may fluctuate considerably from one form to another.”
As can be seen from the above results, there was considerable fluctuation in the reliability test results between the two groups. With the exception of the post-quiz ‘remember’ results, the figures varied considerably. These results seem to correlate with those discussed in the Chapter 5 Discussion and Conclusion: participants in both groups performed well for ‘remember’ but did not for Bloom’s ‘understand’.
As Frary asserts, however, the test reliability measures are inconclusive indicators under this research’s circumstances.
==Appendix M: Qualitative Analysis: A Sample of Participants Comments==
===Virtual World Learning Experience===
*I found learning in world is a great way to find out about things you don’t normally think about finding out about
*You’re more likely to learn things in world than go to places to find out about things
*Things I usually don't take time to learn about, I can learn about them here
*I really felt as if I was sitting in a Room of Such listening to a lecturer
*It kind of felt personal.
*Kind of soothing but not putting me to sleep kind
*The lack of pressure that comes from a more traditional classroom atmosphere
*You can see if others are in the class with you
*Feel this way is better experience then the normal online way of taking classes
*I prefer learning alone and I would definitely prefer this type of learning to going to a classroom with other students.
*Seemed better than the typical classroom experience
*This is a fantastic experiment and I believe the potential to reach people with anything that will help them become better educated is a wonderful thing.
*Top idea to get people to learn about several topics
*I liked the idea; please invite me for more lessons
*By being part of this Survey Study, I have opened a door to seeking out further Studies, as well as Classes with SL
===Campus Experience===
*It was very easy to use
*It was very well laid out
*Easier to navigate through
*I liked the way different stages
*The environment was well set out
*Very user friendly
===Format===
*2D: Like liked the layout... it showed you a picture of the different types of bridges as well as giving you plenty of information on the subject then had a summary of all of it at the end
*2D: I wish that the pictures had been interactive so I could've clicked on the different sections of the bridges and gotten an individual description
*2D: Easy to follow slides
*2D: The presentation was actually enjoyable, however I believe that for this to be a truly effective learning tool the presentation speed must be made adjustable as people may find certain topics boring and just skip through them but may wish to spend longer periods of time on other material and wish to slow down to be more attentive.
*2D: the possibility to go back or control the slideshow
*3D: The mix of the audio and the bulleted points made it easier to follow for visual
*3D: Wonderfully laid out. The visuals were great! They conveyed the most important points very well.
*3D: lots of examples
*3D: Wish there was a way I could stop the presentation or lecture and go back to review what was just said.
===Information content===
*2D: Very informative and interesting
*2D: very easy to comprehend
*2D: It was not too technical
*2D: I never gave it much thought at to the Construction of Bridges, one droves on them, over them etc, and you certainly hear in recent years of the collapse of bridges etc, I found the topic informative although a lot to digest.
*3D: It was informative.
*3D: Need more infor need more infor need more infor
*3D: I have never stopped to think about bridges before. Now how am I going to drive over a bridge without thinking about what it is?
*3D: The theory of the subject was well thought out, even though to my knowledge the subject was well informative, it could have been explained in more of laymen terms for those who really don't understand the makeup of bridges.
*3D: I found myself getting lost a bit here and there with the terminology
*3D: Overall a bit too complicated for someone with no previous knowledge coming into the presentation, but still worthwhile.
*3D: I might have liked a little better explanation of how compression and tension work at the beginning so as I could understand the physics of it a little better.
===Learning===
*2D: I liked learning something new
*2D: I got to learn something I did not know.
*2D: Learned more about bridges
*2D: It was good to learn about the understanding of bridges
*3D: learn something about a subject I never knew something about
*3D: Suddenly, unforeseeably, I was studying the physics of bridges! I could never have guessed when I woke up today that I would learn this.
*3D: What a well thought out presentation, Now that I know something about bridges. I have something new to take to Real life with me
*3D: Combined my hobby with learning
===Facets of 3D Learning===
*3D: The way the bridges could actually be seen materialized and color coded was great.
*3D: It was visually appealing versus reading a book or listening to a live lecture.
*3D: It's a great learning key.
*3D: I liked the ability to see a 3D diagram of the topic.
*3D: The examples floating in space allowed for a better view of the material
*3D: The use of "real" object as opposed to drawings helped with any problems in understanding
*3D: The images were 3D making it a little easier to get an idea of what each bridge was.
*3D: The 3D rendered models illustrating the different types of bridges & how loads were carried were a great tool.
*3D: With the help of bridge models I was able to get a better understanding about what the lecture was talking about.
*3D: Just the fact that the examples where suspended in space, allowed me a better understanding from all angles.
*3D: While it may not quite stick on the first pass, I feel as if this method DEFINITELY provided a clear, direct delivery of the subject matter. I could see this type of presentation doing much more for someone with at least a rudimentary knowledge of the subject matter.
===Instruction===
*2D: It would have been fun to have an "instructor" to ask questions of. :)
*2D: lack of contact or clarification of issues
*3D: There was no place to pause the instructor, or ask further questions about the subject matter
*3D: A live guide would've been very helpful to clear up any confusion along the way, though it isn't necessary.
*3D: The inability to ask for clarification or further explanation.
*3D: No interactive question-answer
===Focus===
====In world distractions====
*2D: Distracting avatars
*2D: my club shine glitzier owners tag got in the way
*2D: I was distracted by my own curiosity of the technology
*3D: Disruptions from others in chat
*3D: Noise or excessive gestures of certain people.
*3D: Some others in the room were very disruptive
*3D: Interruptions from people who don't take the education seriously.
*3D: It would be idea to separate people in the education process as some people make noises during the presentation that distracts from the education.
===Outside world distractions===
*2D: Just the fact it’s the weekend and so many distractions in the house
*2D: Thought it was interesting I may watch it again later, if it’s alright, my daughter kept talking to me during it and I kept getting distracted but I did try and pay attention.
*2D: I could do other things at my desk and could answer the phone!
*2D: I guess it’s not good to be able to talk to others during a class where you're supposed to learn something [yahoo messaging]
*2D: Could do things at my desk
*2D: "real life" interruptions the telephone ringing
*3D: Interruptions from real life
===Time===
*2D: Being new, it held my attention for the whole time
*2D: It went a bit slow.
*2D: Speed of the presentation was a little slow
*2D: The narrator was a bit monotone which caused me to get bored a couple of times.
*2D: I lost focus for a little.
*2D: found myself zoning out a little bit.
*2D: voice got monotonous
*3D: It actually held my attention! Quite the accomplishment if I do say so myself!
*3D: It was fast.
*3D: The soothing voice of the narrator kept me engaged.
*3D: Easy to stay concentrated
*3D: The images kept mind from wondering.
*3D: It was exceptionally quick
*3D: Just a little fast for me a time or two
*3D: There were a few times it went a little fast
===Navigation===
*2D: I didn't see the words the best way cause of the chair.
*2D: Seating made vision difference which had to be adjusted more than once
*3D: Hard to put screen right
*3D: Models that were rotating sometimes blocked the text
*3D: I had to situate my view to read the board
*3D: Display was blocked many times.
*3D: Had to peek round the 3D bridges to read the text
</div>
[[Category:Learning In Virtual Worlds]]
[[Category:Book - Real Learning in Virtual Worlds]]
<noinclude>
{{BackLinks}}
</noinclude>
1916f50b633d914556a986b7bcf3de50d707f15d
Business Process Reengineering - Introduction
0
286
320
319
2018-10-29T11:41:57Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this article. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, the style and the detail provided, as the original was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time. As the charting method is fairly involved, we will also be providing examples of systems charted using the method. This chapter is the introduction chapter, which provides a reasonably good overview of the approach.
</noinclude>
==Definition, Purposes & Outcomes==
=== Definition ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BusinessComponentObjectives.png]]
</div>
</td>
</tr>
</table>
Business Process Reengineering (BPR) ''is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective.'' Purpose and Objective differ in that the purpose describes the reason for the process while the objective is the reason for the reengineering of the process. The objective is generally the optimisation of the quality - cost relationship, but may be any other objective(s) defined by the stakeholders of the processes revised.
Hammer, a popular author of reengineering texts, defines reengineering as: ''the fundamental rethinking and radical redesign of business processes to bring about dramatic improvements in performance''. Essentially, he argues that BPR is about major change in an organisation, yet perhaps this reflects a rather naive preoccupation with “big-is-better”. BPR can be about constrained, well focussed, small scale redesign as much as about monolithic reconstruction.
BPR is not new, although many consultants in the field try to claim otherwise. It is simply one more evolutionary step in a long stream of management change processes that includes Statistical Quality Control, TQM, Internal Audit, Work & Job Redesign, Goal Focussed Management, Workflow Management, Systems Analysis, etc. The theoretical foundation in BPR is quite old and can be seen particularly in the work in Systems Analysis undertaken at the University of Lancaster since 1969. What is new about BPR is its holistic view of the organisation and its attempt to capture the management philosophies that preceded it into a single integrated method.
Perhaps due in part to its conglomerate nature there is little standardisation among BPR approaches nor agreement on what is, or is not, BPR. With a few notable exceptions, the literature tends to be long on promises and case studies claiming stratospheric success but short on detail. This manual attempts to provide both a definition of BPR and an integrated strategy of analytic methods for performing it.
Although significantly different in approach from the work in systems analysis of the University of Lancaster, the development of our method owes a fundamental debt to the conceptual insight of that team. We have borrowed concepts, however, from a wide domain of disciplines ranging from accounting to computer science, and from psychology to marketing. It is not intended that the analytic tools of the method be cast in stone by this manual. No approach is perfect, and if this method is not seen to embrace its own continuous improvement then it will be as flawed as the business systems it purports to improve.
=== Purpose of BPR ===
In a BPR exercise we consider all aspects of managerial responsibility - from the organisation design through to the procedures and practices adopted. The BPR project does not attempt to define the purpose or the objectives of the organisation's systems; rather, once they are defined, it provides the machine to deliver that purpose and those objective(s).
The method used in the reengineering process must deliver a complete description of that machine. This includes the organisational structure, the behavioural paradigm, duties, controls, performance indicators, policies, procedures, data management, continuous improvement procedures, computer systems, etc.
It is easy to confuse the activity of BPR with that of computer systems implementation, since many of the forces driving a BPR exercise beg computerisation as the easiest way to achieve apparently dramatic improvement. This is a mistake. Implementing computerised solutions is not the purpose of BPR, although a computerised solution is one of the tools a reengineer may use to implement some part of the processes and components of a BPR design.
Nor should we rely on computer solutions in all cases. While it is often true that the computerisation of a process will deliver significant improvement in the ratio of output volume and quality to human effort (input), when viewed from a holistic perspective (which includes infrastructure, investment, opportunity cost, and solution responsiveness to change) the computerised solution may not always be as attractive as first thought. Notwithstanding these comments, a planned change in information systems provides a common and sensible catalyst for the BPR programme.
Essentially, the purpose of BPR is to build business systems able to deliver the organisation’s mission while optimising some given combination of objectives. In building the system, we must apply appropriate analytic techniques and appropriate implementation strategies. The weaker the constraints on the process applied by management - ie the wider the range of options left on the table for consideration - the more successful (in terms of optimising the objectives) the outcome is likely to be. The purpose of the system either will or will not be satisfied by the system design options made available - the quality of that delivery is measured by the objectives.
=== Outcomes ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BPR Components.png]]
</div>
</td>
</tr>
</table>
The result of the BPR project is a working system tuned to optimise some combination of objectives in delivering the stakeholder’s purpose. It is defined by a set of system descriptions, or views of the system, which consider, categorise and structure the matter from a number of angles.
Illustrated in the figure are the key components of a system description produced by the BPR method detailed in this manual. There are many differences between the approach presented here and the conventional literature on BPR, both in method and outcome. Henceforth we shall refer to this approach as the Bishop BPR (or BBPR) method.
The method produces a process and organisational rework that is naturally integrated with risk and compliance governance systems and (in its detailed delivery) uses a unique charting system which blends computational and human processes together in a common structured and testable form.
We have used and progressively improved the method detailed in this text since the late 1980's and it has been applied in the delivery of consultancies to several hundred organisations covering the non-profit, government and corporate sectors. It has been applied in its pure form as a process reengineering system, in reduced forms as an internal audit systems audit process and a business systems design model (for the design and development of business computing systems), and with various strategy enhancements as a business strategy planning tool. While this author has brought it to each consulting organisation with which he has worked or which he has led over the years, it has benefitted from the ideas and contributions of many colleagues.
We shall explore the BBPR method throughout this text and provide the tools and techniques necessary to deliver the BBPR system description. Here we provide a brief introduction to the ten key descriptive outputs in the figure:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Descriptive Output </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Key Performance Indicators & Benchmarks / Targets
</td>
<td>
Performance management - how we directly manage and monitor the achievement of the system’s purposes
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Internal System Integrity - how we directly monitor and manage the achievement of the system’s objectives.
</td>
</tr>
<tr>
<td>
Organisation Design
</td>
<td>
The objects/entities and their roles with their managerial, behavioural and reporting relationships identified.
</td>
</tr>
<tr>
<td>
Decision Tree
</td>
<td>
The tree (or Information Map) charts the decisions required by entities in the system, the relationships between the decisions and their information needs
</td>
</tr>
<tr>
<td>
Process & Workflow Charts
</td>
<td>
The sequence of activities making up the functional components of a system.
</td>
</tr>
<tr>
<td>
Event Calendar
</td>
<td>
The timing of events and their cycles and the processes they trigger
</td>
</tr>
<tr>
<td>
Client Provider Service Agreements
</td>
<td>
The objects/entities comprising the system seen as pairs of clients and providers (of services, data, goods, etc) emphasising their respective duties. The approach establishes notional contracts or service agreements which outline each entity’s responsibilities in the client provider relationship.
</td>
</tr>
<tr>
<td>
Data Management
</td>
<td>
The data stores in the system, what the data represents and how this data is managed
</td>
</tr>
<tr>
<td>
Continuous Improvement System
</td>
<td>
The strategy for delivering system improvement on a continuous basis.
</td>
</tr>
<tr>
<td>
Implementation and Change Strategy
</td>
<td>
The approach to managing the implementation of the reengineered system in the organisation and particularly managing people through the change process.
</td>
</tr>
</table>
The system description is only the ‘record’ of the real outcome of the BBPR approach - that of business performance improvement through better business processes. The BBPR method produces a system designed to optimise certain predefined objectives (such as cost of inputs to quality) while the system description attempts to formalise that system and provide the mechanisms for monitoring performance, and maintaining and tuning that system.
In the model organisation, the approach starts with the strategic plan of the organisation (or unit) being reviewed and uses that plan’s components (vision, mission, key result areas, critical success factors, strategies, key performance indicators, targets and timeframe) to focus the design effort with purpose and objectives. In the real organisation, planning is generally something less than perfect, so we must employ a wider net in defining the focus of the BPR exercise. Once armed with a focus, a wide variety of sources and analytic tools are employed to build a business system which will best achieve management’s plans.
==The Analytic Method & Its Tools==
===The Structure===
At the heart of the BBPR method is a set of ‘analytic tools’ (methods) that help define views of a system that highlight the particular properties in which we are interested. The key components are illustrated in Figure 1.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1"
width="100%" >
<tr>
<td>
<div class="center">
[[Image:BPRAnalyticStructure.png]]
</div>
</td>
<td>
<table >
<tr>
<td>
The analytic method is based on a simple premise:
A System is comprised of Recursive Objects only. Any system can be described by four types of Objects: Entities, Data Stores, Maps (Processes), and Quality Managers (Control/Performance Criteria).
The simple dataflow diagram of Figure 3 shows a basic system. Entity A provides data to Entity B via a single process (under the control of Entity C) which maps the data from one data store to another. The performance of the mapping process is managed by the quality control process under the control of Entity D. The quality control process is approximately equivalent to an engineering feedback loop.
</td>
</tr>
<tr>
<td>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:BPC4KeyChartObj.png]]
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
Mapping is a computational and mathematical term which describes the mechanism by which data is transformed from one state or form to another. In a business process that transformation might be as simple as the act of transcribing an invoice from its physical (eg. paper based) state to an electronic record in an accounts payable system through the process of data entry. The data in its input state may be said to have been mapped to another state through some process of transformation.
The computer engineering reader will recognise the similarity of the diagram to a dataflow model.
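The four object types and the mapping premise can be sketched in code. The following is a minimal, hypothetical illustration only - the class and function names are ours, not part of any published BBPR notation - showing an entity's paper invoice being mapped from an in-tray data store into an accounts payable data store:

```python
# Illustrative sketch of three of the four BBPR chart object types.
# All names here are invented for this example.

class Entity:
    """A person, machine or external process - a 'given' in the analysis."""
    def __init__(self, name):
        self.name = name

class DataStore:
    """A place where data resides: a file, in-tray, database table, etc."""
    def __init__(self, name):
        self.name = name
        self.records = []

def transcribe_invoice(paper_invoice):
    """A map: transforms data from one state (paper) to another (electronic)."""
    return {"supplier": paper_invoice["supplier"],
            "amount": paper_invoice["amount"],
            "state": "electronic"}

clerk = Entity("data entry clerk")
in_tray = DataStore("in-tray")
payables = DataStore("accounts payable")
in_tray.records.append({"supplier": "Acme", "amount": 120.0, "state": "paper"})

# The map moves each invoice from one data store to the other,
# transforming its state as it goes.
for invoice in in_tray.records:
    payables.records.append(transcribe_invoice(invoice))

print(payables.records[0]["state"])  # electronic
```

The fourth object type, the quality manager, is discussed under The Process Representation below.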
The logical starting point of a BPR exercise may seem to be the Performance Criteria definition (assuming that the overall purpose of the system being improved is already known), but it is important to note that each of the four definition activities should continue concurrently throughout the project. It is not unusual for the Performance Assumptions to change as a result of the other BPR activities, and virtually certain where the project is a Strategic Planning exercise.
This mixing of strategy planning and BPR may at first seem a little unusual, but the impact of the BPR analysis can be to cause a fundamental rethink of the business strategy itself. Where the focus is merely to re-design a specific, targeted transactional process such a strategic impact is, perhaps, less likely, but where the targeted business process is the core of the business, such an impact is surprisingly common.
In particular the KPI definition both commences and completes a project. The table lists these key analytic tools and provides an overview of the activity. These tool classes are typical of those employed, but not necessarily the only ones appropriate to any given project.
===The Modelling Tools===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Class </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
KPI Assessment
</td>
<td>
In an ideal organisation the planning documents establish the focus for all activity. Our search for the focus of the BPR must therefore begin with the planning and policy role of management. Where available, sources to be reviewed include:
<ul>
<li> Statement of System Objectives
<li> Corporate Plan
<li> Budgets
<li> Benchmarks
</ul>
Organisations are rarely ideal and other techniques will need to be applied, depending on the culture of the organisation being reviewed. Such techniques may include SWOTC (Strengths, Weaknesses, Opportunities, Threats, and Constraints) analysis, benchmarking, corporate goal setting, interview, etc and may need to be undertaken to establish the purpose and key objectives of the system being reviewed.
Armed with this information the first view of the Key Performance Indicators (KPIs) appropriate to the system should be definable. In a sense, the KPIs are like the gauges and alarms of an airplane, car or any other mechanical device. They alert the system’s ‘pilot’ to the status of the machinery, and allow rapid identification and adjustment of the system if anything ‘goes wrong’. In this sense the selection of the correct KPIs is critical: if there is no gauge for a problem occurring it may not be detected until the problem is obvious without the help of a gauge - and possibly too late to be repaired.
In this first, top level assessment the KPIs will generally be whole-of-system measures. As other components of the BBPR are resolved (such as the Process Mapping and the Client Provider Analysis) the process detail level will emerge which becomes the organisation’s operational ‘alarm system’. The BBPR has a specific design paradigm called Active Control Management to implement this KPI based control system in a cost efficient manner.
</td>
</tr>
<tr>
<td>
Client-Provider Analysis
</td>
<td>
A technique adopted from TQM which classifies the entities creating, managing and consuming data in the system as clients (data recipients) or providers (data suppliers) of one another. In performing the analysis we turn to information sources such as:
<ul>
<li> External Clients & Providers
<li> Internal Clients & Providers
<li> Organisation Structure
<li> Roles & Duty Statements
<li> Implied Contracts
</ul>
While it is important to understand the organisational structure as it stands - because, among other things, it dictates the client-provider relationships - it should not necessarily bind the designer. An organisational model reflects legislative, cultural and historic traditions that may be critical to retain, as well as (possibly) many years of legitimate experience among the management team in the industry and market in which you are working. It must not simply be disregarded in the BPR process in favour of radical change.
Indeed, the author generally advises against too ambitious an organisational change, unless change is part of the culture or intended management strategy. In some organisations, frequent re-organisation is part of the management ethos, and such an approach is as legitimate and successful a management model as any other. One must, nevertheless, be careful in taking the existing structure (or management ethos!) as a given - particularly where the organisation is seeking a competitive edge beyond mere marginal improvement in efficiency or quality.
The BBPR method uses its own method of analysing organisational structures called The Organisational Community Network Model (which is one of the reasons that the BPR method frequently impacts organisational design). This approach is appropriate even where the organisation will substantially retain its original shape after the BPR project, as it leads to a highly efficient and focussed "desktop" test process architecture, and, where the option for organisational redesign is on the table, can lead to a very radical outcome.
</td>
</tr>
<tr>
<td>
Stakeholder Analysis
</td>
<td>
The direct stakeholders are addressed in the Client Provider analysis, while the indirect stakeholders are addressed here - in the Stakeholder Analysis.
Essentially the indirect stakeholders provide the organisation with drivers & constraints. Typical sources include:
<ul>
<li> Legislative Obligations
<li> Cultural Expectations
<li> Reporting Obligations
</ul>
</td>
</tr>
<tr>
<td>
Data Store Catalogue
</td>
<td>
The catalogue is the BPR equivalent to a database administrator’s data dictionary. It describes all the data stored by the system, and the data stores themselves. It specifies the access rights, custodianship rules, data integrity standards and the static relationships between data stores.
Data stores include all the data managed by the system and methods of temporary or permanent storage. Data stores include electronic (abstract) and physical storage such as documents, files, filing cabinets, in trays, bins, etc.
Data Integrity Standards must be established system wide to which data stores adhere. The standards should be consistent with those applied by quality managers.
</td>
</tr>
<tr>
<td>
Process Mapping
</td>
<td>
Perhaps the most involved of all the activities of the BPR exercise. Process mapping is a general name for a variety of procedural analysis and design activities. The information sources include:
<ul>
<li> Functional Description
<li> Cradle to Grave Tracing - System Walkthrough
<li> Manuals
<li> List of Data Sources & Destinations
<li> Client / Provider Mapping
<li> Data Load Analysis (transaction volumes, processing rates, etc)
</ul>
The key activity during process mapping is the production of the Data flow diagrams and supporting documentation. This is done in two streams simultaneously:
<ol>
<li> Existing systems
<li> Redesigned Systems
</ol>
The data flow charts form the basis to the reengineering. They combine all aspects of the other analytic tools and describe the algorithm of the system.
In process mapping we treat all processes of a system as operating concurrently and control their timing and behaviour through messages, which take the form of either data or events.
The process map is not complete until the system data loading has been assessed for each process. The data load analysis involves examining data volumes and processing times, throughput assessment, reliability rates, etc.
</td>
</tr>
<tr>
<td>
Decision Tree / Information Mapping
</td>
<td>
The system handles not just data but information. Data becomes information when it exhibits certain quality characteristics: it must be appropriate to its purpose and reliable (where reliability implies standards of timeliness, accuracy, completeness, etc). Information mapping involves matching the data managed by a system to the decisions that must be made in operating that system. It requires, in part, the construction of a detailed decision tree spanning the entities in the system over time.
Necessarily, it also implies the existence of an event calendar which should link into the data flow diagrams. The information map includes the information needs of the quality managers, and may be expressed in whole or in part through the Active Control Management design paradigm detailed later in this text.
The information map will require consideration of issues including:
<ul>
<li> Information Requirements
<li> Event Calendar
<li> Reporting Obligations
<li> Performance Control Management System (eg ACM)
</ul>
</td>
</tr>
</table>
===Organisational Representation (Introduction)===
When we think of organisational representation, we traditionally think of the hierarchical organisational chart. Resembling an inverted tree, the organisational chart provided by almost all charting packages represents a cross between a representation of physical or geographic position and reporting lines - and tells us very little about how a business organisation is really organised. At best it leads to a bureaucratic, semi-accurate organisational view, and at worst it is wildly incorrect, such as in matrix organisations.
As with many traditional diagramming systems, it is horrendously inadequate for all but the grossest simplification of an organisation.
In the BBPR, we use a Community Network model which provides far richer analysis and directly represents the positioning of an organisation within its market and community.
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
Community domains can be defined as required for the purpose of the analysis, but in the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is both the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
You can read more about the Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis [[The Stakeholder Community Network Model|Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis here]].
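The eight-community classification above can be made concrete with a small sketch. The membership rules below are invented purely for illustration (the SCNM03 model itself defines the communities, not these rules), and the function names are ours:

```python
# Toy sketch of assigning stakeholders to the eight SCNM03 communities.
# The classification rules here are illustrative assumptions only.

COMMUNITIES = {"clients", "customers", "suppliers", "partners",
               "custodians", "workforce", "governance", "public"}

def classify(stakeholder):
    """Assign a stakeholder to one or more communities (toy rules)."""
    roles = set()
    if stakeholder.get("receives_service"):
        roles.add("clients")           # she who receives
    if stakeholder.get("pays"):
        roles.add("customers")         # she who pays
    if stakeholder.get("contracted_labour"):
        roles.add("workforce")         # contractors sit with the workforce
    # Unclassified members default to the public - the community the
    # model endeavours to move into more manageable communities.
    return roles or {"public"}

# A government agency case: the citizen receives the service,
# while a separate funder pays for it.
print(classify({"receives_service": True}))   # {'clients'}
print(classify({"pays": True}))               # {'customers'}
print(classify({}))                           # {'public'}
```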
===The Process Representation (Introduction)===
The full process charting model forms a language that can be represented either diagrammatically or descriptively. There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements are for diagrammatic efficiency. The core of the charting system revolves around only a few symbols and the full model merely expands on these to provide a richer descriptive set, and more analytic detail with fewer diagrammatic elements. The full model is described in [[Business Process Reengineering - Process Charting|advanced charting]].
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataFlow.png]]
</div>
</td>
</tr>
</table>
In the figure, '''''data flows''''' along, and in the direction of, the arrows between the entities, data stores and maps, while control data flows principally into, and out of, the quality manager. The crossed-rectangular shapes are entities while the open ended rectangular shapes are (file) data stores. The maps and quality managers are shown by circles.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Entity.png]]
</div>
</td>
</tr>
</table>
'''''Entities''''' are equivalent to people, machines, or processes external to the system being examined. In a sense they are givens in the system analysis, in that their functioning is assumed to be of a fixed standard and is excluded from redesign. Those aspects of behaviour that can be redesigned are represented by the other three object types.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataStore.png]]
</div>
</td>
</tr>
</table>
'''''Data Stores''''' are objects in which data resides from time to time. The stores are not the actual data itself, merely a representation of it. In the ‘object oriented analysis world’, data exists in the form of messages between objects - for example, two people (entities) talking to each other (exchanging messages). Messages are essentially transient and so, for data to be available for any length of time, it must be stored. Data stores include documents, files, database records, desk in-trays, etc.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Map.png]]
</div>
</td>
</tr>
</table>
'''''Maps''''' are objects which perform an operation on data other than storing it. They transport data, change data, analyse data, update a database record, produce a report, authorise a transaction, etc. The term ‘map’ means ‘mapping data from one state to another’. Maps perform the transformations of a system, but they are concerned only with data. For data to become information it must have the added dimension of quality.
'''''Quality Managers''''' are objects which administer the performance of the system. The quality manager does not transform the data handled by the system, but rather manages the system itself. Quality managers rely on the KPIs of the system and its component parts measuring variance from plan and performing the appropriate remedial action such as tuning Map parameters or escalating the problem.
In one sense the '''''Quality Manager''''' is a kind of process, but its responsibility is to modify the behaviour of the system in accordance with the purpose and objectives of the system and is therefore fundamentally different from a Map which represents the embodiment of that purpose. In another sense the Quality Manager is a kind of reactive data store - it both stores data and responds to it. The quality manager deals principally with control data, although this is by no means exclusive or necessary.
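The quality manager's feedback-loop behaviour can be sketched as follows. This is a hypothetical illustration, not BBPR-prescribed code; the class name, thresholds and remedial actions are all assumptions made for the example:

```python
# Sketch of a quality manager as a feedback loop over a map's KPI.
# Names, thresholds and actions are illustrative assumptions only.

class QualityManager:
    def __init__(self, kpi_target, tolerance):
        self.kpi_target = kpi_target
        self.tolerance = tolerance
        self.history = []          # it also stores control data (a reactive store)

    def review(self, kpi_actual):
        """Measure variance from plan and choose a remedial action."""
        variance = kpi_actual - self.kpi_target
        self.history.append(variance)
        if abs(variance) <= self.tolerance:
            return "no action"
        elif abs(variance) <= 2 * self.tolerance:
            return "tune map parameters"   # adjust the system, not the data
        else:
            return "escalate"              # beyond this manager's mandate

# e.g. a daily invoice-throughput KPI with a target of 100 +/- 5
qm = QualityManager(kpi_target=100.0, tolerance=5.0)
print(qm.review(103.0))   # no action
print(qm.review(108.0))   # tune map parameters
print(qm.review(150.0))   # escalate
```

Note that the manager modifies the behaviour of the system (tuning, escalating) rather than transforming the data flowing through it, which is what distinguishes it from a map.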
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:RecursiveShapes.png]]
</div>
</td>
</tr>
</table>
'''''Objects are recursive''''', and therefore may contain more objects of the same or different type. For example, a file contains documents (both data stores), a document contains fields (more data stores), an organisation may contain people (both entities), an organisation (entity) may contain functions (maps), while a business cycle such as Purchasing (a map) may contain an entire system of roles (entities), procedures (maps), KPI measures (quality managers) and documents (data stores).
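This recursion is essentially a composite structure, and can be sketched as below. The sketch is illustrative only - the class name and the particular containment hierarchy are our assumptions, not part of the charting notation:

```python
# Sketch of object recursion: any chart object may contain more objects
# of the same or different type (a composite pattern).

class ChartObject:
    def __init__(self, kind, name, children=None):
        self.kind = kind      # "entity", "data store", "map" or "quality manager"
        self.name = name
        self.children = children or []

    def count(self):
        """Total number of objects in this object, including itself."""
        return 1 + sum(child.count() for child in self.children)

# A data store containing data stores: file -> document -> field.
field = ChartObject("data store", "invoice number")
document = ChartObject("data store", "invoice", [field])
file_store = ChartObject("data store", "invoice file", [document])

# A business cycle (a map) containing an entire system of objects.
purchasing = ChartObject("map", "purchasing cycle", [
    ChartObject("entity", "purchasing officer"),
    ChartObject("map", "raise order"),
    ChartObject("quality manager", "order KPI check"),
    file_store,
])
print(purchasing.count())   # 7
```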
Processes (maps & quality managers) are concurrent. This means that, unless restrained by a lack of input (data to process) or awaiting an event, each process is trying to operate at the same time as every other process. This reflects reality - people do not follow a neat sequential order when interacting with one another unless explicitly constrained to do so. Instead, they operate simultaneously, at different speeds to one another, and in self-chosen patterns. To model the world correctly we must also model this behaviour.
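The idea that processes run concurrently, restrained only by the availability of input messages, can be sketched with a simple cooperative scheduler. This is an assumed illustration (the function and queue names are ours), not the BBPR simulation engine:

```python
# Sketch of concurrent processes restrained only by their inputs.
# Each process acts whenever its inbox holds a message; there is no
# fixed sequential order imposed between processes.

from collections import deque

def run(processes, queues, steps=10):
    """Round-robin over all processes; each acts only if input exists."""
    log = []
    for _ in range(steps):
        for name, (process, inbox, outbox) in processes.items():
            if queues[inbox]:                       # restrained by lack of input
                msg = queues[inbox].popleft()
                queues[outbox].append(process(msg))
                log.append(name)
    return log

queues = {"raw": deque(["invoice1", "invoice2"]),
          "entered": deque(), "posted": deque()}
processes = {
    "data entry": (lambda m: m + ":entered", "raw", "entered"),
    "posting":    (lambda m: m + ":posted", "entered", "posted"),
}
log = run(processes, queues)
print(list(queues["posted"]))
# ['invoice1:entered:posted', 'invoice2:entered:posted']
```

Both processes are always "trying" to run; the posting process simply idles until the data entry process has produced something for it to consume.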
You can read more about the process charting method in [[Business Process Reengineering - Process Charting]]
===The Analysis Tools===
The designed system will be documented with data flow charts, client-provider “performance agreements”, ACM control checklists, a decision-to-data-source matrix, and task schedule sheets cross-referenced to the data-flow diagrams. These facilities can be provided either electronically or on paper, as desired by the client. The degree to which the processes and documentation can be automated is restricted only by the client’s computer system capabilities and software.
====Process Representation Using Software====
There are a number of practical charting tools that can be used. For 2D representation, we recommend either ABC Flowcharter or Visio, while for 3D client walkthrough of a designed system we recommend a MMORG such as SecondLife (http://SecondLife.com), or TrueSpace (http://www.caligari.com/).
With respect to the 2D tools, both of the suggested tools have their strengths and weaknesses. Visio has excellent Microsoft desktop integration, and is directly supported by a number of finance and business applications as a business process modelling environment. ABC Flowcharter has (in our view) a shorter learning curve, an excellent interface, and good integration with the MS documentation tools.
In choosing a 2D tool you should consider whether it supports diagrams:
* consisting of many linked pages
* with recursive (self referential) structures
* with graphic object drill through (ie. you can select an object such as a process which summarises many sub-processes and link to one or more pages that represent the steps in the process)
* containing graphic objects with unique id's, text descriptions, and other user defined data attributes that can be stored with them (eg transaction volumes, costs, probabilities, risk assessment, etc)
* editable splines for connecting shapes (bendable curved lines)
* with point and click editing
* with user defined shapes and image import
* that represent the Bishop Phillips Process Modelling shapes.
* containing URL links at least at the graphical object (including lines) level (ie. linking an object to an internet/intranet page)
* that can be imported into text documentation and presentation tools (MS Word / MS PowerPoint, etc) compatible with your business environment (standard desktop)
* that ideally can be scripted with a scripting language that allows active simulation or calculations of events and transactions occurring (optional - but a good idea)
* that can be generated directly from an electronic drafting whiteboard (optional, but saves you a lot of time).
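The attribute-carrying graphic objects described in the criteria above can be sketched as a simple data structure. This is a hedged illustration only; the field names and example values are assumptions, not part of any charting tool's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class ChartObject:
    """One graphical object on a process chart (names here are illustrative)."""
    object_id: str                                    # unique id across all linked pages
    description: str                                  # text description of the object
    url: str = ""                                     # optional internet/intranet link
    drill_through: list = field(default_factory=list)  # pages detailing the sub-process
    attributes: dict = field(default_factory=dict)     # eg. volumes, costs, probabilities

# A process object that drills through to a page of sub-process steps
invoice_entry = ChartObject(
    object_id="P-17",
    description="Enter supplier invoice",
    url="http://intranet/procedures/P-17",
    drill_through=["page-42"],
    attributes={"transactions_per_day": 350, "error_rate": 0.02},
)
```

Storing user-defined attributes (volumes, costs, probabilities, risk assessments) against each object is what later allows scripted simulation or calculation over the chart.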
3D tools are a much newer approach. The biggest advantage of a 3D modelling tool is that you can 'walk' the client through the business process. Possibly the only practical and right-priced ones available at the moment are SecondLife and Caligari TrueSpace. Over the years we have tried a number of approaches to this idea; until the advent of SecondLife, we built our 3D models in TrueSpace. TrueSpace is a serious 3D modelling environment and, while simple to learn as 3D graphical modelling environments go, it is not a tool for novices. Although it produces spectacular 3D models, it is less suited to walking the client through the model than to presenting a canned 3D visualisation of the business model. Recently it has gained a MORG add-on/representation, and linked with one of a number of games engines it can be used quite successfully as a walk-through environment.
With the advent of SecondLife (and the growing number of similar MORG systems that are either appearing now or soon to appear on the market), a more practical and faster solution is available (albeit less visually stunning in production quality). A SecondLife based model allows you and your client to literally enter the model as people and walk or fly around the components of your system, watching transactions visibly flow through the process, events occur, control systems filter errors, and output be produced at varying transaction rates. The building interface is fast and simple to learn, and the scripting environment allows you to rapidly simulate many different scenarios.
With such an approach you can literally have your client see the transactions flow through a virtual representation of a system (a bit like the movie 'Tron'), or build a representation of their physical environment (such as a building, or office floor) and simulate the behaviour of the people and the control system operating. The world-wide scale of MORG users means you can contract the development work to inexpensive professional builders, instead of building it yourself.
The great weakness of these environments is that they are not yet real-time in terms of construction (whereas a 2D chart can (almost) be built in real time as your client describes their processes), and documentation in conventional 2D media is not a natural consequence of a 3D simulation (whereas 2D charts can be included in text based documentation with ease).
In choosing a 3D tool you should consider:
* speed of construction of 3D elements (ideally you will need a 'primitive' rather than 'mesh' or 'nurbs' based building solution for speed)
* scripting language and particle system support (essential)
* ability to script primitives (objects) concurrently on a massive scale
* message passing support
* ability to create avatars (or primitars) that can interact with the model (ie. walk around inside it)
* availability of low cost developers/builders
* ease of installation of appropriate client software
* ownership and permanence of the 3D models built
* support for importation of textures (graphic images), sounds, animations, 3D objects, movies, etc.
* real time in-world multi participant speech support
* simplicity of visitor navigation (i.e. how hard is it for a first-time user to just walk around in the 3D environment)
* URL (web page) linking
* URL (web server data sending and receiving - eg Can you request and receive data from an off system database).
* web page display on objects (not commonly available)
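Several of the criteria above (concurrently scripted primitives, message passing, transactions flowing through a process while a control system filters errors) can be illustrated with a minimal message-passing sketch. This is a toy stand-in for an in-world scripting environment, not any actual SecondLife or TrueSpace API; all names are invented for illustration.

```python
from collections import deque

class Primitive:
    """A scripted object that reacts to messages (a simplified stand-in
    for an in-world scripted primitive)."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # called with (primitive, message, send)

def run(primitives, initial):
    """Deliver queued messages until none remain; return the delivery log."""
    by_name = {p.name: p for p in primitives}
    queue = deque(initial)                              # (target_name, message) pairs
    send = lambda target, msg: queue.append((target, msg))
    log = []
    while queue:
        target, msg = queue.popleft()
        log.append((target, msg))
        by_name[target].handler(by_name[target], msg, send)
    return log

# A transaction enters a process; the control system filters errors
# and only valid transactions reach the output bin.
def process_step(self, msg, send):
    if msg != "bad-txn":
        send("output-bin", msg)

steps = [Primitive("process", process_step),
         Primitive("output-bin", lambda self, msg, send: None)]
log = run(steps, [("process", "txn-1"), ("process", "bad-txn")])
# log records every delivery; only "txn-1" reaches the output bin
```

The same queue-and-handler shape scales to many primitives scripted at once, which is what makes varying the transaction rate in a walkthrough simulation straightforward.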
====Analysis Support====
A number of analytic tools or design paradigms are incorporated into the ABPR. A few of these are introduced in the table:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Or Design Paradigm
</th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Data Flow analysis
</td>
<td>
A method of charting systems enhanced by BPC with concepts drawn from process mapping, predicate calculus, TQM, CPM (Operations Research), Entity-Relationship modelling, and a number of other analytic methods. This method excels at depicting complex data flows and process interactions simply. It traps control issues, timing constraints, events and information flows.
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Although not critical to the process, ACM provides significant advantages in process efficiency. It is a BPC-specific control design philosophy based on experience in the areas of Corporate Governance and of organisations adopting control devolution and/or multi-skilling. ACM represents a significant shift from the control paradigm of periodic audit review with heavy transaction-based testing conventionally adopted by Internal Audit, and from traditional views of control system design relying on segregation of duties.
To build an ACM control system, we begin by expanding the definition of controls beyond accuracy, authorisation, completeness (etc.) to include process timeliness, achievement of business plan targets and other business objectives. Next we identify the controls appropriate for monitoring, and we collect all the associated control data into a common recording format (and ideally an automated storage system, such as MS-Access). Lastly we build a reporting framework for system performance monitoring built on the quality managers.
ACM produces control compliance information in a steady stream for the senior executive and board, rather than the intermittent or cyclic audit reviews often used. The compliance component of any Internal Audit unit is re-focussed to ensuring the ongoing reliability of the control compliance reports. The control system is integrated into the business processes using the Client-Provider model developed at the start of the project. ACM reporting can be automated, if desired.
</td>
</tr>
<tr>
<td>
Network Organisation Reduction
</td>
<td>
The process of defining the organisation into the community network structure forces the reduction of many diverse strategies and procedures into a clearly identifiable set of activities required for one of 11 broad service communities. The networks imply the stakeholders in an enumerable set of collective Client Provider Service Agreements.
</td>
</tr>
<tr>
<td>
Process Dictionary
</td>
<td>
Used to assist in the identification of opportunities for streamlining cross- and intra-organisation systems, the Process Dictionary catalogues and describes each process within any business function in accordance with an agreed selection of descriptive terms.
In this way, it assists in highlighting common processes and in assessing whether it is possible and appropriate for these to be combined or shared in some suitable form.
</td>
</tr>
</table>
==Summary: Characteristics of the BBPR Method==
Business Process Reengineering (BPR) is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective. This chapter has provided an introduction to the concept of BPR and an overview of the ABPR method. Both of these will be developed throughout the text.
Essentially BPR represents the focussing of an enormous body of theory and expertise underpinning management science into a single, all-powerful redesign strategy. Such a panacea does not exist, and we must be careful to use BPR where the fundamental organisational characteristics are present. These might include:
<ul>
<li> A discernible consistent set of purpose(s) and objective(s) exist
<li> Design options are not restricted out of the solution set (ie. an acceptable solution is achievable despite imposed constraints)
<li> Senior management authorise and staff support the project and the process
<li> The analytic tools match the problem set
<li> BPR Consultant has credibility with the staff
</ul>
The BPR process is best seen as a framework encompassing a wide array of analytic tools and organisation/management design paradigms. Many of these tools and paradigms can be expected to change over time as management theory is revised, while some are central to the BBPR framework. The central tools and paradigms include:
<ul>
<li> KPI’s & Quality Management
<li> Data Flow Analysis
<li> Object Oriented Process Engineering
<li> Client Provider Analysis
<li> Information Mapping
<li> Data Cataloguing
</ul>
As an extremely simplified explanation, the BBPR method uses KPI’s to focus the system, and classifies the proponents in the system as clients and/or providers of data (etc) to one another. The client/provider relationships, are revised using a separate information (decision) map reflecting the information needs of the direct and indirect stakeholders. With the revised client/provider relationships defined and the data and information needs catalogued, process maps can be defined which reflect only what is needed to implement the system.
For the sake of clarity, in this introductory chapter, we have excluded many of the more complex issues facing BPR. One of these is the positioning of organisation design in a BPR exercise. It is a significant issue as it is inextricably linked to the culture of the organisation being reengineered. It is usually included to some extent in the design options, but rarely is the organisation design entirely at the discretion of the reengineer. Accordingly, we must treat it as both a given structural component of the client provider analysis and an output of the process mapping (design phase).
Clearly the process mapping will impact the organisation structure, which will in turn affect the client provider relationships, while the client provider relationships affect the process mapping, and so on. It is this circular relationship, and a number of similar ones among the analytic components, that necessitates the simultaneous analysis and design activity of the ABPR method.
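One way to picture this simultaneous activity is as an iterative fixed point: each analysis pass is re-run until no view of the system changes any further. The sketch below is a hedged illustration of that idea only; the pass functions and view names are invented placeholders, not the actual BBPR tools.

```python
def reengineer(views, passes, max_rounds=20):
    """Re-run each analysis pass over the named views until they stabilise.

    views:  dict mapping a view name to its current analysis output
    passes: functions taking the views dict and returning an updated copy
    """
    for _ in range(max_rounds):
        before = dict(views)
        for analysis_pass in passes:
            views = analysis_pass(views)
        if views == before:          # every view stable: analysis has converged
            return views
    raise RuntimeError("analysis did not converge")

# Toy passes with a circular dependency: the process map grows toward the
# client map, and the client map follows the process map.
def widen_process(v):
    v = dict(v); v["process"] = min(5, v["clients"] + 1); return v

def widen_clients(v):
    v = dict(v); v["clients"] = v["process"]; return v

stable = reengineer({"process": 0, "clients": 0}, [widen_process, widen_clients])
# stable == {"process": 5, "clients": 5}
```

The point of the toy is only that mutually dependent views settle after several rounds, which is why the method runs its analyses concurrently rather than in a fixed sequence.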
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this article. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, the style and the detail provided, as the original was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time. As the charting method is fairly involved, we will also be providing examples of systems charted using the method. This chapter is the introduction chapter, which provides a reasonably good overview of the approach.
</noinclude>
==Definition, Purposes & Outcomes==
=== Definition ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BusinessComponentObjectives.png]]
</div>
</td>
</tr>
</table>
Business Process Reengineering (BPR) ''is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective.'' Purpose and Objective differ in that the purpose describes the reason for the process while the objective is the reason for the reengineering of the process. The objective is generally the optimisation of the quality - cost relationship, but may be any other objective(s) defined by the stakeholders of the processes revised.
Hammer, a popular author of reengineering texts, defines reengineering as: “the fundamental rethinking and radical redesign of business processes to bring about dramatic improvements in performance”. Essentially, he argues that BPR is about major change in an organisation, yet perhaps this reflects a rather naive preoccupation with “big is better”. BPR can be about constrained, well focussed, small scale redesign as much as about monolithic reconstruction.
BPR is not new, although many consultants in the field try to claim otherwise. It is simply one more evolutionary step in a long stream of management change processes that includes Statistical Quality Control, TQM, Internal Audit, Work & Job Redesign, Goal Focussed Management, Workflow Management, Systems Analysis, etc. The theoretical foundation in BPR is quite old and can be seen particularly in the work in Systems Analysis undertaken at the University of Lancaster since 1969. What is new about BPR is its holistic view of the organisation and its attempt to capture the management philosophies that preceded it into a single integrated method.
Perhaps due in part to its conglomerate nature there is little standardisation among BPR approaches nor agreement on what is, or is not, BPR. With a few notable exceptions, the literature tends to be long on promises and case studies claiming stratospheric success but short on detail. This manual attempts to provide both a definition of BPR and an integrated strategy of analytic methods for performing it.
Although significantly different in approach from the work in systems analysis of the University of Lancaster, the development of our method owes a fundamental debt to the conceptual insight of that team. We have borrowed concepts, however, from a wide domain of disciplines ranging from accounting to computer science, and from psychology to marketing. It is not intended that the analytic tools of the method be cast in stone by this manual. No approach is perfect, and if this method is not seen to embrace its own continuous improvement then it will be as flawed as the business systems it purports to improve.
=== Purpose of BPR ===
In a BPR exercise we consider all aspects of managerial responsibility - from the organisation design through to the procedures and practices adopted. The BPR project does not attempt to define the purpose or the objectives of the systems of the organisation; rather, once they are defined, it provides the machine to deliver that purpose and those objectives.
The method used in the reengineering process must deliver a complete description of that machine. This includes the organisational structure, the behavioural paradigm, duties, controls, performance indicators, policies, procedures, data management, continuous improvement procedures, computer systems, etc.
It is easy to confuse the activity of BPR with that of computer systems implementation, since many of the forces driving a BPR exercise beg computerisation as the easiest way to achieve apparently dramatic improvement. This is a mistake. Implementing computerised solutions is not the purpose of BPR, although a computerised solution is one of the tools a reengineer may use to implement some part of the processes and components of a BPR.
Nor should we rely on computer solutions in all cases. While it is often true that the computerisation of a process will deliver significant improvement in the ratio of output volume and quality to human effort (input), when viewed from a holistic perspective (which includes infrastructure, investment, opportunity cost, and solution responsiveness to change) the computerised solution may not always be as attractive as first thought. Notwithstanding these comments, a planned change in information systems provides a common and sensible catalyst for the BPR programme.
Essentially, the purpose of BPR is to build business systems able to deliver the organisation’s mission while optimising some given combination of objectives. In building the system, we must apply appropriate analytic techniques and appropriate implementation strategies. The weaker the constraints on the process applied by management - ie the wider the range of options left on the table for consideration - the more successful (in terms of optimising the objectives) the outcome is likely to be. The purpose of the system either will or will not be satisfied by the system design options made available - the quality of that delivery is measured by the objectives.
=== Outcomes ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BPR Components.png]]
</div>
</td>
</tr>
</table>
The result of the BPR project is a working system tuned to optimise some combination of objectives in delivering the stakeholder’s purpose. It is defined by a set of system descriptions, or views of the system, which consider, categorise and structure the matter from a number of angles.
Illustrated in the figure are the key components of a system description produced by the BPR method detailed in this manual. There are many differences between the approach presented here and the conventional literature on BPR, both in method and outcome. Henceforth we shall refer to this approach as the Bishop BPR (or BBPR) method.
The method produces a process and organisational rework that is naturally integrated with risk and compliance governance systems and (in its detailed delivery) uses a unique charting system which blends computational and human processes together in a common structured and testable form.
We have used and progressively improved the method detailed in this text since the late 1980's, and it has been applied in the delivery of consultancies to several hundred organisations covering the non-profit, government and corporate sectors. It has been applied in its pure form as a process reengineering system, in reduced forms as an internal audit systems audit process and a business systems design model (for design and development of business computing systems), and with various strategy enhancements as a business strategy planning tool. While this author has brought it to each consulting organisation with which he has worked or which he has led over the years, it has benefitted from the ideas and contributions of many colleagues.
We shall explore the BBPR method throughout this text and provide the tools and techniques necessary to deliver the BBPR system description. Here we provide a brief introduction to the ten key descriptive outputs in the figure:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Descriptive Output </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Key Performance Indicators & Benchmarks / Targets
</td>
<td>
Performance management - how we directly manage and monitor the achievement of the system’s purposes
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Internal System Integrity - how we directly monitor and manage the achievement of the system’s objectives.
</td>
</tr>
<tr>
<td>
Organisation Design
</td>
<td>
The objects/entities and their roles with their managerial, behavioural and reporting relationships identified.
</td>
</tr>
<tr>
<td>
Decision Tree
</td>
<td>
The tree (or Information Map) charts the decisions required by entities in the system, the relationships between the decisions and their information needs
</td>
</tr>
<tr>
<td>
Process & Workflow Charts
</td>
<td>
The sequence of activities making up the functional components of a system.
</td>
</tr>
<tr>
<td>
Event Calendar
</td>
<td>
The timing of events and their cycles and the processes they trigger
</td>
</tr>
<tr>
<td>
Client Provider Service Agreements
</td>
<td>
The objects/entities comprising the system seen as pairs of clients and providers (of services, data, goods, etc) emphasising their respective duties. The approach establishes notional contracts or service agreements which outline each entity’s responsibilities in the client provider relationship.
</td>
</tr>
<tr>
<td>
Data Management
</td>
<td>
The data stores in the system, what the data represents and how this data is managed
</td>
</tr>
<tr>
<td>
Continuous Improvement System
</td>
<td>
The strategy for delivering system improvement on a continuous basis.
</td>
</tr>
<tr>
<td>
Implementation and Change Strategy
</td>
<td>
The approach to managing the implementation of the reengineered system in the organisation and particularly managing people through the change process.
</td>
</tr>
</table>
The system description is only the ‘record’ of the real outcome of the BBPR approach - that of business performance improvement through better business processes. The ABPR method produces a system designed to optimise certain predefined objectives (such as cost of inputs to quality) while the system description attempts to formalise that system and provide the mechanisms for monitoring performance, and maintaining and tuning that system.
In the model organisation, the approach starts with the strategic plan of the organisation (or unit) being reviewed and uses that plan’s components (vision, mission, key result areas, critical success factors, strategies, key performance indicators, targets and timeframe) to focus the design effort with purpose and objectives. In the real organisation, planning is generally something less than perfect, so we must employ a wider net in defining the focus of the BPR exercise. Once armed with a focus, a wide variety of sources and analytic tools are employed to build a business system which will best achieve management’s plans.
==The Analytic Method & Its Tools==
===The Structure===
At the heart of the ABPR method is a set of ‘analytic tools’ (methods) that help define views of a system that highlight the particular properties in which we are interested. The key components are illustrated in Figure 1
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1"
width="100%" >
<tr>
<td>
<div class="center">
[[Image:BPRAnalyticStructure.png]]
</div>
</td>
<td>
<table >
<tr>
<td>
The analytic method is based on a simple premise:
A System is comprised of Recursive Objects only. Any system can be described by four types of Objects: Entities, Data Stores, Maps (Processes), and Quality Managers (Control/Performance Criteria).
The simple dataflow diagram of Figure 3 shows a basic system. Entity A provides data to Entity B via a single process (under the control of Entity C) which maps the data from one data store to another. The performance of the mapping process is managed by the quality control process under the control of Entity D. The quality control process is approximately equivalent to an engineering feedback loop.
</td>
</tr>
<tr>
<td>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:BPC4KeyChartObj.png]]
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
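The four object types and the basic system of Figure 3 can be sketched as plain classes. This is a hedged illustration only; the class and field names are assumptions for this sketch, not the BBPR notation itself.

```python
class Entity:
    """An actor in the system (a person, unit or external party)."""
    def __init__(self, name): self.name = name

class DataStore:
    """A store of data, electronic or physical (file, in-tray, database)."""
    def __init__(self, name): self.name, self.items = name, []

class Map:
    """A process mapping data from a source store to a target store,
    under the control of an entity."""
    def __init__(self, source, target, controller, transform):
        self.source, self.target = source, target
        self.controller, self.transform = controller, transform
    def run(self):
        for item in self.source.items:
            self.target.items.append(self.transform(item))

class QualityManager:
    """The feedback loop: checks a store's contents against a criterion."""
    def __init__(self, store, criterion): self.store, self.criterion = store, criterion
    def exceptions(self):
        return [i for i in self.store.items if not self.criterion(i)]

# Entity A provides data to Entity B via a process under the control of
# Entity C; a quality manager (under Entity D) monitors the mapping.
source, target = DataStore("in-tray"), DataStore("ledger")
source.items = ["invoice 1", "invoice 2"]
Map(source, target, Entity("C"), str.upper).run()
qm = QualityManager(target, str.isupper)
# qm.exceptions() is empty: every mapped item passes the quality criterion
```

The quality manager here is the engineering feedback loop of the text: it reads the mapped output and reports exceptions back, rather than taking part in the mapping itself.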
Mapping is a computational and mathematical term which describes the mechanism by which data is transformed from one state or form to another. In a business process that transformation might be as simple as the act of transcribing an invoice from its physical (eg. paper based) state to an electronic record in an accounts payable system through the process of data entry. The data in its input state may be said to have been mapped to another state through some process of transformation.
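The invoice transcription just described can be written as a small mapping function, with the source state on one side and the target state on the other. The field names below are assumptions for illustration, not a real accounts payable schema.

```python
def map_invoice(paper_invoice: dict) -> dict:
    """Map an invoice from its physical (transcribed) state to an
    electronic accounts payable record."""
    return {
        "supplier": paper_invoice["supplier_name"].strip().upper(),
        "amount_cents": round(float(paper_invoice["amount"]) * 100),
        "invoice_no": paper_invoice["invoice_no"],
    }

record = map_invoice({"supplier_name": " Acme Pty Ltd ",
                      "amount": "1234.50",
                      "invoice_no": "INV-0042"})
# record["amount_cents"] == 123450
```

Note that the mapping both changes form (paper fields to an electronic record) and normalises state (trimmed, upper-cased supplier; amount in cents), which is exactly the sense of "transformation" used above.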
The computer engineering reader will recognise the similarity of the diagram to a dataflow model.
The logical starting point of a BPR exercise may seem to be the Performance Criteria definition (assuming that the overall purpose of the system being improved is already known), but it is important to note that each of the four definition activities should continue concurrently throughout the project. It is not unusual for the Performance Assumptions to change as a result of the other BPR activities, and it is virtually certain where the project is a Strategic Planning exercise.
This mixing of strategy planning and BPR may at first seem a little unusual, but the impact of the BPR analysis can be to cause a fundamental rethink of the business strategy itself. Where the focus is merely to redesign a specific, targeted transactional process such a strategic impact is, perhaps, less likely; but where the targeted business process is the core of the business, such an impact is surprisingly common.
In particular the KPI definition both commences and completes a project. The table lists these key analytic tools and provides an overview of the activity. These tool classes are typical of those employed, but not necessarily the only ones appropriate to any given project.
===The Modelling Tools===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Class </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
KPI Assessment
</td>
<td>
In an ideal organisation the planning documents establish the focus for all activity. Our search for the focus of the BPR must therefore begin with the planning and policy role of management. Where available, sources to be reviewed include:
<ul>
<li> Statement of System Objectives
<li> Corporate Plan
<li> Budgets
<li> Benchmarks
</ul>
Organisations are rarely ideal and other techniques will need to be applied, depending on the culture of the organisation being reviewed. Such techniques may include SWOTC (Strengths, Weaknesses, Opportunities, Threats, and Constraints) analysis, benchmarking, corporate goal setting, interview, etc and may need to be undertaken to establish the purpose and key objectives of the system being reviewed.
Armed with this information the first view of Key Performance Indicators (KPI) appropriate to the system should be definable. In a sense, the KPI’s are like the gauges and alarms of an airplane, car or any other mechanical device. They alert the system’s ‘pilot’ to the status of the machinery, and allow rapid identification and adjustment of the system if anything ‘goes wrong’. In this sense the selection of the correct KPI’s is critical: if there is no gauge for a problem occurring it may not be detected until the problem is obvious without the help of a gauge - and possibly too late to be repaired.
In this first, top level assessment the KPIs will generally be whole-of-system measures. As other components of the ABPR are resolved (such as the Process Mapping and the Client Provider Analysis) the process detail level will emerge, which becomes the organisation’s operational ‘alarm system’. The ABPR has a specific design paradigm called Active Control Management to implement this KPI based control system in a cost efficient manner.
</td>
</tr>
<tr>
<td>
Client-Provider Analysis
</td>
<td>
A technique adopted from TQM which classifies the entities creating, managing and consuming data in the system as clients (data recipients) or providers (data suppliers) of one another. In performing the analysis we turn to information sources such as:
<ul>
<li> External Clients & Providers
<li> Internal Clients & Providers
<li> Organisation Structure
<li> Roles & Duty Statements
<li> Implied Contracts
</ul>
While it is important to understand the organisational structure as it stands (because, among other things, it dictates the client-provider relationships), it should not necessarily bind the designer. An organisational model reflects legislative, cultural and historic traditions that may be critical to retain, as well as (possibly) many years of legitimate experience among the management team in the industry and market in which you are working. It must not simply be disregarded in the BPR process in favour of radical change.
Indeed, the author generally advises against too ambitious an organisational change, unless change is part of the culture or intended management strategy. In some organisations, frequent re-organisation is part of the management ethos, and such an approach is as legitimate and successful a management model as any other. One must, nevertheless, be careful in taking the existing structure (or management ethos!) as a given - particularly where the organisation is seeking a competitive edge beyond mere marginal improvement in efficiency or quality.
The BBPR method uses its own method of analysing organisational structures called the Organisational Community Network Model (which is one of the reasons that the BPR method frequently impacts organisational design). This approach is appropriate even where the organisation will substantially retain its original shape after the BPR project, as it leads to a highly efficient and focussed "desktop" test process architecture, and, where the option for organisational redesign is on the table, can lead to a very radical outcome.
</td>
</tr>
<tr>
<td>
Stakeholder Analysis
</td>
<td>
The direct stakeholders are addressed in the Client Provider analysis, while the indirect stakeholders are addressed here - in the Stakeholder Analysis.
Essentially the indirect stakeholders provide the organisation with drivers & constraints. Typical sources include:
<ul>
<li> Legislative Obligations
<li> Cultural Expectations
<li> Reporting Obligations
</ul>
</td>
</tr>
<tr>
<td>
Data Store Catalogue
</td>
<td>
The catalogue is the BPR equivalent to a data base administrator’s data dictionary. It describes all the data stored by the system, and the data stores themselves. It specifies the access rights, custodianship rules, data integrity standards and the static relationships between data stores.
Data stores include all the data managed by the system and methods of temporary or permanent storage. Data stores include electronic (abstract) and physical storage such as documents, files, filing cabinets, in trays, bins, etc.
Data Integrity Standards must be established system wide to which data stores adhere. The standards should be consistent with those applied by quality managers.
</td>
</tr>
<tr>
<td>
Process Mapping
</td>
<td>
Perhaps the most involved of all the activities of the BPR exercise. Process mapping is a general name for a variety of procedural analysis and design activities. The information sources include:
<ul>
<li> Functional Description
<li> Cradle to Grave Tracing - System Walkthrough
<li> Manuals
<li> List of Data Sources & Destinations
<li> Client / Provider Mapping
<li> Data Load Analysis (transaction volumes, processing rates, etc)
</ul>
The key activity during process mapping is the production of the Data flow diagrams and supporting documentation. This is done in two streams simultaneously:
<ol>
<li> Existing systems
<li> Redesigned Systems
</ol>
The data flow charts form the basis to the reengineering. They combine all aspects of the other analytic tools and describe the algorithm of the system.
In process mapping we treat all processes of a system as operating concurrently and control their timing and behaviour through messages, which take the form of either data or events.
The process map is not complete until the system data loading has been assessed for each process. The data load analysis involves examining data volumes and processing times, throughput assessment, reliability rates, etc.
</td>
</tr>
<tr>
<td>
Decision Tree / Information Mapping
</td>
<td>
The system handles not just data but information. Data becomes information when it exhibits certain quality characteristics. To qualify as information, data must be appropriate to its purpose and reliable (where reliability implies standards of timeliness, accuracy, completeness, etc). Information mapping involves matching the data managed by a system to the decisions that must be made in operating that system. It requires, in part, the construction of a detailed decision tree spanning the entities in the system over time.
Necessarily, it also implies the existence of an events calendar which should link into the data flow diagrams. The information map includes the information needs of the quality managers, and may be expressed in whole or in part through the Active Control Management design paradigm detailed later in this text.
The information map will require consideration of issues including:
<ul>
<li> Information Requirements
<li> Event Calendar
<li> Reporting Obligations
<li> Performance Control Management System (eg ACM)
</ul>
</td>
</tr>
</table>
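The Data Store Catalogue described above lends itself to a simple structured record set. The sketch below is purely illustrative; the field names (`custodian`, `access_rights`, `integrity_standards`, etc.) are our own shorthand for the catalogue attributes discussed, not part of any BBPR specification.

```python
from dataclasses import dataclass, field

@dataclass
class DataStore:
    """One entry in a Data Store Catalogue (illustrative fields only)."""
    store_id: str
    description: str
    medium: str                  # e.g. "database", "filing cabinet", "in-tray"
    custodian: str               # who holds custodianship of the store
    access_rights: list = field(default_factory=list)
    integrity_standards: list = field(default_factory=list)
    related_stores: list = field(default_factory=list)  # static relationships

# A minimal catalogue with one electronic and one physical store
catalogue = {
    "DS01": DataStore("DS01", "Customer master records", "database",
                      "IT Custodians", ["Sales read", "Finance read/write"],
                      ["no orphan records", "monthly reconciliation"]),
    "DS02": DataStore("DS02", "Signed purchase orders", "filing cabinet",
                      "Purchasing", ["Purchasing read/write"],
                      ["sequential numbering"], related_stores=["DS01"]),
}

# A system-wide integrity check: every related store must itself be catalogued
def check_relationships(cat):
    return all(r in cat for s in cat.values() for r in s.related_stores)
```

A check of this kind is one way the system-wide integrity standards mentioned above can be enforced mechanically rather than by inspection.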
===Organisational Representation (Introduction)===
When we think of organisational representation, we traditionally think of the hierarchical organisational chart. Resembling an inverted tree, the organisational chart provided by almost all charting packages represents a cross between a representation of physical or geographic position and reporting lines - and tells us very little about how a business organisation is really organised. At best it leads to a bureaucratic, semi-accurate organisational view, and at worst it is wildly incorrect, such as in matrix organisations.
As with many traditional diagramming systems, it is horrendously inadequate for all but the grossest simplification of an organisation.
In the BBPR, we use a Community Network model which provides far richer analysis and directly represents the positioning of an organisation within its market and community.
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
Community domains can be defined as required for the purpose of the analysis, but in the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community comprises a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
You can read more about the Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis [[The Stakeholder Community Network Model|Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis here]].
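The eight-community classification can be sketched as a simple membership mapping. The example stakeholders below are invented for illustration; only the eight community names come from the model itself.

```python
# The eight top-level communities of the standard model (SCNM03)
COMMUNITIES = {"clients", "customers", "suppliers", "partners",
               "custodians", "workforce", "governance", "public"}

# A stakeholder may belong to more than one community - e.g. a paying
# user is both client (she who receives) and customer (she who pays).
stakeholders = {
    "paying user":        {"clients", "customers"},
    "contractor":         {"workforce"},          # not a supplier in this model
    "internal audit":     {"governance"},
    "IT department":      {"custodians"},
    "general population": {"public"},
}

# Sanity check: every assignment uses only recognised communities
assert all(members <= COMMUNITIES for members in stakeholders.values())

# The model's notional goal: identify members still only in 'public',
# the least organisable community, as candidates to move elsewhere
unmanaged = [s for s, cs in stakeholders.items() if cs == {"public"}]
```

The `unmanaged` list captures the model's endeavour to move public-only members into communities with more manageable risk profiles.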
===The Process Representation (Introduction)===
The full process charting model forms a language that can be represented either diagrammatically or descriptively. There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements are for diagrammatic efficiency. The core of the charting system revolves around only a few symbols, and the full model merely expands on these to provide a richer descriptive set, and more analytic detail with fewer diagrammatic elements. The full model is described in [[Business Process Reengineering - Process Charting|advanced charting]].
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataFlow.png]]
</div>
</td>
</tr>
</table>
In the figure, '''''data flows''''' along, and in the direction of, the arrows between the entities, data stores and maps, while control data flows principally into, and out of, the quality manager. The crossed-rectangular shapes are entities, while the open-ended rectangular shapes are (file) data stores. The maps and quality managers are shown by circles.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Entity.png]]
</div>
</td>
</tr>
</table>
'''''Entities''''' are equivalent to people, machines, or processes external to the system being examined. In a sense they are givens in the system analysis, in that their functioning is assumed to be of a fixed standard and excluded from redesign. Those aspects of behaviour that can be redesigned are represented by the other three object types.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataStore.png]]
</div>
</td>
</tr>
</table>
'''''Data Stores''''' are objects in which data resides from time to time. The stores are not the actual data itself, merely a representation of it. In the ‘object oriented analysis world’, data exists in the form of messages between objects - for example, two people (entities) talking to each other (exchanging messages). Messages are essentially transient, so for data to be available for any length of time, it must be stored. Data Stores include documents, files, database records, desk in-trays, etc.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Map.png]]
</div>
</td>
</tr>
</table>
'''''Maps''''' are objects which perform an operation on data other than storing it. They transport data, change data, analyse data, update a database record, produce a report, authorise a transaction, etc. The term ‘map’ means ‘mapping data from one state to another’. Maps perform the transformations of a system, but they are concerned with data. For data to become information it must have the added dimension of quality.
'''''Quality Managers''''' are objects which administer the performance of the system. The quality manager does not transform the data handled by the system, but rather manages the system itself. Quality managers rely on the KPIs of the system and its component parts, measuring variance from plan and performing the appropriate remedial action, such as tuning Map parameters or escalating the problem.
In one sense the '''''Quality Manager''''' is a kind of process, but its responsibility is to modify the behaviour of the system in accordance with the purpose and objectives of the system and is therefore fundamentally different from a Map which represents the embodiment of that purpose. In another sense the Quality Manager is a kind of reactive data store - it both stores data and responds to it. The quality manager deals principally with control data, although this is by no means exclusive or necessary.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:RecursiveShapes.png]]
</div>
</td>
</tr>
</table>
'''''Objects are recursive''''', and therefore may contain more objects of the same or different type. For example, a file contains documents (both data stores), a document contains fields (more data stores), an organisation may contain people (both entities), an organisation (entity) may contain functions (maps), while a business cycle such as Purchasing (a map) may contain an entire system of roles (entities), procedures (maps), KPI measures (quality managers) and documents (data stores).
Processes (maps & quality managers) are concurrent. This means that, unless restrained by a lack of input (data to process) or awaiting an event, each process is trying to operate at the same time as every other process. This reflects reality - people do not follow a neat sequential order when interacting with one another unless explicitly constrained to do so. Instead, they operate simultaneously, at different speeds to one another, and in self-chosen patterns. To model the world correctly we must also model this behaviour.
You can read more about the process charting method in [[Business Process Reengineering - Process Charting]].
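As a very loose illustration of the message-driven behaviour described above, the sketch below models a single map transforming data and a quality manager watching control data for variance from plan. The names, KPI and target are our own invention, not part of the charting notation.

```python
from queue import Queue

# Messages flow between objects; a map transforms data, while a quality
# manager monitors control data rather than transforming it.
inbox, outbox, control = Queue(), Queue(), Queue()

def pricing_map():
    """A map: transforms order data and emits a control message per order."""
    while not inbox.empty():
        order = inbox.get()
        priced = {**order, "total": order["qty"] * order["unit_price"]}
        outbox.put(priced)                   # data flow to the next object
        control.put(("orders_priced", 1))    # KPI feed for the quality manager

def quality_manager(target):
    """Measures variance from plan and escalates when below target."""
    processed = 0
    while not control.empty():
        _, n = control.get()
        processed += n
    return "on plan" if processed >= target else "escalate"

for order in [{"qty": 2, "unit_price": 5.0}, {"qty": 1, "unit_price": 9.5}]:
    inbox.put(order)
pricing_map()
status = quality_manager(target=2)
```

In a fuller simulation each map would run as its own concurrent process, starved only by an empty inbox or a pending event, exactly as the text describes.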
===The Analysis Tools===
The designed system will be documented with data flow charts, client-provider “performance agreements”, ACM control checklists, a decision-to-data-source matrix, and task schedule sheets cross-referenced to the data-flow diagrams. These facilities can be provided either electronically or on paper, as desired by the client. The degree to which the processes and documentation can be automated is restricted only by the client’s computer system capabilities and software.
====Process Representation Using Software====
There are a number of practical charting tools that can be used. For 2D representation, we recommend either ABC Flowcharter or Visio, while for a 3D client walkthrough of a designed system we recommend an MMORG such as SecondLife (http://SecondLife.com) or TrueSpace (http://www.caligari.com/).
With respect to the 2D tools, both have their strengths and weaknesses. Visio has excellent Microsoft desktop application integration, and is directly supported by a number of finance and business applications as a business process modelling environment. ABC Flowcharter has (in our view) a shorter learning curve, an excellent interface, and good integration with MS documentation tools.
In choosing a 2D tool you should consider:
The tool should support diagrams:
* consisting of many linked pages
* with recursive (self referential) structures
* graphic object drill through (ie. you can select an object such as a process which summarises many sub-processes and link to one or more pages that represent the steps in the process)
* containing graphic objects with unique id's, text descriptions, and other user defined data attributes that can be stored with them (eg transaction volumes, costs, probabilities, risk assessment, etc)
* editable splines for connecting shapes (bendable curved lines)
* with point and click editing
* with user defined shapes and image import
* that represent the Bishop Phillips Process Modelling shapes.
* containing URL links at least at the graphical object (including lines) level (ie. linking an object to an internet/intranet page)
* that can be imported into text documentation and presentation tools (MS Word / MS PowerPoint, etc) compatible with your business environment (standard desktop)
* that ideally can be scripted with a scripting language that allows active simulation or calculations of events and transactions occurring (optional - but a good idea)
* that can be generated directly from an electronic drafting whiteboard (optional, but saves you a lot of time).
3D tools are a much newer approach. The biggest advantage of a 3D modelling tool is that you can 'walk' the client through the business process. Possibly the only practical and right-priced options available at the moment are SecondLife and Caligari TrueSpace. Over the years we have tried a number of approaches to this idea; until the advent of SecondLife, we built our 3D models in TrueSpace. TrueSpace is a serious 3D modelling environment and, while simple to learn as 3D graphical modelling environments go, it is not a tool for novices. Although it produces spectacular 3D models, it is less suited to walking the client through the model than to presenting a canned 3D visualisation of the business model. Recently it has gained a MORG add-on/representation, and linked with one of a number of games engines it can be used quite successfully as a walk-through environment.
With the advent of SecondLife (and the growing number of similar MORG systems that are either appearing now or soon to appear on the market), a more practical and faster solution is available (albeit less visually stunning in production quality). A SecondLife-based model allows you and your client to literally enter the model as people and walk or fly around the components of your system, watching transactions visually flow through the process, events occur, control systems filter errors, and output being produced at varying transaction rates. The building interface is fast and simple to learn, and the scripting environment allows you to rapidly simulate many different scenarios.
With such an approach you can literally have your client see the transactions flow through a virtual representation of a system (a bit like the movie 'Tron'), or build a representation of their physical environment (such as a building, or office floor) and simulate the behaviour of the people and the control system operating. The world-wide scale of MORG users means you can contract the development work to inexpensive professional builders, instead of building it yourself.
The great weakness of these environments is that they are not yet real time in terms of construction (whereas a 2D chart can (almost) be built in real time as your client describes their processes), and documentation in conventional 2D media is not a natural consequence of a 3D simulation (whereas 2D charts can be included in text-based documentation with ease).
In choosing a 3D tool you should consider:
* speed of construction of 3D elements (ideally you will need a 'primitive' rather than 'mesh' or 'nurbs' based building solution for speed)
* scripting language and particle system support (essential)
* ability to script primitives (objects) concurrently on a massive scale
* message passing support
* ability to create avatars (or primitars) that can interact with the model (ie. walk around inside it)
* availability of low cost developers/builders
* ease of installation of appropriate client software
* ownership and permanence of the 3D models built
* support for importation of textures (graphic images), sounds, animations, 3D objects, movies, etc.
* real time in-world multi participant speech support
* simplicity of visitor navigation (i.e. how hard is it for a first-time user to just walk around in the 3D environment)
* URL (web page) linking
* URL (web server data sending and receiving - eg Can you request and receive data from an off system database).
* web page display on objects (not commonly available)
====Analysis Support====
A number of analytic tools or design paradigms are incorporated into the ABPR. A few of these are introduced in the table:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Or Design Paradigm
</th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Data Flow analysis
</td>
<td>
A method of charting systems enhanced by BPC with concepts drawn from process mapping, predicate calculus, TQM, CPM (Operations Research), Entity-Relationship modelling, and a number of other analytic methods. This method excels at depicting complex data flows and process interactions simply. It traps control issues, timing constraints, events and information flows.
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Although not critical to the process, ACM provides significant advantages in process efficiency. It is a BPC-specific control design philosophy based on experience in the areas of Corporate Governance and in organisations adopting control devolution and/or multi-skilling. ACM represents a significant shift from the control paradigm of periodic audit review with heavy transaction-based testing conventionally adopted by Internal Audit, and from traditional views of control system design relying on segregation of duties.
To build an ACM control system, we begin by expanding the definition of controls beyond accuracy, authorisation, completeness (etc) to include process timeliness, achievement of business plan targets and other business objectives. Next we identify the controls appropriate for monitoring and collect all the associated control data into a common recording format (and ideally an automated storage system - such as MS-Access). Lastly we build a reporting framework for system performance monitoring built on the quality managers.
ACM produces control compliance information in a steady stream for the senior executive and board, rather than the intermittent or cyclic audit reviews often used. The compliance component of any Internal Audit unit is re-focussed on ensuring the ongoing reliability of the control compliance reports. The control system is integrated into the business processes using the Client-Provider model developed at the start of the project. ACM reporting can be automated, if desired.
</td>
</tr>
<tr>
<td>
Network Organisation Reduction
</td>
<td>
The process of defining the organisation into the community network structure forces the reduction of many diverse strategies and procedures into a clearly identifiable set of activities required for one of 11 broad service communities. The networks imply an enumerable set of collective Client Provider Service Agreements among the stakeholders.
</td>
</tr>
<tr>
<td>
Process Dictionary
</td>
<td>
Used to assist in the identification of opportunities for streamlining cross- and intra-organisation systems, the Process Dictionary catalogues and describes each process within any business function in accordance with an agreed selection of descriptive terms.
In this way, it assists in highlighting common processes and in assessing whether it is possible and appropriate for these to be combined or shared in some suitable form.
</td>
</tr>
</table>
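A Process Dictionary of the kind just described can be approximated as a catalogue keyed on the agreed descriptive terms, from which common processes surface automatically. The descriptor vocabulary and process names below are invented purely for illustration.

```python
from collections import defaultdict

# Each process is catalogued against an agreed set of descriptive terms
dictionary = [
    {"id": "P01", "function": "Sales",      "descriptors": {"validate", "record", "invoice"}},
    {"id": "P02", "function": "Purchasing", "descriptors": {"validate", "record", "pay"}},
    {"id": "P03", "function": "HR",         "descriptors": {"record", "approve"}},
]

# Index functions by descriptor, so descriptors shared across functions -
# candidates for a combined or shared process - stand out
by_descriptor = defaultdict(set)
for proc in dictionary:
    for term in proc["descriptors"]:
        by_descriptor[term].add(proc["function"])

shared = {term for term, funcs in by_descriptor.items() if len(funcs) > 1}
```

With this tiny catalogue, "validate" and "record" appear in more than one function, flagging them for assessment as shared services.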
==Summary: Characteristics of the BBPR Method==
Business Process Reengineering (BPR) is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective. This chapter has provided an introduction to the concept of BPR and an overview of the ABPR method. Both of these will be developed throughout the text.
Essentially BPR represents the focussing of an enormous body of theory and expertise underpinning management science into a single, all-powerful redesign strategy. Such a panacea does not exist, and we must be careful to use BPR where the fundamental organisational characteristics are present. These might include:
<ul>
<li> A discernible consistent set of purpose(s) and objective(s) exist
<li> Design options are not restricted out of the solution set (ie. an acceptable solution is achievable despite imposed constraints)
<li> Senior management authorise and staff support the project and the process
<li> The analytic tools match the problem set
<li> BPR Consultant has credibility with the staff
</ul>
The BPR process is best seen as a framework encompassing a wide array of analytic tools and organisation/management design paradigms. Many of these tools and paradigms can be expected to change over time as management theory is revised, while some are central to the BBPR framework. The central tools and paradigms include:
<ul>
<li> KPI’s & Quality Management
<li> Data Flow Analysis
<li> Object Oriented Process Engineering
<li> Client Provider Analysis
<li> Information Mapping
<li> Data Cataloguing
</ul>
As an extremely simplified explanation, the BBPR method uses KPI’s to focus the system, and classifies the proponents in the system as clients and/or providers of data (etc) to one another. The client/provider relationships are revised using a separate information (decision) map reflecting the information needs of the direct and indirect stakeholders. With the revised client/provider relationships defined and the data and information needs catalogued, process maps can be defined which reflect only what is needed to implement the system.
For the sake of clarity, in this introductory chapter we have excluded many of the more complex issues facing BPR. One of these is the positioning of organisation design in a BPR exercise. It is a significant issue, as it is inextricably linked to the culture of the organisation being reengineered. It is usually included to some extent in the design options, but rarely is the organisation design entirely at the discretion of the reengineer. Accordingly we must treat it as both a given structural component of the client provider analysis and an output of the process mapping (design phase).
Clearly the process mapping will impact the organisation structure, which will in turn affect the client provider relationships, while the client provider relationships affect the process mapping, and so on. It is this circularity, and a number of similar circular relationships among analytic components, that necessitates the simultaneous analysis and design activity of the ABPR method.
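The client/provider step of the simplified explanation above can be sketched as a set of notional service agreements, from which each entity's required inputs fall out directly. The attribute names and example entities are illustrative assumptions, not BBPR terms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceAgreement:
    """A notional contract between one provider and one client."""
    provider: str
    client: str
    deliverable: str     # the data, service or goods supplied
    standard: str        # the quality standard a KPI would measure

agreements = [
    ServiceAgreement("Warehouse", "Dispatch",  "picked goods",  "same day"),
    ServiceAgreement("Dispatch",  "Customer",  "delivery",      "48 hours"),
    ServiceAgreement("Finance",   "Dispatch",  "credit status", "on request"),
]

# Everything a given entity must receive to operate - i.e. the inputs
# the redesigned process maps must supply to it
def inputs_for(entity):
    return sorted(a.deliverable for a in agreements if a.client == entity)
```

Listing inputs per entity in this way keeps the process maps honest: they need only deliver what some client/provider agreement actually requires.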
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this article. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, the style and the detail provided, as the original was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time. As the charting method is fairly involved, we will also be providing examples of systems charted using the method. This chapter is the introduction chapter, which provides a reasonably good overview of the approach.
</noinclude>
==Definition, Purposes & Outcomes==
=== Definition ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BusinessComponentObjectives.png]]
</div>
</td>
</tr>
</table>
Business Process Reengineering (BPR) ''is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective.'' Purpose and Objective differ in that the purpose describes the reason for the process while the objective is the reason for the reengineering of the process. The objective is generally the optimisation of the quality - cost relationship, but may be any other objective(s) defined by the stakeholders of the processes revised.
Hammer, a popular author of reengineering texts, defines reengineering as: “the fundamental rethinking and radical redesign of business processes to bring about dramatic improvements in performance”. Essentially, he argues that BPR is about major change in an organisation, yet perhaps this reflects a rather naive preoccupation with “big-is-better”. BPR can be about constrained, well-focussed, small-scale redesign as much as about monolithic reconstruction.
BPR is not new, although many consultants in the field try to claim otherwise. It is simply one more evolutionary step in a long stream of management change processes that includes Statistical Quality Control, TQM, Internal Audit, Work & Job Redesign, Goal Focussed Management, Workflow Management, Systems Analysis, etc. The theoretical foundation of BPR is quite old and can be seen particularly in the work in Systems Analysis undertaken at the University of Lancaster since 1969. What is new about BPR is its holistic view of the organisation and its attempt to capture the management philosophies that preceded it into a single integrated method.
Perhaps due in part to its conglomerate nature there is little standardisation among BPR approaches nor agreement on what is, or is not, BPR. With a few notable exceptions, the literature tends to be long on promises and case studies claiming stratospheric success but short on detail. This manual attempts to provide both a definition of BPR and an integrated strategy of analytic methods for performing it.
Although significantly different in approach from the work in systems analysis of the University of Lancaster, the development of our method owes a fundamental debt to the conceptual insight of that team. We have borrowed concepts, however, from a wide domain of disciplines ranging from accounting to computer science, and from psychology to marketing. It is not intended that the analytic tools of the method be cast in stone by this manual. No approach is perfect, and if this method is not seen to embrace its own continuous improvement then it will be as flawed as the business systems it purports to improve.
=== Purpose of BPR ===
In a BPR exercise we consider all aspects of managerial responsibility - from the organisation design through to the procedures and practices adopted. The BPR project does not attempt to define the purpose or the objectives of the organisation's systems; rather, once these are defined, it provides the machine to deliver that purpose and those objectives.
The method used in the reengineering process must deliver a complete description of that machine. This includes the organisational structure, the behavioural paradigm, duties, controls, performance indicators, policies, procedures, data management, continuous improvement procedures, computer systems, etc.
It is easy to confuse the activity of BPR with that of computer systems implementation, since many of the forces driving a BPR exercise beg computerisation as the easiest way to achieve apparently dramatic improvement. This is a mistake. Implementing computerised solutions is not the purpose of BPR, although a computerised solution is one of the tools a reengineer may use to implement some part of the processes, and so may form a component of a BPR outcome.
Nor should we rely on computer solutions in all cases. While it is often true that the computerisation of a process will deliver significant improvement in the ratio of output volume and quality to human effort (input), when viewed from a holistic perspective (which includes infrastructure, investment, opportunity cost, and solution responsiveness to change) the computerised solution may not always be as attractive as first thought. Notwithstanding these comments, a planned change in information systems provides a common and sensible catalyst for the BPR programme.
Essentially, the purpose of BPR is to build business systems able to deliver the organisation’s mission while optimising some given combination of objectives. In building the system, we must apply appropriate analytic techniques and appropriate implementation strategies. The weaker the constraints on the process applied by management - ie the wider the range of options left on the table for consideration - the more successful (in terms of optimising the objectives) the outcome is likely to be. The purpose of the system either will or will not be satisfied by the system design options made available - the quality of that delivery is measured by the objectives.
=== Outcomes ===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:BPR Components.png]]
</div>
</td>
</tr>
</table>
The result of the BPR project is a working system tuned to optimise some combination of objectives in delivering the stakeholder’s purpose. It is defined by a set of system descriptions, or views of the system, which consider, categorise and structure the matter from a number of angles.
Illustrated in the figure are the key components of a system description produced by the BPR method detailed in this manual. There are many differences between the approach presented here and the conventional literature on BPR, both in method and outcome. Henceforth we shall refer to this approach as the Bishop BPR (or BBPR) method.
The method produces a process and organisational rework that is naturally integrated with risk and compliance governance systems and (in its detailed delivery) uses a unique charting system which blends computational and human processes together in a common structured and testable form.
We have used and progressively improved the method detailed in this text since the late 1980's, and it has been applied in the delivery of consultancies to several hundred organisations covering the non-profit, government and corporate sectors. It has been applied in its pure form as a process reengineering system, in reduced forms as an internal audit systems-audit process and a business systems design model (for design and development of business computing systems), and with various strategy enhancements as a business strategy planning tool. While this author has brought it to each consulting organisation with which he has worked or which he has led over the years, it has benefitted from the ideas and contributions of many colleagues.
We shall explore the BBPR method throughout this text and provide the tools and techniques necessary to deliver the BBPR system description. Here we provide a brief introduction to the ten key descriptive outputs in the figure:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Descriptive Output </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Key Performance Indicators & Benchmarks / Targets
</td>
<td>
Performance management - how we directly manage and monitor the achievement of the system’s purposes
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Internal System Integrity - how we directly monitor and manage the achievement of the system’s objectives.
</td>
</tr>
<tr>
<td>
Organisation Design
</td>
<td>
The objects/entities and their roles with their managerial, behavioural and reporting relationships identified.
</td>
</tr>
<tr>
<td>
Decision Tree
</td>
<td>
The tree (or Information Map) charts the decisions required by entities in the system, the relationships between the decisions and their information needs
</td>
</tr>
<tr>
<td>
Process & Workflow Charts
</td>
<td>
The sequence of activities making up the functional components of a system.
</td>
</tr>
<tr>
<td>
Event Calendar
</td>
<td>
The timing of events and their cycles and the processes they trigger
</td>
</tr>
<tr>
<td>
Client Provider Service Agreements
</td>
<td>
The objects/entities comprising the system seen as pairs of clients and providers (of services, data, goods, etc) emphasising their respective duties. The approach establishes notional contracts or service agreements which outline each entity’s responsibilities in the client provider relationship.
</td>
</tr>
<tr>
<td>
Data Management
</td>
<td>
The data stores in the system, what the data represents and how this data is managed
</td>
</tr>
<tr>
<td>
Continuous Improvement System
</td>
<td>
The strategy for delivering system improvement on a continuous basis.
</td>
</tr>
<tr>
<td>
Implementation and Change Strategy
</td>
<td>
The approach to managing the implementation of the reengineered system in the organisation and particularly managing people through the change process.
</td>
</tr>
</table>
The system description is only the ‘record’ of the real outcome of the BBPR approach - that of business performance improvement through better business processes. The BBPR method produces a system designed to optimise certain predefined objectives (such as the cost of inputs relative to quality), while the system description attempts to formalise that system and provide the mechanisms for monitoring performance, and maintaining and tuning that system.
In the model organisation, the approach starts with the strategic plan of the organisation (or unit) being reviewed and uses that plan’s components (vision, mission, key result areas, critical success factors, strategies, key performance indicators, targets and timeframe) to focus the design effort with purpose and objectives. In the real organisation, planning is generally something less than perfect, so we must cast a wider net in defining the focus of the BPR exercise. Once armed with a focus, a wide variety of sources and analytic tools are employed to build a business system which will best achieve management’s plans.
==The Analytic Method & Its Tools==
===The Structure===
At the heart of the BBPR method is a set of ‘analytic tools’ (methods) that help define views of a system that highlight the particular properties in which we are interested. The key components are illustrated in Figure 1.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1"
width="100%" >
<tr>
<td>
<div class="center">
[[Image:BPRAnalyticStructure.png]]
</div>
</td>
<td>
<table >
<tr>
<td>
The analytic method is based on a simple premise:
A System is comprised of Recursive Objects only. Any system can be described by four types of Objects: Entities, Data Stores, Maps (Processes), and Quality Managers (Control/Performance Criteria).
The simple dataflow diagram of Figure 3 shows a basic system. Entity A provides data to Entity B via a single process (under the control of Entity C) which maps the data from one data store to another. The performance of the mapping process is managed by the quality control process under the control of Entity D. The quality control process is approximately equivalent to an engineering feedback loop.
</td>
</tr>
<tr>
<td>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:BPC4KeyChartObj.png]]
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
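The four-object premise above lends itself to a small illustrative sketch. The class and field names below are our own assumptions for illustration only; they are not part of the BBPR charting notation:

```python
# Illustrative sketch of the four BBPR object types; names are hypothetical.
class DataStore:
    """Holds data between processes (a file, in-tray, database table, etc.)."""
    def __init__(self, name):
        self.name = name
        self.items = []

class Map:
    """Transforms data from a source store to a target store."""
    def __init__(self, transform, source, target):
        self.transform, self.source, self.target = transform, source, target
    def run(self):
        while self.source.items:
            self.target.items.append(self.transform(self.source.items.pop(0)))

class QualityManager:
    """Reviews a store against an acceptance rule and records exceptions."""
    def __init__(self, accept):
        self.accept = accept
        self.exceptions = []
    def review(self, store):
        self.exceptions = [x for x in store.items if not self.accept(x)]

# Entity A supplies paper invoices; the map transcribes them into an
# electronic accounts payable store; the quality manager checks the result.
inbox = DataStore("paper invoices")
ledger = DataStore("accounts payable records")
inbox.items = [{"amount": 120.0}, {"amount": -5.0}]
data_entry = Map(lambda inv: {**inv, "entered": True}, inbox, ledger)
data_entry.run()
qm = QualityManager(lambda rec: rec["amount"] > 0)
qm.review(ledger)
# qm.exceptions now holds the negative-amount record for remedial action
```

Here the entity's behaviour is a given (the supply of invoices), while the map and the quality manager are the redesignable parts - mirroring the division of responsibilities in the diagram.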
Mapping is a computational and mathematical term which describes the mechanism by which data is transformed from one state or form to another. In a business process that transformation might be as simple as the act of transcribing an invoice from its physical (eg. paper based) state to an electronic record in an accounts payable system through the process of data entry. The data in its input state may be said to have been mapped to another state through some process of transformation.
The computer engineering reader will recognise the similarity of the diagram to a dataflow model.
The logical starting point of a BPR exercise may seem to be the Performance Criteria definition (assuming that the overall purpose of the system being improved is already known), but it is important to note that each of the four definition activities should continue concurrently throughout the project. It is not unusual for the Performance Assumptions to change as a result of the other BPR activities, and it is virtually certain where the project is a Strategic Planning exercise.
This mixing of strategy planning and BPR may at first seem a little unusual, but the impact of the BPR analysis can be to cause a fundamental rethink of the business strategy itself. Where the focus is merely to re-design a specific, targeted transactional process such a strategic impact is, perhaps, less likely, but where the targeted business process is the core of the business, such an impact is surprisingly common.
In particular the KPI definition both commences and completes a project. The table lists these key analytic tools and provides an overview of the activity. These tool classes are typical of those employed, but not necessarily the only ones appropriate to any given project.
===The Modelling Tools===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Class </th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
KPI Assessment
</td>
<td>
In an ideal organisation the planning documents establish the focus for all activity. Our search for the focus of the BPR must therefore begin with the planning and policy role of management. Where available, sources to be reviewed include:
<ul>
<li> Statement of System Objectives
<li> Corporate Plan
<li> Budgets
<li> Benchmarks
</ul>
Organisations are rarely ideal, and other techniques will need to be applied depending on the culture of the organisation being reviewed. Such techniques may include SWOTC (Strengths, Weaknesses, Opportunities, Threats, and Constraints) analysis, benchmarking, corporate goal setting, interviews, etc., and may need to be undertaken to establish the purpose and key objectives of the system being reviewed.
Armed with this information, the first view of the Key Performance Indicators (KPIs) appropriate to the system should be definable. In a sense, the KPIs are like the gauges and alarms of an airplane, car or any other mechanical device. They alert the system’s ‘pilot’ to the status of the machinery, and allow rapid identification and adjustment of the system if anything ‘goes wrong’. In this sense the selection of the correct KPIs is critical: if there is no gauge for a problem, it may not be detected until it is obvious even without a gauge - and possibly too late to be repaired.
In this first, top level assessment the KPIs will generally be whole-of-system measures. As other components of the BBPR are resolved (such as the Process Mapping and the Client Provider Analysis), the process detail level will emerge which becomes the organisation’s operational ‘alarm system’. The BBPR has a specific design paradigm called Active Control Management to implement this KPI based control system in a cost efficient manner.
</td>
</tr>
<tr>
<td>
Client-Provider Analysis
</td>
<td>
A technique adopted from TQM which classifies the entities creating, managing and consuming data in the system as clients (data recipients) or providers (data suppliers) of one another. In performing the analysis we turn to information sources such as:
<ul>
<li> External Clients & Providers
<li> Internal Clients & Providers
<li> Organisation Structure
<li> Roles & Duty Statements
<li> Implied Contracts
</ul>
While it is important to understand the organisational structure as it stands - because, among other things, it dictates the client-provider relationships - it should not necessarily bind the designer. An organisational model reflects legislative, cultural and historic traditions that may be critical to retain, as well as (possibly) many years of legitimate experience among the management team in the industry and market in which you are working. It must not simply be disregarded in the BPR process in favour of radical change.
Indeed, the author generally advises against too ambitious an organisational change, unless change is part of the culture or intended management strategy. In some organisations, frequent re-organisation is part of the management ethos, and such an approach is as legitimate and successful a management model as any other. One must, nevertheless, be careful in taking the existing structure (or management ethos!) as a given - particularly where the organisation is seeking a competitive edge beyond mere marginal improvement in efficiency or quality.
The BBPR method uses its own method of analysing organisational structures called the Organisational Community Network Model (which is one of the reasons that the BPR method frequently impacts organisational design). This approach is appropriate even where the organisation will substantially retain its original shape after the BPR project, as it leads to a highly efficient and focussed "desk top" test process architecture, and, where the option for organisational redesign is on the table, can lead to a very radical outcome.
</td>
</tr>
<tr>
<td>
Stakeholder Analysis
</td>
<td>
The direct stakeholders are addressed in the Client Provider analysis, while the indirect stakeholders are addressed here - in the Stakeholder Analysis.
Essentially the indirect stakeholders provide the organisation with drivers & constraints. Typical sources include:
<ul>
<li> Legislative Obligations
<li> Cultural Expectations
<li> Reporting Obligations
</ul>
</td>
</tr>
<tr>
<td>
Data Store Catalogue
</td>
<td>
The catalogue is the BPR equivalent of a database administrator’s data dictionary. It describes all the data stored by the system, and the data stores themselves. It specifies the access rights, custodianship rules, data integrity standards and the static relationships between data stores.
Data stores include all the data managed by the system and methods of temporary or permanent storage. Data stores include electronic (abstract) and physical storage such as documents, files, filing cabinets, in trays, bins, etc.
Data integrity standards must be established system-wide, and data stores must adhere to them. The standards should be consistent with those applied by quality managers.
</td>
</tr>
<tr>
<td>
Process Mapping
</td>
<td>
Process mapping is perhaps the most involved of all the activities of the BPR exercise. It is a general name for a variety of procedural analysis and design activities. The information sources include:
<ul>
<li> Functional Description
<li> Cradle to Grave Tracing - System Walkthrough
<li> Manuals
<li> List of Data Sources & Destinations
<li> Client / Provider Mapping
<li> Data Load Analysis (transaction volumes, processing rates, etc)
</ul>
The key activity during process mapping is the production of the Data flow diagrams and supporting documentation. This is done in two streams simultaneously:
<ol>
<li> Existing systems
<li> Redesigned Systems
</ol>
The data flow charts form the basis of the reengineering. They combine all aspects of the other analytic tools and describe the algorithm of the system.
In process mapping we treat all processes of a system as operating concurrently and control their timing and behaviour through messages, which take the form of either data or events.
The process map is not complete until the system data loading has been assessed for each process. The data load analysis involves examining data volumes and processing times, throughput assessment, reliability rates, etc.
</td>
</tr>
<tr>
<td>
Decision Tree / Information Mapping
</td>
<td>
The system handles not just data but information. Data becomes information when it exhibits certain quality characteristics. Information must be appropriate to its purpose and reliable (where reliability implies standards of timeliness, accuracy, completeness, etc). Information mapping involves matching the data managed by a system to the decisions that must be made in operating that system. It requires, in part, the construction of a detailed decision tree spanning the entities in the system over time.
Necessarily, it also implies the existence of an events calendar which should link into the data flow diagrams. The information map includes the information needs of the quality managers, and may be expressed in whole or in part through the Active Control Management design paradigm detailed later in this text.
The information map will require consideration of issues including:
<ul>
<li> Information Requirements
<li> Event Calendar
<li> Reporting Obligations
<li> Performance Control Management System (eg ACM)
</ul>
</td>
</tr>
</table>
===Organisational Representation (Introduction)===
When we think of organisational representation, we traditionally think of the hierarchical organisational chart. Resembling an inverted tree, the organisational chart provided by almost all charting packages represents a cross between a representation of physical or geographic position and reporting lines - and tells us very little about how a business organisation is really organised. At best it leads to a bureaucratic, semi-accurate organisational view; at worst, it is wildly incorrect, as in Matrix organisations.
As with many traditional diagramming systems it is horrendously inadequate for all but the grossest simplification of an organisation.
In the BBPR, we use a Community Network model which provides far richer analysis and directly represents the positioning of an organisation within its market and community.
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
Community domains can be defined as required for the purpose of the analysis, but in the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and, specifically, the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
You can read more about the Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis [[The Stakeholder Community Network Model|Stakeholder Community Network Model and the Community Network Theory of organisational design and analysis here]].
===The Process Representation (Introduction)===
The full process charting model forms a language that can be represented either diagrammatically or descriptively. There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements exist for diagrammatic efficiency. The core of the charting system revolves around only a few symbols, and the full model merely expands on these to provide a richer descriptive set and more analytic detail with fewer diagrammatic elements. The full model is described in [[Business Process Reengineering - Process Charting|advanced charting]].
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataFlow.png]]
</div>
</td>
</tr>
</table>
In the figure, '''''data flows''''' along, and in the direction of, the arrows between the entities, data stores and maps, while control data flows principally into, and out of, the quality manager. The crossed-rectangular shapes are entities, while the open ended rectangular shapes are (file) data stores. The maps and quality managers are shown by circles.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Entity.png]]
</div>
</td>
</tr>
</table>
'''''Entities''''' are equivalent to people, machines, or processes external to the system being examined. In a sense they are givens in the system analysis, in that their functioning is assumed to be of a fixed standard and is excluded from redesign. Those aspects of behaviour that can be redesigned are represented by the other three object types.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:DataStore.png]]
</div>
</td>
</tr>
</table>
'''''Data Stores''''' are objects in which data resides from time to time. The stores are not the actual data itself, merely a representation of it. In the ‘object oriented analysis world’, data exists in the form of messages between objects - for example, two people (entities) talking to each other (exchanging messages). Messages are essentially transient, so for data to be available for any length of time it must be stored. Data stores include documents, files, database records, desk in-trays, etc.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="right">
<tr>
<td>
<div class="center">
[[Image:Map.png]]
</div>
</td>
</tr>
</table>
'''''Maps''''' are objects which perform an operation on data other than storing it. They transport data, change data, analyse data, update a database record, produce a report, authorise a transaction, etc. The term ‘map’ means ‘mapping data from one state to another’. Maps perform the transformations of a system, but they are concerned only with data. For data to become information it must have the added dimension of quality.
'''''Quality Managers''''' are objects which administer the performance of the system. The quality manager does not transform the data handled by the system, but rather manages the system itself. Quality managers rely on the KPIs of the system and its component parts, measuring variance from plan and performing the appropriate remedial action, such as tuning Map parameters or escalating the problem.
In one sense the '''''Quality Manager''''' is a kind of process, but its responsibility is to modify the behaviour of the system in accordance with the purpose and objectives of the system and is therefore fundamentally different from a Map which represents the embodiment of that purpose. In another sense the Quality Manager is a kind of reactive data store - it both stores data and responds to it. The quality manager deals principally with control data, although this is by no means exclusive or necessary.
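The feedback-loop character of the quality manager can be sketched as follows; the target, tolerance and tuning hooks below are hypothetical names invented for illustration, not BBPR terms:

```python
# Hedged sketch: a quality manager as an engineering-style feedback loop.
def quality_manager(kpi_readings, target, tolerance, tune, escalate):
    """For each KPI reading, tune the process while variance stays small,
    and escalate when it exceeds the agreed tolerance."""
    for reading in kpi_readings:
        variance = reading - target
        if abs(variance) <= tolerance:
            tune(variance)          # remedial action: adjust map parameters
        else:
            escalate(variance)      # problem too large: report upward

adjustments, escalations = [], []
quality_manager(
    kpi_readings=[98, 101, 120],   # e.g. daily transactions processed
    target=100, tolerance=5,
    tune=adjustments.append,
    escalate=escalations.append,
)
# adjustments → [-2, 1]; escalations → [20]
```

The point of the sketch is the division of labour: the maps do the work, while the quality manager only observes variance from plan and modifies behaviour accordingly.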
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:RecursiveShapes.png]]
</div>
</td>
</tr>
</table>
'''''Objects are recursive''''', and therefore may contain more objects of the same or different type. For example, a file contains documents (both data stores), a document contains fields (more data stores), an organisation may contain people (both entities), an organisation (entity) may contain functions (maps), while a business cycle such as Purchasing (a map) may contain an entire system of roles (entities), procedures (maps), KPI measures (quality managers) and documents (data stores).
Processes (maps and quality managers) are concurrent. This means that, unless restrained by a lack of input (data to process) or awaiting an event, each process is trying to operate at the same time as every other process. This reflects reality - people do not follow a neat sequential order when interacting with one another unless explicitly constrained to do so. Instead, they operate simultaneously, at different speeds, and in self-chosen patterns. To model the world correctly we must also model this behaviour.
You can read more about the process charting method in [[Business Process Reengineering - Process Charting]].
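The concurrency described above - processes constrained only by the messages (data or events) available to them - can be sketched with message queues. The use of threads and a shutdown event here is an illustrative assumption, not part of the charting model:

```python
# Sketch of concurrent maps exchanging messages via queues; purely
# illustrative - BBPR charts the concurrency, threads merely stand in for it.
import queue
import threading

def map_process(inbox, outbox):
    """Runs at its own pace, constrained only by the arrival of messages."""
    while True:
        msg = inbox.get()
        if msg is None:          # event message: shutdown signal
            break
        outbox.put(msg.upper())  # the map's transformation of the data

raw, done = queue.Queue(), queue.Queue()
worker = threading.Thread(target=map_process, args=(raw, done))
worker.start()
for item in ["invoice", "receipt"]:
    raw.put(item)                # data messages trigger the process
raw.put(None)                    # event message ends it
worker.join()
# done now holds the transformed messages, in arrival order
```

When no message is waiting, the process simply blocks - the queue plays the role of the data store that restrains an otherwise always-running process.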
===The Analysis Tools===
The designed system will be documented with data flow charts, client-provider “performance agreements”, ACM control checklists, a decision-to-data-source matrix, and task schedule sheets cross-referenced to the data-flow diagrams. These facilities can be provided either electronically or on paper, as desired by the client. The degree to which the processes and documentation can be automated is restricted only by the client’s computer system capabilities and software.
====Process Representation Using Software====
There are a number of practical charting tools that can be used. For 2D representation, we recommend either ABC Flowcharter or Visio, while for a 3D client walkthrough of a designed system we recommend an MMORPG such as SecondLife (http://SecondLife.com), or TrueSpace (http://www.caligari.com/).
With respect to the 2D tools, both have their strengths and weaknesses. Visio is an excellent Microsoft-integrated desktop application, and is directly supported by a number of finance and business applications as a business process modelling environment. ABC Flowcharter has (in our view) a shorter learning curve, an excellent interface, and good integration with MS documentation tools.
In choosing a 2D tool you should consider whether it supports diagrams:
* consisting of many linked pages
* with recursive (self referential) structures
* graphic object drill-through (ie. you can select an object, such as a process which summarises many sub-processes, and link to one or more pages that represent the steps in the process)
* containing graphic objects with unique id's, text descriptions, and other user defined data attributes that can be stored with them (eg transaction volumes, costs, probabilities, risk assessment, etc)
* editable splines for connecting shapes (bendable curved lines)
* with point and click editing
* with user defined shapes and image import
* that represent the Bishop Phillips Process Modelling shapes.
* containing URL links at least at the graphical object (including lines) level (ie. linking an object to an internet/intranet page)
* that can be imported into text documentation and presentation tools (MS Word / MS PowerPoint, etc) compatible with your business environment (standard desktop)
* that ideally can be scripted with a scripting language that allows active simulation or calculations of events and transactions occurring (optional - but a good idea)
* that can be generated directly from an electronic drafting whiteboard (optional, but saves you a lot of time).
3D tools are a much newer approach. The biggest advantage of a 3D modelling tool is that you can 'walk' the client through the business process. Possibly the only practical and right-priced options available at the moment are SecondLife and Caligari TrueSpace. Over the years we have tried a number of approaches to this idea; until the advent of SecondLife, we built our 3D models in TrueSpace. TrueSpace is a serious 3D modelling environment and, while simple to learn as 3D graphical modelling environments go, it is not a tool for novices. Although it produces spectacular 3D models, it is less suited to walking the client through the model than to presenting a canned 3D visualisation of the business model. Recently it has gained an MMORPG add-on/representation, and linked with one of a number of games engines it can be used quite successfully as a walk-through environment.
With the advent of SecondLife (and the growing number of similar MMORPG systems that are either appearing now or soon to appear on the market), a more practical and faster solution is available (albeit less visually stunning in production quality). A SecondLife based model allows you and your client to literally enter the model as people and walk or fly around the components of your system, watching transactions visually flow through the process, events occur, control systems filter errors, and output being produced at varying transaction rates. The building interface is fast and simple to learn, and the scripting environment allows you to rapidly simulate many different scenarios.
With such an approach you can literally have your client see the transactions flow through a virtual representation of a system (a bit like the movie 'Tron'), or build a representation of their physical environment (such as a building, or office floor) and simulate the behaviour of the people and the control system operating. The world-wide scale of MMORPG users means you can contract the development work to inexpensive professional builders, instead of building it yourself.
The great weakness of these environments is that they are not yet real-time in terms of construction (whereas a 2D chart can (almost) be built in real time as your client describes their processes), and documentation in conventional 2D media is not a natural consequence of a 3D simulation (whereas 2D charts can be included in text based documentation with ease).
In choosing a 3D tool you should consider:
* speed of construction of 3D elements (ideally you will need a 'primitive' rather than 'mesh' or 'nurbs' based building solution, for speed)
* scripting language and particle system support (essential)
* ability to script primitives (objects) concurrently on a massive scale
* message passing support
* ability to create avatars (or primitars) that can interact with the model (ie. walk around inside it)
* availability of low cost developers/builders
* ease of installation of appropriate client software
* ownership and permanence of the 3D models built
* support for importation of textures (graphic images), sounds, animations, 3D objects, movies, etc.
* real time in-world multi participant speech support
* simplicity of visitor navigation (i.e. how hard is it for a first-time user to just walk around in the 3D environment)
* URL (web page) linking
* URL web server data sending and receiving (eg. can you request and receive data from an off-system database)
* web page display on objects (not commonly available)
====Analysis Support====
A number of analytic tools or design paradigms are incorporated into the BBPR. A few of these are introduced in the table:
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%" >
<tr > <th >Analytic Tool Or Design Paradigm
</th><th>Meaning & Purpose</th>
</tr>
<tr>
<td>
Data Flow analysis
</td>
<td>
A method of charting systems enhanced by BPC with concepts drawn from process mapping, predicate calculus, TQM, CPM (Operations Research), Entity-Relationship modelling, and a number of other analytic methods. This method excels at depicting complex data flows and process interactions simply. It captures control issues, timing constraints, events and information flows.
</td>
</tr>
<tr>
<td>
Active Control Management System
</td>
<td>
Although not critical to the process, ACM provides significant advantages in process efficiency. It is a BPC-specific control design philosophy based on experiences in the areas of Corporate Governance and in organisations adopting control devolution and/or multi-skilling. ACM represents a significant shift from the control paradigm of periodic audit review with heavy transaction based testing conventionally adopted by Internal Audit, and from traditional views of control system design relying on segregation of duties.
To build an ACM control system, we begin by expanding the definition of controls beyond accuracy, authorisation, completeness (etc.) to include process timeliness, achievement of business plan targets and other business objectives. Next we identify the controls appropriate for monitoring, and we collect all the associated control data into a common recording format (and ideally an automated storage system - such as MS-Access). Lastly we build a reporting framework for system performance monitoring built on the quality managers.
ACM produces control compliance information in a steady stream for the senior executive and board, rather than the intermittent or cyclic audit reviews often used. The compliance component of any Internal Audit unit is re-focussed to ensuring the ongoing reliability of the control compliance reports. The control system is integrated into the business processes using the Client-Provider model developed at the start of the project. ACM reporting can be automated, if desired.
</td>
</tr>
<tr>
<td>
Network Organisation Reduction
</td>
<td>
The process of defining the organisation into the community network structure forces the reduction of many diverse strategies and procedures into a clearly identifiable set of activities required for one of 11 broad service communities. The networks imply an enumerable set of collective Client Provider Service Agreements among the stakeholders.
</td>
</tr>
<tr>
<td>
Process Dictionary
</td>
<td>
Used to assist in the identification of opportunities for streamlining cross- and intra-organisation systems, the Process Dictionary catalogues and describes each process within any business function in accordance with an agreed selection of descriptive terms.
In this way, it assists in highlighting common processes and in assessing whether it is possible and appropriate for these to be combined or shared in some suitable form.
</td>
</tr>
</table>
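As a hedged illustration of the ACM idea described in the table above - control data collected in a common recording format and rolled up into a continuous compliance report for the executive - consider the following sketch. The field names and the failure-rate threshold are invented for illustration only:

```python
# Illustrative ACM-style compliance roll-up; field names are hypothetical.
# Each control's test results are recorded in one common format.
controls = [
    {"id": "C1", "area": "authorisation", "tested": 40, "failed": 1},
    {"id": "C2", "area": "timeliness",    "tested": 40, "failed": 6},
]

def compliance_report(controls, max_failure_rate=0.05):
    """Flag any control whose failure rate exceeds the agreed threshold,
    producing the steady stream of compliance information ACM calls for."""
    report = []
    for c in controls:
        rate = c["failed"] / c["tested"]
        report.append({"id": c["id"], "area": c["area"], "rate": rate,
                       "compliant": rate <= max_failure_rate})
    return report

report = compliance_report(controls)
# C1 passes (2.5% failures); C2 is flagged for executive attention (15%)
```

Note that the timeliness control sits alongside the conventional authorisation control in the same format - the expansion of the control definition that the ACM build steps begin with.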
==Summary: Characteristics of the BBPR Method==
Business Process Reengineering (BPR) is the method by which the infrastructure, policies, procedures and practices of an organisation are reviewed and redesigned to achieve some predefined purpose and objective. This chapter has provided an introduction to the concept of BPR and an overview of the BBPR method. Both of these will be developed throughout the text.
Essentially, BPR represents the focussing of an enormous body of theory and expertise underpinning management science into a single, all-powerful redesign strategy. Such a panacea does not exist, and we must be careful to use BPR only where the fundamental organisational characteristics are present. These might include:
<ul>
<li> A discernible consistent set of purpose(s) and objective(s) exist
<li> Design options are not restricted out of the solution set (ie. an acceptable solution is achievable despite imposed constraints)
<li> Senior management authorise and staff support the project and the process
<li> The analytic tools match the problem set
<li> The BPR consultant has credibility with the staff
</ul>
The BPR process is best seen as a framework encompassing a wide array of analytic tools and organisation/management design paradigms. Many of these tools and paradigms can be expected to change over time as management theory is revised, while some are central to the BBPR framework. The central tools and paradigms include:
<ul>
<li> KPI’s & Quality Management
<li> Data Flow Analysis
<li> Object Oriented Process Engineering
<li> Client Provider Analysis
<li> Information Mapping
<li> Data Cataloguing
</ul>
As an extremely simplified explanation, the BBPR method uses KPI's to focus the system and classifies the participants in the system as clients and/or providers of data (etc.) to one another. The client/provider relationships are revised using a separate information (decision) map reflecting the information needs of the direct and indirect stakeholders. With the revised client/provider relationships defined and the data and information needs catalogued, process maps can be defined which reflect only what is needed to implement the system.
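The client/provider cataloguing step above can be sketched as a simple data-flow catalogue: each party is recorded as a provider and/or consumer of data items, and the process map then carries only what is actually consumed. All party and data-item names below are hypothetical examples, not part of the method itself.

```python
# A minimal client/provider data catalogue sketch.
# Party and data-item names are illustrative assumptions.
provides = {
    "application_clerk": {"application_form"},
    "credit_team": {"credit_score"},
}
consumes = {
    "credit_team": {"application_form"},
    "approval_officer": {"application_form", "credit_score"},
}

def required_data(party):
    """The data items a party consumes - the minimum its process map must carry."""
    return consumes.get(party, set())

def unconsumed_data():
    """Data produced but never consumed - a candidate for elimination."""
    produced = set().union(*provides.values())
    used = set().union(*consumes.values())
    return produced - used
```

A non-empty result from `unconsumed_data()` would flag data handling that the redesigned process maps could drop.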
For the sake of clarity, in this introductory chapter we have excluded many of the more complex issues facing BPR. One of these is the positioning of organisation design in a BPR exercise. It is a significant issue, as it is inextricably linked to the culture of the organisation being reengineered. It is usually included to some extent in the design options, but rarely is the organisation design entirely at the discretion of the reengineer. Accordingly, we must treat it as both a given structural component of the client provider analysis and an output of the process mapping (design phase).
Clearly the process mapping will impact the organisation structure, which will in turn affect the client provider relationships, while the client provider relationships affect the process mapping, and so on. It is this, and a number of similar circular relationships among analytic components, that necessitates the simultaneous analysis and design activity of the ABPR method.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
c126b1fba94204e4a7ae2559e26d3d67bbc90f9e
Business Process Reengineering - Project Plan
0
287
322
321
2018-10-29T11:41:58Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==A Simple Business Standard Process Reengineering Project Plan==
The activities in the project might include:
<ol>
<li> Detailed Planning
<ul>
<li> Familiarisation and detailed planning for the project.
</ul>
<li> Data Collection
<ul>
<li> Review of the organisation culture, organisation structure, business plans, relevant benchmarks, policy framework, quality objectives, controlling legislation and operating constraints to identify externally and internally imposed organisation objectives. Expressed in both qualitative and quantitative terms, these form the basis for the decision information needs of management. The objectives are classified as either static (permanent and intrinsic to the purpose of the organisation, such as cost minimisation, timeliness, independence, etc.) or dynamic (short term and generally project based, such as delivery of a specific service, or completion of a specific marketing activity).
<li> Vertical (top-down) and horizontal (functional) review of current management decision information needs, including:
<ul>
<li> Performance measures
<li> Cost drivers
<li> Performance targets
<li> Reporting cycles
</ul>
<li> Review of the system’s decision support information facilities
<li> Decision requirements assessment and Process mapping of the operations for front and back office processes including:
<ul>
<li> A business process risk assessment to identify the key control objectives (and a Pareto Analysis if statistical control data is available for the existing system);
<li> A client-provider analysis in which the interaction of the various business functions are viewed as either receivers or providers of information to one another governed by “contractual” undertakings as to the quality of the data exchanged;
<li> A data-flow analysis in which we trace the movement and storage of data throughout the processes, both on and off the computer system. The data flow analysis provides a detailed framework for:
</ul>
<li> Eliminating duplication of data handling and storage;
<li> Eliminating unnecessary data;
<li> Identifying data requirements for each process;
<li> Optimising controls to business risks;
<li> Defining critical data paths from the initial creation of data (eg. the application clerk with whom the first point of contact is made) through to the ultimate use of that data in decision support (eg. the applicant whose business commencement is awaiting approval, or the officer charged with the responsibility of maintaining application turnaround times). The critical path is the longest route through which any component of the data in a decision must pass, and therefore the path on which any delays are critical to performance. Time-related performance objectives will be established and monitored for critical paths.
</ul>
<li> System Analysis
<ul>
<li> Analyse the information collected in the preceding steps and agree lists of:
<ul>
<li> global information requirements of the system
<li> global control objectives (including performance characteristics of accuracy, timeliness, reliability, privacy, completeness, and relevance, etc)
<li> organisational characteristics (behavioural model)
<li> targets for key information processing times and other performance objectives
<li> system client-provider(s) and their data dependency relationships
<li> processes (tasks)
</ul>
</ul>
<li> System Design
<ul>
<li> Establish an appropriate preferred behavioural model for the control system framework.
<li> Design, chart and document the new front and back office processes, including the Active Control Management (ACM) control system, which provides the backbone for continuous performance management of the system. The ACM tracks the performance of the control system, providing regular statistical data.
<li> Develop roll-out strategy for implementation of new process modules and identify change management risks and strategies.
<li> Propose new system and its roll-out strategy to management and staff and adjust until agreement is reached.
</ul>
<li> System Implementation
<ul>
<li> Implement and test automated support systems (if any).
<li> Commence training of staff.
<li> Implement and test new processes on a staggered basis.
<li> Use ACM performance reporting system to tune and modify the control system as appropriate.
</ul>
<li> Project Wrap-Up
<ul>
<li> Report to the CEO and the Board as to the success (or otherwise!) of the project, benefits achieved, key operational assumptions, built in performance measures (with their safe operating
</ul>
</ol>
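The critical data path introduced in the Data Collection step - the longest route any component of a decision's data must travel from first capture to final use - can be computed as a longest path over an acyclic data-flow graph. The graph, party names and handling times below are hypothetical, chosen to echo the application-processing example in the text.

```python
from functools import lru_cache

# Hypothetical data-flow graph: each edge carries handling time in hours.
flows = {
    "application_clerk": [("data_entry", 2), ("scanning", 1)],
    "scanning": [("data_entry", 4)],
    "data_entry": [("approval_officer", 8)],
    "approval_officer": [],  # ultimate decision use
}

@lru_cache(maxsize=None)
def longest_path(node):
    """Total hours on the slowest route from `node` to any end point.
    Assumes the flow graph is acyclic."""
    return max((hours + longest_path(nxt) for nxt, hours in flows[node]),
               default=0)

# The critical path from the first point of contact:
print(longest_path("application_clerk"))  # 13 (clerk -> scanning -> entry -> officer)
```

Delays on any edge of this longest route delay the decision itself, so this is the path on which time-related performance objectives would be set and monitored.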
<noinclude>
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
{{BackLinks}}
</noinclude>
0dc17bd6e421ae5d6a88bfab437948793ec33b37
The Stakeholder Community Network Model
0
288
324
323
2018-10-29T11:41:58Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid-to-late 1990's and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the fore-runners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole of government reform agenda, the tail end of the TQM experiment, the shift from paper to online work flow both intra and inter business, the rise of risk management, the progressive adoption of balanced score cards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990's), and the rise of on-line portals, peer managed corporate forums, application service providers, enterprise scale ERP and CRM systems, web based B2B systems and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory premises that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, some communities are effectively permanent because of their survival through multiple generations or over multiple business cycles. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last for little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
The more mature the community, the more clearly these roles are differentiated and actively operating. For a community to remain stable over an extended time, it becomes increasingly important that the duties implied in these roles be fulfilled.
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community, however, the more likely it is to interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and may have external participants in otherwise internal communities. The more predominant the external participation, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community is to transition to an internal, contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
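The recursive structure described above - communities whose participants are agents or other communities, each participant holding one or more of the eight non-exclusive roles - can be sketched as a small model. All names (the role vocabulary aside, which follows the list above) are illustrative assumptions.

```python
from dataclasses import dataclass, field

# The eight non-exclusive participant classes from the text.
ROLES = {"member", "beneficiary", "supplier", "patron",
         "governor", "custodian", "partner", "public"}

@dataclass(frozen=True)
class Agent:
    """An individual participant - essentially a person."""
    name: str

@dataclass(eq=False)
class Community:
    """A community whose participants are agents or other communities."""
    name: str
    participants: dict = field(default_factory=dict)  # participant -> roles

    def add(self, participant, roles):
        roles = set(roles) | {"member"}  # all participants are members
        assert roles <= ROLES, "unknown role"
        self.participants[participant] = roles

    def size(self):
        """Count individual agents, recursing into sub-communities."""
        return sum(p.size() if isinstance(p, Community) else 1
                   for p in self.participants)

# The cricket-club example from the following section, much simplified:
club = Community("Sometimes Cricket Club")
club.add(Agent("bat owner"), {"supplier", "custodian"})
committee = Community("committee")
committee.add(Agent("treasurer"), {"governor"})
club.add(committee, {"governor"})  # a community as a participant
print(club.size())  # 2
```

The point of the sketch is structural: sub-communities participate in the parent on the same footing as individual agents, which is what makes the model recursive.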
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly defined the community identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably, at around the same time and in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming, with self-nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied them); others may be agreed through shared experience. Shared common interests and the need to improve the predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows, handshake agreements may need to be formally agreed and recorded, individuals will be formally allocated roles, and leadership agreed. Along the way, as disagreements arise (like who should bat first), dispute resolution mechanisms will be required.
Thus a community has been formed and gradually self-organised. If the initial casual group fails ever to define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree its meeting place and times of meeting, it will probably not achieve the second game. If it fails to identify its membership, establish an identity (and therefore a brand) and perform all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of them will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note that, when we look at the permanent communities within many organisations, several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown much beyond the oversight of their founders and split into many semi-autonomous communities.
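One practical use of the checklist above is a simple community health check: score how well each defining characteristic is serviced and flag the unserviced ones, which (the text argues) predict structural failure over time. The 0-3 scoring scale and the example scores below are illustrative assumptions; the characteristic list follows the text.

```python
# The defining characteristics of a community, from the list above.
CHARACTERISTICS = [
    "members", "shared resources", "identity/brand", "communication",
    "defined and shared purpose", "location", "roles", "rules",
    "governance structure", "barriers to entry", "patron",
]

def health_check(scores):
    """Return the characteristics that are missing or weakly addressed.
    Scores on a hypothetical 0-3 scale; below 2 counts as weak."""
    return [c for c in CHARACTERISTICS if scores.get(c, 0) < 2]

# A hypothetical immature community: strong membership, weak everything else.
scores = {"members": 3, "communication": 2, "roles": 1}
weak = health_check(scores)
print(len(weak))  # 9 of the 11 characteristics need attention
```

Repeating the check periodically would show whether a community is maturing (list shrinking) or decaying (list growing).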
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities,
# a custodian of information and provider of infrastructure for communities, and
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron, the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc. is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed-back and feed-forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance of the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, identifying and labelling existing implicit and explicit communities, and the physical and virtual flows between them, against current planning, scorecards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool, it aligns organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different, and potentially divergent, simultaneous strategies while setting a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model, it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities over which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity, in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation, the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but the organisation will also usually need some form of community formed around skills and interests (at the very least, consuming or providing something) in order to help it achieve its purpose.
Within each community formed around shared interests or skills is a further set of shared interests, such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities, they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship, and by excluding inter-member interaction it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community, supply roles span such things as marketing assistance through reviews, discussions and forum participation, support assistance in peer help spaces, and even product improvement and testing, as in software Beta programmes. On the supplier and partner side, community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community-standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation. (IE: the groups can not operate with complete autonomy as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, representing all customers within a geographic area, or it might be an enterprise-wide collection of staff injured in forklift truck accidents, or a worldwide extranet of ECL policy advisers, or suppliers and corporate buyers of raw materials... or any one of a long list of possible organisation-specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So, in a sense, resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
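The digraph view described above can be sketched concretely. The sketch below is illustrative only, assuming a simple dict-based representation; the community names and dependency labels are hypothetical examples, not part of the model's formal definition:

```python
# Minimal sketch of a stakeholder community network as a labelled digraph.
# Community names and dependency labels are illustrative assumptions.
from collections import defaultdict


class StakeholderNetwork:
    def __init__(self):
        # edges[provider][consumer] = set of functional dependencies;
        # each labelled edge is a candidate service level agreement.
        self.edges = defaultdict(lambda: defaultdict(set))

    def add_dependency(self, provider, consumer, function):
        """Record that `consumer` depends on `provider` for `function`."""
        self.edges[provider][consumer].add(function)

    def dependencies_on(self, provider):
        """All (consumer, function) pairs relying on `provider` -
        the first layer of potential service level agreements."""
        return [(c, f) for c, fs in self.edges[provider].items()
                for f in sorted(fs)]


net = StakeholderNetwork()
net.add_dependency("suppliers", "workforce", "raw materials")
net.add_dependency("workforce", "clients", "service delivery")
net.add_dependency("customers", "workforce", "funding")

print(net.dependencies_on("suppliers"))  # -> [('workforce', 'raw materials')]
```

Because the functional attribute is carried on each directed edge, enumerating a community's outgoing edges directly yields the draft service agreements that community owes the rest of the network.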
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve in practice. We can not know, and thus enumerate, every possible stakeholder community as each resource and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain is potentially a community.
Another way of viewing completeness is to first test that all members of the stakeholder community are also members of one or more of the other communities in the network. This network is then complete in terms of an organisation's resource coverage.
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
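The resource-coverage test described above reduces to a simple set check: every resource in the stakeholder domain must appear in at least one community. This is a sketch under that assumption; the sample resources and community names are hypothetical:

```python
# Sketch of the resource-coverage completeness test: the network is
# complete when every stakeholder resource belongs to some community.
# Sample data below is hypothetical.

def uncovered_resources(stakeholder_domain, communities):
    """Return the resources not yet assigned to any community.

    The network is complete (in resource-coverage terms) when this
    returns the empty set.
    """
    covered = set().union(*communities.values()) if communities else set()
    return stakeholder_domain - covered


domain = {"alice", "bob", "carol", "dave"}
communities = {
    "workforce": {"alice", "bob"},
    "clients": {"carol"},
}

print(sorted(uncovered_resources(domain, communities)))  # -> ['dave']
```

Any resource left uncovered would, under the model, default to the public community until a better-fitting community is identified or seeded.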
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community, it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community, but this is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome, such as "quality improvement", or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model, the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Enablement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community workflows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self-organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools, and the specialised applications or services it needs.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top-level communities there are, the better the efficiency gains in the entire model will be - unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen, with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects is likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, the model aims to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top-level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies, in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
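The eight top-level communities, and the fact that membership is overlapping rather than exclusive, can be sketched as follows. The member names and their community assignments are hypothetical examples, not part of SCNM03 itself:

```python
# The eight SCNM03 top-level communities as an enumeration, with
# overlapping membership: a resource may belong to several communities.
# Example members and assignments are illustrative assumptions.
from enum import Enum, auto


class Community(Enum):
    CLIENTS = auto()
    CUSTOMERS = auto()
    SUPPLIERS = auto()
    PARTNERS = auto()
    CUSTODIANS = auto()
    WORKFORCE = auto()
    GOVERNANCE = auto()
    PUBLIC = auto()


# membership[resource] is a *set* because membership is not exclusive:
# a city resident can be both a client and a member of the public.
membership = {
    "city resident": {Community.CLIENTS, Community.PUBLIC},
    "state government": {Community.CUSTOMERS, Community.GOVERNANCE},
}

print(Community.CLIENTS in membership["city resident"])  # -> True
```

Keeping membership as a set per resource also makes the notional public-to-community migration explicit: moving a member simply adds a more manageable community to their set.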
Each community is comprised of a mixture of the community's service providers, enablers and consumers, sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and, specifically, the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and the degree of influence the organisation (the meta-community) has in the specific community being examined. This influence will vary over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where the timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees - and each of these in turn as comprised of even smaller communities - say divisions, units and ultimately individuals - we see that the more we subdivide the group, the closer we get to a community of one member: the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is him- or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfills all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action, with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors members of one or more of the other communities. We are therefore encouraged both to define our service offering away from competition and to structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lies in the public communities.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are enough specific differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets, might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things they were manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community, where they are firmly focused on customer (note - not client) desires, and the materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
One outcome of the model is immediately apparent from this example: the model blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internally or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service-focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus is the client portal. In a local government these might take the form of a resident portal, where a city ratepayer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer, the client might have access to a portal with product information, product enhancements, support, manuals, training, an online store, peer forums, product reviews, a newsletter/blog, and peer/expert hints and suggestions, all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and to have mechanisms available for enquiring about, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Road construction authorities are partners who provide accident minimisation services, traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, executive team, performance review committee, inter-government reporting, risk, and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder networks model is recursive. It applies organisation wide and through each sub grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation rather than external to it.
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in-sourcing and out-sourcing, shared service models and joint ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied it results in improved and more cost efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Re-engineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid to late 1990's and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the fore-runners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole of government reform agenda, the tail end of the TQM experiment, the shift from paper to online work flow both intra and inter business, the rise of risk management, the progressive adoption of balanced score cards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990's), and the rise of on-line portals, peer managed corporate forums, application service providers, enterprise scale ERP and CRM systems, web based B2B systems and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory premises that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, there are some communities that, because of their survival through multiple generations or over multiple business cycles, are effectively permanent. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions, rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
The more mature the community, the more clearly these roles are differentiated and actively operating. For a community to reach stability over an extended time, it is increasingly important that the duties implied in these roles be fulfilled.
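Because the eight participant classes are non-exclusive, each member effectively carries a set of role flags. A minimal sketch of this idea (the class names come from the list above; the `Participant` type and all other names are hypothetical illustrations, not part of the model's formal definition):

```python
from dataclasses import dataclass
from enum import Flag, auto

class Role(Flag):
    """The eight non-exclusive participant classes of a community."""
    MEMBER = auto()       # every participant holds this, regardless of other roles
    BENEFICIARY = auto()  # consumes information, goods and services
    SUPPLIER = auto()     # provides information, goods and services
    PATRON = auto()       # provides funding, and therefore tends to direct
    GOVERNOR = auto()     # administers, moderates, controls access, monitors
    CUSTODIAN = auto()    # provides infrastructure, durable assets, tools
    PARTNER = auto()      # provides complementary, non-competitive services
    PUBLIC = auto()       # potential participants and context-shapers

@dataclass
class Participant:
    name: str
    roles: Role = Role.MEMBER

    def __post_init__(self):
        # "All participants are members, regardless of whether they
        # are also members of the other classes."
        self.roles |= Role.MEMBER

# A patron who also governs is still, by definition, a member:
p = Participant("State Government", Role.PATRON | Role.GOVERNOR)
assert Role.MEMBER in p.roles and Role.PATRON in p.roles
```

A `Flag` (rather than a plain `Enum`) is used precisely because the text stresses that the classes are non-exclusive: one participant may combine any number of them.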
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is that it will interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community will transition to an internal, contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
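The recursive structure described above - communities containing members and sub-communities, which may themselves subdivide - can be sketched as a simple tree walk. This is an illustrative data-structure sketch only; all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Community:
    name: str
    members: list = field(default_factory=list)          # individuals or other communities
    sub_communities: list = field(default_factory=list)  # internal specialised teams

    def walk(self):
        """Yield this community and every nested sub-community, recursively."""
        yield self
        for sub in self.sub_communities:
            yield from sub.walk()

# A toy network echoing the SCNM03 example: the organisation contains a
# customer community (which hosts manufacturing) and a supplier community.
org = Community("Organisation", sub_communities=[
    Community("Customer community", sub_communities=[Community("Manufacturing")]),
    Community("Supplier community"),
])
print([c.name for c in org.walk()])
# → ['Organisation', 'Customer community', 'Manufacturing', 'Supplier community']
```

In a fuller model, membership would be non-exclusive (the same individual appearing in several communities), which turns the tree into a graph; the recursive walk above only captures the containment ("vertical") dimension the text describes.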
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly defined the community identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably, at around the same time and in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming, with self nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied them), while others may be agreed through shared experience. The sharing of common interests and the need to improve the predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows handshake agreements may need to be formally agreed and recorded, and individuals will be formally allocated roles and leadership agreed. Along the way as disagreements arise (like who should bat first) dispute resolution mechanisms will be required.
Thus a community has been formed and gradually self-organised. If the initial casual group fails to ever define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree its meeting place and times of meeting, it will probably not achieve the second game. If it fails to identify its membership, establish an identity (and therefore a brand), and fulfil all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of these will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
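The checklist above lends itself to a simple audit: given a description of a community, flag any defining characteristic that is unaddressed. A minimal sketch (the characteristic names are taken from the list; the `audit` function and the cricket club data are hypothetical illustrations):

```python
# The defining characteristics of a community, per the checklist above.
CHARACTERISTICS = [
    "members", "shared resources", "identity/brand", "communication",
    "defined and shared purpose", "location", "roles", "rules",
    "governance structure", "barriers to entry", "patron",
]

def audit(community: dict) -> list:
    """Return the defining characteristics this community leaves unaddressed
    (missing keys and empty values both count as unaddressed)."""
    return [c for c in CHARACTERISTICS if not community.get(c)]

# The casual cricket club part-way through forming:
cricket_club = {
    "members": ["the regulars"], "shared resources": ["bat", "ball"],
    "identity/brand": "Sometimes Cricket Club", "communication": "contact list",
    "defined and shared purpose": "play cricket", "location": "local field",
    "roles": {}, "rules": [],  # raised but not yet addressed
}
print(audit(cricket_club))
# → ['roles', 'rules', 'governance structure', 'barriers to entry', 'patron']
```

The same idea scales to the organisational case discussed next: auditing an organisation's permanent internal communities against this list surfaces the weakly addressed characteristics.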
It is with some surprise that we note that, when we look at the permanent communities within many organisations, several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown much beyond the oversight of their founders, splitting into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities
# a custodian of information and provider of infrastructure for communities
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc. is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed back and feed forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance on the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, identifying and labelling existing implicit and explicit communities, and mapping the physical and virtual flows between them against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems, highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different and potentially divergent simultaneous strategies while painting a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities over which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub communities whose members service the parent community or concentrate in some specialised capacity, in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but an organisation will usually also need some form of communities formed around skills and interests (like, at the very least, consuming or providing something) in order to achieve its purpose.
Within each community formed around shared interests or skills is a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship and by excluding inter-member interaction it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community supply roles span everything from marketing assistance with reviews, discussions and forum participation, to support assistance in peer help spaces, and even product improvement and testing such as in software Beta programmes. On the supplier and partner side, community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, suppliers providing new product information feeds matching community standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation. (IE: the groups can not operate with complete autonomy as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise-wide collection of staff injured in forklift truck accidents, or a worldwide extranet of ECL policy advisers, or suppliers and corporate buyers of raw materials... or any one of a long list of possible organisation-specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So in a sense resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
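As a rough illustration (the community names and dependency labels below are invented for the example, not taken from the model), such a digraph can be held as a simple edge map keyed by (source, target) pairs, with the functional nature of the co-dependency stored on the edge:

```python
# Hypothetical sketch of an inter-community digraph. Each directed edge
# (source, target) carries the functional attribute of the co-dependency,
# e.g. information, goods or service provision.
network = {
    ("Suppliers", "Workforce"): "goods provision",
    ("Workforce", "Clients"): "service delivery",
    ("Customers", "Governance"): "funding accountability",
}

def dependencies_of(community):
    """Communities that `community` depends on (its incoming edges)."""
    return [(src, kind) for (src, dst), kind in network.items() if dst == community]

print(dependencies_of("Clients"))  # -> [('Workforce', 'service delivery')]
```

Because each edge names a functional dependency, iterating the edge map directly enumerates the candidate service level agreements described above.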
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topography. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve in practice. We can not know, and thus enumerate, every possible stakeholder community as each resource and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain is potentially a community.
Another way of viewing completeness is to test that all members of the stakeholder domain are also members of one or more of the communities in the network. Such a network is then complete in terms of an organisation's resource coverage.
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
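The resource-coverage test can be sketched in a few lines of Python (the domain and community memberships shown here are hypothetical placeholders):

```python
# Illustrative sketch of the resource-coverage notion of completeness:
# every resource in the stakeholder domain must belong to at least one
# community in the network.
stakeholder_domain = {"alice", "bob", "carol", "dave"}  # hypothetical resources

communities = {
    "Workforce": {"alice", "bob"},
    "Clients": {"carol"},
}

def uncovered(domain, communities):
    """Resources not yet a member of any community in the network."""
    covered = set().union(*communities.values()) if communities else set()
    return domain - covered

print(sorted(uncovered(stakeholder_domain, communities)))  # -> ['dave']
```

A non-empty result flags resources (here `dave`) that fall outside every community, so the network is not yet complete in the resource-coverage sense.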
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint-ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community, but this is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure, you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter-productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome such as "quality improvement" or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Enablement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community work flows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools, and specialised application or service needs.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top level communities there are the better the efficiency gains in the entire model will be. Unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects are likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members we aim to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in government and commercial agencies and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and with the degree of influence the organisation (the meta-community) has in the specific community being examined. This influence will vary over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained there-in and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees, and then each of these in turn being comprised of even smaller communities - say divisions, units and ultimately individuals we see that the more we subdivide the group the closer we get to a community of one member - the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is him or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfills all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors a member of one or more of the other communities. We are therefore encouraged to both define our service offering away from competition and structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lays in the public communities.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are sufficient differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets, might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things they were manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community where they are firmly focused to the customer (note - not client) desires, and materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
One outcome of the model is immediately apparent from this example: it blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internally or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus are client portals. In a local government these might take the form of a resident portal, where a city rate payer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, online-store, peer forums, product reviews, newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Roads construction authorities are partners who provide accident minimisation services, traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, executive team, performance review committee, inter-government reporting, risk, and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder networks model is recursive. It applies organisation wide and through each sub grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation rather than external to it.
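The recursive character of the model can be illustrated with a small sketch (the class shape and all names here are hypothetical, invented for the example): a community holds individual members plus sub-communities, and flattening the structure yields the full membership at any level.

```python
# Hypothetical sketch of the model's recursion: a community may contain
# sub-communities, which contain further sub-communities, down to individuals.
from dataclasses import dataclass, field

@dataclass
class Community:
    name: str
    members: list = field(default_factory=list)          # individual resources
    sub_communities: list = field(default_factory=list)  # nested communities

    def all_members(self):
        """Flatten the recursive structure into the full resource list."""
        found = list(self.members)
        for sub in self.sub_communities:
            found.extend(sub.all_members())
        return found

workforce = Community("Workforce", sub_communities=[
    Community("Staff", members=["alice"]),
    Community("Contractors", members=["bob"]),
])
print(workforce.all_members())  # -> ['alice', 'bob']
```

The same `all_members` walk works whether it is applied at the whole-of-organisation level or to a single business unit, which is the sense in which the model is recursive.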
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint-ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied it results in improved and more cost-efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
f54d609f8d240d7ba168fd4d101ce36b7edfe76b
Business Process Reengineering - Process Charting
0
289
326
325
2018-10-29T11:41:58Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its business processes, including mapping data flows, documenting process steps, designing automated and hybrid systems, defining intra- and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagrammatic representation of a set of processes that models the enveloping organisation as if it were a machine whose functional domain encompasses the diagrammed processes.
From a computational perspective, a business process chart is a diagrammatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
===What are the Characteristics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seamlessly represent both automated and non-automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of a well-designed computer programming language - albeit a visual one. These properties include grammatical (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, and the rules for "legal" combinations of these symbols and meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
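To make the token analogy concrete, here is a minimal sketch (in Python, purely illustrative and not part of the BPC method itself) of a chart grammar expressed as data: a symbol set plus a rule table defining which connector combinations are "legal". The symbol names and the particular legal pairs are assumptions chosen for demonstration.

```python
# Illustrative sketch of a chart grammar as data. The symbol names and
# the "legal pairs" rule below are hypothetical, chosen for demonstration.
SYMBOLS = {"entity", "process", "data_store", "event", "connector"}

# A grammar rule: which symbol classes a connector may legally join.
LEGAL_FLOWS = {
    ("entity", "process"),
    ("process", "data_store"),
    ("data_store", "process"),
    ("event", "process"),
}

def is_legal(src: str, dst: str) -> bool:
    """Return True if a connector from src to dst is a legal combination."""
    return src in SYMBOLS and dst in SYMBOLS and (src, dst) in LEGAL_FLOWS

print(is_legal("entity", "process"))      # True
print(is_legal("entity", "data_store"))   # False under this toy grammar
```

A chart validator, or a chart-to-text translator, would apply exactly this kind of rule table to every drawn connector.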
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" should be able to be "proven" - at least theoretically. This implies that an algebraic representation (e.g. predicate calculus) of the charted process should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in process modelling is a complex topic, and one fraught with potentially counter-productive implied solutions.
For example, a charting system with a unique symbol for every process might achieve completeness, but it would do so at the expense of very high grammatical complexity.
The strength of a process charting approach lies specifically in its ability to categorise, simplify, and standardise our view of a social system. If one measure of language complexity is the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rules required to define their use.
Complexity, under such a measure, is minimised when the number of unique predefined "terms" is minimised. The more restricted the symbol set, however, the more symbols must be used to represent simple, everyday processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicity of its symbol set (and therefore its grammar) can, however, lead to diagrammatic complexity.
Certain objects and their processes occur with such frequency that diagrammatic complexity is reduced significantly by expanding the core set of symbols as shown in [[Business Process Reengineering - Chart Key]].
==Charting Example - Electronic Grants Management System==
The process charting examples referenced on the following pages demonstrate the business process charting method as designed by this author and improved with input from clients and staff of BPC over 24 years. The example charts show the BPC Process Reengineering Model and the BPC Stakeholder Community Model in action in a real world situation. The resulting demonstration is a fully functional government grants management process for whole-of-government administration of grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
73ac152a4b245146897670bf781740106b14b9ef
Business Process Reengineering - Chart Key
0
290
328
327
2018-10-29T11:41:59Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to reliably define a program simply by documenting a real world process according to the design rules below allows an automated modelling testbed to be constructed from the chart, and then stress-tested with different data loads or error types, checked for deadlocks and bottlenecks, or compared against alternate process designs. Such testing and analysis can be done either manually or via automation.
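As an illustration of the idea that a chart's text description can be executed directly, the following Python sketch encodes a hypothetical two-process chart as a digraph and walks it in firing order. The dict-based encoding and the process names are assumptions for demonstration only, not BPC notation.

```python
# Illustrative sketch only: a two-step chart ("receive" -> "approve")
# translated to a directly executable text form. The encoding is an
# assumption for this demo, not the BPC charting language itself.
chart = {
    "receive": {"inputs": [], "next": ["approve"]},
    "approve": {"inputs": ["receive"], "next": []},
}

def execute(chart, start):
    """Walk the chart from start, returning the order in which processes fire."""
    order, queue = [], [start]
    while queue:
        node = queue.pop(0)
        order.append(node)
        queue.extend(chart[node]["next"])
    return order

print(execute(chart, "receive"))  # ['receive', 'approve']
```

A real testbed would attach data loads and timing to each node, but the translation step - chart to walkable structure - is the essence of the claim above.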
There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements exist for diagrammatic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - and a few drawn symbols. The full model merely expands on these to provide a richer descriptive set and more analytic detail, with fewer individual diagrammatic elements being required to represent an idea than would otherwise be the case.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and originators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
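The three symbol classes above can be sketched as data types. This Python fragment is purely illustrative; the class and attribute names are this sketch's assumptions, not part of the charting method:

```python
# Hedged sketch: the three symbol classes rendered as Python dataclasses.
# Names (Obj, Event, Connector, ident, children) are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Obj:
    """Originates, transforms, stores or consumes data."""
    ident: str
    children: list = field(default_factory=list)  # objects are containers

@dataclass
class Event:
    """Consumes and/or originates event data (excite or inhibit signals)."""
    ident: str

@dataclass
class Connector:
    """A line joining events and objects, through which data flows."""
    src: object
    dst: object
```

Note that Obj carries a children list because, as the rules below state, every object is potentially a container of other objects.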
===The importance of Data===
The life blood of the process diagram (or description) is "data". It is data that flows through the connectors joining event or object to event or object. Data is created when an event fires, or when a data origination object manufactures or otherwise supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand, or an electronic blip with a voltage value to excite or inhibit the recipient proportionately.
Data is infinitely divisible, immutable and transformable.
Like energy, data can neither be created nor destroyed across the entire universe of processes, but within any subset of processes less than the infinite set of all possible processes, data can be originated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document data store, it results in a blank sheet displaying written or image data. In a manufactured item "data store" it results in the transformation of petrochemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors (lines or arrows).
* All objects (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself; while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another, transforming and being transformed by the vessels in which it is stored. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item is also a data store, containing the end result of multiple processes each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weights and fuzzifiers (mainly used for neural and Bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, though not mandatorily, connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of an object is globally unique, and therefore has the same definition everywhere it appears.
* All objects can be containers and as such may be "drilled through" to their content
* A process object may be a "map" (transformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock or trigger, and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything that is not a connector or event is an object of one type or another - including the organisation itself.
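The asynchronous firing rule in the list above - a process executes when all required inflows carry data and no connected event blocks it - can be sketched as a simple predicate. Everything below (names, set-based encoding) is an illustrative assumption, not BPC notation:

```python
# Sketch of the firing rule: a process fires when every required inflow
# carries data, and is blocked while any connected event has not fired.
def can_fire(required_inflows, data_present, blocking_events, fired_events):
    inputs_ready = all(i in data_present for i in required_inflows)
    unblocked = all(e in fired_events for e in blocking_events)
    return inputs_ready and unblocked

# Usage: a process needing inflows A and B, gated by a "clock" event.
print(can_fire({"A", "B"}, {"A"}, {"clock"}, set()))           # False
print(can_fire({"A", "B"}, {"A", "B"}, {"clock"}, {"clock"}))  # True
```

Evaluating this predicate for every process on each data arrival yields the concurrent, event-governed execution the rules describe.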
===Object Hierarchy===
There is an implied object as container hierarchy (although not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled as containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however, better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and are therefore potentially unreliable.
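The rule-of-thumb hierarchy above can be sketched as a containment table; the encoding below, and the reading of "all other objects", are assumptions for illustration only:

```python
# Sketch of the implied container hierarchy as a containment-rule table.
# The rule names mirror the hierarchy list; the encoding is an assumption.
CAN_CONTAIN = {
    "entity":     {"entity", "process", "data_store"},  # all other objects
    "process":    {"entity", "process", "data_store"},
    "data_store": {"data_store"},
}

def may_nest(parent: str, child: str) -> bool:
    """Return True if the hierarchy permits child inside parent."""
    return child in CAN_CONTAIN.get(parent, set())

print(may_nest("process", "data_store"))  # True
print(may_nest("data_store", "process"))  # False (rule of thumb only)
```

Since the hierarchy is only a rule of thumb, a modelling tool might treat violations as warnings rather than errors.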
===Entities and Entity Groups===
Notionally, every process can have a controlling entity (particularly where a person is actually doing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them), so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connecting to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, call centre staff pool, a community, etc. Each member of the entity group is interchangeable with each other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group, such as C-level executives in a company entity, or administration staff in a stakeholder community.
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2a16cb2e0b8c5acd961534a7b1bbbfc19b9883c1
350
328
2018-10-29T11:57:34Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to reliably define a program simply by documenting a real world process according to the design rules below allows an automated modelling testbed to be constructed from the chart, and then stress tested with different data loads, or different error types, or checked for deadlocks, bottle knecks or compared against alternate process designs, etc. Such testing and anlysis can be done either manually or via automation.
There are a number of different symbols and descriptive encoding rules, but in essence many of thesee enhancements are for diagramtic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - a few drawn symbols. The full model merely expands on these to provide a richer descriptive set, and more analytic detail with fewer individual diagramatic elements being required to represent the idea than otherwise.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and orginators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
===The importance of Data===
The life blood of the process diagram (or description) is "data". It is data that flows through the connectors to join event or object to event or object. Data is created when an event fires, or a data orgination object manufactures or otherwsie supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand or an electronic blib with a voltage value to excite or inhibit the recipient proportionately.
Data is infinitely divisable, imutable and transformable.
Like energy, data can neither be created or destroyed across the entire universe of processes, but within the context of any subset of processes less than the infinite set of all possible processes, data can be orginated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document datastore, it results in a blank sheet displaying written or image data. In a manufactured item "data store" it results in the transformation of petro chemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors ( lines or arrows).
* All objects are (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself, while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another transforming and being transformed by the vessels in which it is store. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item, is also a data store, containing the end result of multiple processes each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weight and fuzzyfiers (mainly used for neural and bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, but not mandated - connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of object is globally unique - and therefore has the same definition everywhere where it appears.
* All objects can be containers and as such may be "drilled through" to their content
* A process object may be a "map" (tranformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock or trigger, and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything, that is not a connector or event, is an object of one type or another - including the organisation itself.
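The firing semantics above can be illustrated with a minimal sketch. This is not part of the charting standard - all class and attribute names are illustrative assumptions - but it shows the rule that a process executes when every required inflow carries data and no connected event blocks it:

```python
class Connector:
    """A data-flow line; holds the data currently present on the flow."""
    def __init__(self, name):
        self.name = name
        self.data = None

class Event:
    """An event blocks the connected object's functions until it fires."""
    def __init__(self, name, fired=False):
        self.name = name
        self.fired = fired

class Process:
    def __init__(self, name, inflows, blocking_events=()):
        self.name = name
        self.inflows = list(inflows)
        self.blocking_events = list(blocking_events)

    def ready(self):
        """True when all required inflows hold data (asynchronous firing)
        and every blocking event has fired (events act as governors)."""
        inputs_present = all(c.data is not None for c in self.inflows)
        unblocked = all(e.fired for e in self.blocking_events)
        return inputs_present and unblocked

# A process with two inflows, governed by one event acting as a clock.
a, b = Connector("order"), Connector("stock-level")
clock = Event("end-of-day")
p = Process("despatch", [a, b], [clock])

a.data, b.data = "order#1", 42
assert not p.ready()      # inflows present, but the event has not fired
clock.fired = True
assert p.ready()          # all inflows present and the event has fired
```

The event here acts as an inhibitor until it fires, after which the process is free to execute - the same structure would model a trigger or governor.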
===Object Hierarchy===
There is an implied object-as-container hierarchy (although it is not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled as containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however, better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and are therefore potentially unreliable.
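The containment rules can be sketched as follows. This is an illustrative assumption, not a mandated representation: because every object is globally unique, containers hold references into a single registry rather than copies, so an object appearing in several containers has exactly one definition.

```python
# Global registry: unique object id -> single authoritative definition.
registry = {}

def define(obj_id, kind, contents=()):
    """Register an object once; containers refer to it by its unique id."""
    registry[obj_id] = {"kind": kind, "contents": list(contents)}
    return obj_id

# The same data-store appears in two process containers simultaneously,
# but has only one definition in the registry.
store = define("DS-1", "data-store")
define("P-1", "process", [store])
define("P-2", "process", [store])
define("E-1", "entity", ["P-1", "P-2"])   # entities may contain processes

assert registry["P-1"]["contents"] == registry["P-2"]["contents"]
```

Drilling through a container is then just a lookup of each id in its `contents` list, to any depth of recursion.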
===Entities and Entity Groups===
Notionally, every process can have a controlling entity (particularly where a person is actually doing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them) so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connecting to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, a call centre staff pool, a community, etc. Each member of the entity group is interchangeable with every other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group, such as C-level executives in a company entity, or administration staff in a stakeholder community.
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2a16cb2e0b8c5acd961534a7b1bbbfc19b9883c1
Managing Risk in Mergers & Acquisitions - Causes of Success & Failure
0
291
334
333
2018-10-29T11:57:32Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2010 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
For the purposes of this article we will use the terms merger, acquisition and M&A interchangeably for the general activity of conducting a merger or acquisition of one legal business entity by another. The discussion will focus on M&A activities between distinct legal entities rather than business units within a legal entity, as the issues in the latter case are fundamentally different from those in the former.
Strictly speaking, a merger differs from an acquisition in that in an acquisition one entity assumes control of and absorbs another entity, usually expunging the acquired entity's operational distinctiveness. In a merger, two or more entities join their business and control structures in a manner that delivers some level of shared control and business profile. In reality, the actual outcomes are rarely purely those of an acquisition or a merger - regardless of the original intentions. The act of acquiring or merging almost always results in irrevocable cultural and operational change for all entities involved - not just the entity acquired.
For this reason, and for reasons that will become apparent later on, we shall treat both activities as essentially the same.
Irrespective of the rhetoric surrounding the merger, in order to succeed it is critical for the parties to the merger (and particularly the dominant party) to understand clearly why they are really merging. Typical reasons for merging include (in no particular order):
* Economies of scale through larger productive capacity or ability to share services
* Vertical integration of productive capacity or the supply chain
* Market share / elimination of direct or indirect competition
* Securing supply
* Asset acquisition or stripping
* Strategic hedging through addition of counter cyclical products to the group mix
* Acquisition of access to Intellectual Property
* Geographic expansion or access to markets with entry barriers
* Accumulation of complementary product/service sets
* Suppression of emerging product line / Intellectual Property threats
* Acquisition of customers
Not all of these motivations will pass traditional measures of success such as "improved productivity" or "staff retention" - as clearly in a number of these cases the underlying purpose of the merger has nothing to do with establishing a bigger, better, more efficient business - just a safer business environment.
If your purpose is merely to eliminate a competitor, or acquire their IP, or strip their assets, etc., much of the discussion in this paper will be of limited applicability to your situation. Your objectives are met if the price you pay for acquisition and business wind-up delivers these outcomes for less than you gain in return. If your purpose is to gain productivity improvement, economies of scale or complementary product mix outcomes, and to retain as much of the acquired (or junior partner's) business / delivery capability as possible, then this paper is relevant to your circumstances.
=M&A - The State of the Industry=
==What Measures Success?==
The most obvious outcome of any M&A is prima facie the elimination of an actual or potential competitor from the competitive mix.
In 1999 KPMG published a study of merger outcomes over the preceding 10 years. The study identified that 75% to 83% of mergers fail where failure was measured by lower productivity, labour unrest, higher absenteeism & loss of shareholder value or even dissolution of companies.
This and other studies highlight a central question in determining the strategy for a successful merger - what is the basis for measuring the success of an M&A project?
{| border="1"
! Success Measure !! Survey Outcome !! Year of Study
|-
| Achievement of anticipated purpose || 30-45% || 1997
|-
| Achievement of strategic or financial object || <20% || 1983, 1991, 1994
|-
| Preserve or enhance book value || 25%-45% || 1988, 1999
|-
| Enhance shareholder value || 17% || 1995
|-
| Preserve or improve NPAT || <50% || 1996, 1999
|-
| Preserve or improve productivity || <25% || 1988, 1999
|-
| Preserve strike, absenteeism and accident levels || <50% || 1977, 1981, 1999
|-
| Financially advantageous in Long Term || 20-50% || 1978, 1988, 1999
|-
| Financially advantageous in Short Term || 50% || 1996
|}
A summary of the conclusions from a number of these studies can be found in [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]]
It is clear from the range of studies and the span of years they cover that successful mergers are distressingly and consistently unlikely - at least with respect to these measures of success. A merger, like life, is not a dress rehearsal. Unfortunately, as most executives go through a merger only rarely, mistakes are common, and the first time you do it, it will be for real. It is therefore important to learn, as far as possible, from the conclusions of others that have gone before - because the odds of success are not in your favour.
The Zweig (1995) and KPMG (1999) studies of merger outcomes both found that only 17% of mergers resulted in an enhancement of shareholder value or key performance drivers respectively. Perhaps of even greater concern, Zweig found that shareholder value was actually destroyed in 53% of cases, and KPMG determined that the performance drivers actually weakened in 78% of cases:
<table>
<tr>
<td>
[[image:Zweig95_M&A_ImpactOnShareValue.jpg]]
</td>
<td>
[[image:KPMG99_M&A_ImpactOnKPI.jpg]]
</td>
</tr>
</table>
=Why Merge=
Studies of merger outcomes in terms of only classical performance or direct shareholder value enhancement imply a need for successful integration of the pre-merger businesses. This assumption does not capture the total range of success measures that might properly apply to merger motivations (regardless of the public rhetoric of the entities involved). The need for successful integration of the pre-merger businesses depends on the true underlying motivation for the merger:
[[Image:MnA WhyMerge.jpg]]
The fundamental driver for measuring post-merger success is to first clearly define the reason(s) for the merger. As successful integration of the merged businesses is possibly the hardest of the successful outcomes to achieve, it is essential to map the requirement for this strategy to the reason for the merger. Ordered from least to highest need for integration, typical merger motivations might include:
# Eliminate a competitor
# Hedge market cycles
# Acquire brand
# Enter a geographic market
# Integrate vertically
# Opportunistic
# Grow market share
# Cut costs – economies of scale
# Grow size (defensive)
# Acquire technical or management expertise
=Reasons For Failure=
==A Summary of the Recent Studies==
Integration of the pre-merger businesses in the post-merger entity is a precursor to success in (possibly) the majority of merger strategies. From a comprehensive review of the literature we have identified the most common reasons cited for integration failure (with two added by the author from direct, anecdotal experience).
{| border="1"
! !! Reason !! %
|-
|1 || Poorly planned and managed integration || 100
|-
|2 || Neglect of existing business due to the attention being paid to the acquired business || 68
|-
|3 || Underestimating the depth & pervasiveness of human issues triggered by the merger || 50
|-
|4 || Loss of key staff in acquired business || 50
|-
|5 || Demotivation of employees of acquired business || 50
|-
|6 || Underestimating problems of skill transfer || 34
|-
|7 || Selecting the wrong partner || 34
|-
|8 || Cultural incompatibility || 17
|-
|9 || Delayed decisions due to breakdown of responsibilities, delegations & authority || 17
|-
|10 || Too much focus on doing the deal - not enough on to integration planning & management || 17
|-
|11 || Insufficient research (due diligence) into the acquired business || 17
|-
|12 || Paying the wrong price or at the wrong time || 17
|-
|13 || Buying for the wrong reasons || 17
|-
|14 || Incompatible business and IT systems || JB
|-
|15 || Doomed by negotiation || JB
|-
|}
IT systems are likely to increase in importance: in the last 10-15 years they have become more entwined with business models & processes than was the case when some of the studies on which this data is based were conducted, and in larger organisations they can represent a key (and differentiating) part of the business's infrastructure investment. Incompatibility can be a critical financial and technical barrier to successful integration.
The last point emphasises that where one party in the pre-merger negotiation wins, the merged entities generally lose.
==Failure in a Nutshell==
Where business integration is a key ingredient of the post-merger mix, the studies allow us to identify the top 5 risks that result in merger failure:
# Integration poorly planned and managed
# Underestimated cultural & human risks
# Loss of key success enablers (eg staff)
# Inaccurate financial due diligence
# Neglecting current business
As these studies examined mergers that actually completed (i.e. the takeover survived the acquisition process), they ignored a common reason for merger failure: non-completion. Reasons for non-completion might include:
# Legal (non participating competitor) or regulatory intervention
# Unacceptable risks, asset/liability valuations or cultural issues emerging during due diligence
# Exogenous market shifts during the merger process (such as changes in market conditions of demand, financing, etc.)
# Death or departure of key personnel from the target entities
# Excessive regulatory or judicial hurdles causing the process to extend unacceptably for the participants
# Failure, or inability to offer sufficient compensation to the vendors
# Gazumping by competitor acquirers
=Reasons for Success=
Conversely, both formal studies and deductive reasoning allow us to identify the key reasons for successful mergers:
* No need to achieve an integrated business, and "right" price paid
* Nature of post merger structure (vertical, conglomerate or geographic, etc)
* Clearly enunciated & communicated direction
* Acquisition-specific & flexible integration strategy
* Clear decision structure and role definitions
* A sense of urgency and outcome ownership
* Compatible business systems
* Compatible business cultures
* Compatible accounting practices
* Integration ready culture
* Commonality of merger goals
* Active risk management strategy
* Actively managed, tracked & resourced integration project
* Minimised debt service load
* Pre-existing partnering or cohabitation
=Further Reading=
In our next article [[Managing Risk in Mergers & Acquisitions - A Success Strategy]], we examine how to apply this knowledge to create a successful merger strategy.
A cross-linked review of the literature over a span of 20 years is available at [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
c05ec1f1dcaeb33c93138ca7f37f54140649ecab
RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)
0
292
336
335
2018-10-29T11:57:32Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targeted 'questions' about an audit area. Questions focus the review, while assertions define the criteria for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses the compliance of the system process with a strategic and/or tactical purpose, the compliance of practice with procedure, and the awareness of, and readiness for, the potential (risks and opportunities) in the system itself.
The avoidance of checklists keeps the auditor adaptable by enforcing a permanent 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to the transactional compliance level, and from 'hard' financial processes to 'soft' subjective processes.
Very large organisations present some particular challenges for the systems based audit, including coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview of the technical aspects of the RIAM SBA; in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit contexts.
==What Are Assertions?==
The preceding diagram summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "scope of work", within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes" identical to those used in our Control Implementation Service, and are classifiable according to their "Type" - preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They are formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for the financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily we mean that our review has tested the truth of a set of assertions and we have found that they have been sustained. Thus testing the assertions is the purpose of the audit.
Assertions are the focus of the RIAM analytic method and underlie its structure. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: they allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?" is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of that same word. If we say, "effective in this context means accurate and timely", then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus by combining a focus question, with the assertions that define a "yes" answer, we as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown - because we have stated clearly our context specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
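To make this concrete, the yes/no mechanics can be sketched in a few lines of code. This is purely illustrative; the class and function names below are our own, not part of RIAM:

```python
# Illustrative sketch: a focus question is answered "yes" only when
# every one of its defining assertions is sustained by testing.
from dataclasses import dataclass

@dataclass
class Assertion:
    statement: str      # e.g. "Grant data is attributed to the proper period"
    sustained: bool     # the result of audit testing

def answer(focus_question: str, assertions: list[Assertion]) -> str:
    """A 'yes' answer is defined as all assertions being sustained."""
    verdict = "yes" if all(a.sustained for a in assertions) else "no"
    return f"{focus_question} -> {verdict}"

# Example: "effective" is defined here as "accurate and timely".
print(answer("Is system XYZ operating effectively?", [
    Assertion("Data is accurately calculated", True),
    Assertion("Data is reported on a timely basis", False),
]))  # -> "Is system XYZ operating effectively? -> no"
```

The point of the sketch is that the subjective word "effectively" never has to be interpreted: the answer is fully determined by the agreed assertion set.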
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A review's assertions are agreed with the auditee's management before the review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit, management identifies a number of questions about the target system that they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The approach that meets the audit objectives, diagrammed above, has four phases, which we summarise here. A more detailed discussion of these phases, mapped into the context of both small-team and large multi-location team audits, is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews is conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions, the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build a functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and Test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report, in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee after and during each task;
<br>
<br>
The control system's ability to support the assertions and therefore the key controls identified are analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although the steps presented here suggest a linear sequence, the correct approach involves regular, ongoing reporting to management during the conduct of the review. Interim reports, either formal or informal, should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, the recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feed these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level, and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation.
* Application of Risk Analysis, not just at the Planning stage, but also the Threat Analysis stage when assessing Systems Design, and the Reporting Stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error is likely to occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and when combined with assertions, raises the certainty that our systems opinion is correct.
* Use of a variety of report and presentation styles to best communicate information; and
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
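The three risk components listed above are conventionally combined multiplicatively (the standard audit risk model: audit risk = inherent risk &times; control risk &times; detection risk). Assuming that standard model, a minimal sketch:

```python
def audit_risk(inherent: float, control: float, detection: float) -> float:
    """Conventional multiplicative audit risk model (assumed here):
    audit risk = inherent risk x control risk x detection risk.
    Each argument is a probability in [0, 1]."""
    for p in (inherent, control, detection):
        if not 0.0 <= p <= 1.0:
            raise ValueError("each risk must be a probability in [0, 1]")
    return inherent * control * detection

# Lowering detection risk (doing more audit work) lowers overall audit risk.
print(f"{audit_risk(0.6, 0.5, 0.2):.2f}")  # 0.06
```

The figures are illustrative only; the practical reading is that where inherent and control risk are assessed as high, the auditor must drive detection risk down with more extensive procedures to hold the overall audit risk at an acceptable level.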
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure;
This is a brief outline of the Threat Testing process:
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
The risk of the threat occurring multiplied by the risk of system failure (Control Risk) is the probability of the assertion not being sustained in operation.
The sum of all such threat related probabilities is the total risk of assertion failure in the system.
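The threat-testing arithmetic above can be sketched directly. The function name and the probabilities used are illustrative only:

```python
# Sketch of the threat-testing arithmetic described above.
# For each threat: P(assertion fails via that threat) =
#   P(threat occurs) * P(control system fails to prevent it).
# The total risk of assertion failure is the sum over all threats.

def assertion_failure_risk(threats: list[tuple[float, float]]) -> float:
    """threats: (probability of occurrence, probability of system failure)."""
    return sum(p_occur * p_sys_fail for p_occur, p_sys_fail in threats)

threats = [
    (0.10, 0.50),   # threat A: fairly likely, controlled half the time
    (0.02, 0.90),   # threat B: rare, but the controls rarely catch it
]
print(f"{assertion_failure_risk(threats):.3f}")  # 0.068
```

Note that rare threat B contributes almost as much risk as common threat A once control failure is factored in, which is exactly the insight the threat-by-threat walk through the control model is meant to surface.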
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self-contained units, including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >PHASE</th><th>ACTION</th><th>WHO</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td></td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td></td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td></td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td></td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td></td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td></td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td></td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td></td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td></td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td></td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td></td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td></td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is received when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
And evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The Working Paper's documentation of the system culminates in the Assertion Matrix and Control Strength & Weaknesses Chart. The Systems Analysis of section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
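As an illustration only, the four documentation requirements above can be captured as a minimal record per document; the class and field names below are ours, not RIAM's:

```python
from dataclasses import dataclass, field

@dataclass
class DocumentRecord:
    """One entry in the systems documentation, covering the four
    requirements: origin, processing, disposition, and internal controls."""
    name: str
    origin: str                      # where the document/record originates
    processing: list[str]            # all processing applied to it
    disposition: str                 # where it finally goes (filed, destroyed, ...)
    controls: list[str] = field(default_factory=list)  # controls operating on it

    def is_complete(self) -> bool:
        """True when origin, processing and disposition are all documented."""
        return all([self.name, self.origin, self.processing, self.disposition])

grant_form = DocumentRecord(
    name="Grant Acquittal Form",
    origin="Grantee",
    processing=["Checked by grants officer", "Approved by Grant Committee"],
    disposition="Filed with the grant file",
    controls=["Authorisation signature", "Batch control total"],
)
print(grant_form.is_complete())  # True
```

Whatever documentation technique is chosen from the list above, a completeness check of this kind helps confirm that no document's origin, processing or disposition has been left untraced.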
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately, the product of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with: "Observation", "Risks and Implications", "Recommendations", and "Management Comment" sub sections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation,
** Arguments in favour of the breach and audit's comment on that argument;
* Inclusion of and focus on Action Plans; and
* Linking of findings to a clearly stated premise for the finding's importance: the Assertions affected.
Although the Report structure is one of the aspects of RIAM specifically tailored to the client, most clients adopt a close variation of one standard structure. RIAM includes five distinct report structures to assist clients in identifying their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews the vast body of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
f16b463043523a28c430d6952430f5ca3868579b
352
336
2018-10-29T11:59:10Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targetted 'questions' about an audit area. Questions focus the review, while assertions define the criterion for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses compliance of the system process with a strategic and/or tactical purpose, compliance of practice with procedure and the awareness and readiness presented by potential (risks and opportunities) in the system itself.
The avoidance of checklists makes the auditor adaptable by permanently adopting a 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to transactional compliance level, and from 'hard' financial processes to 'soft' subjective process.
Very large organisations present some particular challenges for the systems based audit, including coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview to the technical aspects of the RIAM SBA, in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit context.
==What Are Assertions?==
The figure on the preceding diagramme summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "Scope of work" within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes" identical to those used in our Control Implementation Service, and are classifiable according to the "Type" - preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily we mean that our review has tested the truth of a set of assertions and we have found that they have been sustained. Thus testing the assertions is the purpose of the audit.
Assertions are the focus and underlay the structure of the RIAM analytic method. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: They allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?", is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of theat same word. If we say, "effective in this context means accurate and timely" then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus by combining a focus question, with the assertions that define a "yes" answer, we as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown - because we have stated clearly our context specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A reviews assertions are agreed with the auditee management before a review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit management identifies a number of questions about the target system they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The structure of the approach, diagrammed above, that meets the audit objectives has four phases. Here we summarise those phases. A more detailed discussion of these phases mapped into the context of both small team and large multi-location, team audits is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions, the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build a functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and Test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee after and during each task;
<br>
<br>
The control system's ability to support the assertions, and therefore the key controls identified, is analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although the steps are presented here as a linear sequence, the correct approach involves regular, ongoing reporting to management during the conduct of the review. Interim reports, either formal or informal, should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feed these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level, and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation.
* Application of Risk Analysis, not just at the Planning stage, but also the Threat Analysis stage when assessing Systems Design, and the Reporting Stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error is likely to occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and when combined with assertions, raises the certainty that our systems opinion is correct.
* Use of a variety of report and presentation styles to best communicate information; and
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
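The three audit risk components above are conventionally combined multiplicatively (the classic AR = IR × CR × DR model). A minimal sketch of that arithmetic, with purely hypothetical figures:

```python
# Classic multiplicative audit risk model; all figures here are
# hypothetical illustrations, not assessments from any actual audit.

def audit_risk(inherent, control, detection):
    """AR = IR * CR * DR: the probability the audit gives a wrong opinion."""
    return inherent * control * detection

def required_detection_risk(target_audit_risk, inherent, control):
    """Given an acceptable audit risk and the assessed inherent and
    control risk, solve for the detection risk testing must achieve."""
    return target_audit_risk / (inherent * control)

print(audit_risk(0.8, 0.5, 0.125))                        # 0.05
print(round(required_detection_risk(0.05, 0.8, 0.5), 3))  # 0.125
```

The second function shows the model's practical use: the weaker the assessed controls, the lower the detection risk that may be tolerated, and hence the more audit testing is required.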
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure;
This is a brief outline of the Threat Testing process:
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
* The risk of the threat occurring multiplied by the risk of system failure (Control Risk) is the probability of the assertion not being sustained in operation.
The sum of all such threat related probabilities is the total risk of assertion failure in the system.
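The threat-testing arithmetic above can be sketched directly. In this minimal illustration, the threat descriptions and probabilities are hypothetical, not drawn from any actual audit:

```python
# Sketch of the threat-testing arithmetic: each threat has a probability
# of occurring and a probability that the control system fails to stop it
# (the Control Risk). All names and numbers are hypothetical.

def assertion_failure_risk(threats):
    """Sum over all threats of P(threat occurs) * P(system fails to stop it)."""
    return sum(p_occur * p_system_fails for p_occur, p_system_fails in threats)

# Hypothetical threats against an assertion such as
# "grant payments are accurately recorded":
threats = [
    (0.10, 0.20),  # data-entry error occurs, and edit checks miss it
    (0.05, 0.50),  # duplicate acquittal submitted, and matching fails
    (0.02, 0.10),  # posting to the wrong period, and reconciliation misses it
]

print(round(assertion_failure_risk(threats), 3))  # 0.047
```

Each product is one threat's contribution to assertion failure; their sum is the total risk of the assertion failing, as described above.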
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self-contained units including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >PHASE</th><th>ACTION</th><th>WHO</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td></td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td></td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td></td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td></td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td></td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td></td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td></td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td></td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td></td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td></td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td></td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td></td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is received when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
And evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The Working Paper's documentation of the system culminates in the Assertion Matrix and Control Strength & Weaknesses Chart. The Systems Analysis of section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
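The Assertion Matrix mentioned above can be thought of as a simple controls-by-assertions grid: each key control is mapped to the assertions it supports, and assertions with no supporting control surface as candidate weaknesses. A minimal sketch, in which all control and assertion names are hypothetical:

```python
# Hypothetical assertion matrix: which key controls support which assertions.
matrix = {
    "supervisor review":    {"accuracy", "authorisation"},
    "batch control totals": {"accuracy", "completeness"},
    "delegations register": {"authorisation"},
}

# The assertions the review has agreed to test.
assertions = {"accuracy", "completeness", "authorisation", "proper period"}

# Assertions with no supporting control are candidate weaknesses
# for the Control Strength & Weaknesses Chart.
covered = set().union(*matrix.values())
weaknesses = assertions - covered
print(sorted(weaknesses))  # ['proper period']
```

The same grid, read column-wise, identifies the key controls whose failure would suppress a given assertion, which is what the subsequent testing phase targets.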
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately, the product of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with: "Observation", "Risks and Implications", "Recommendations", and "Management Comment" sub sections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation,
** Arguments in favour of the breach and audit's comment on that argument,
** Linking of findings to a clearly stated premise for the finding's importance: the Assertions affected; and
* Inclusion of and focus on Action Plans.
Although the Report structure is one of the aspects of RIAM specifically tailored to the client, most adopt a close variation of one standard structure. RIAM includes five distinct report structures to assist clients in identifying their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews the vast body of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targeted 'questions' about an audit area. Questions focus the review, while assertions define the criteria for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses the compliance of the system process with its strategic and/or tactical purpose, the compliance of practice with procedure, and the awareness of and readiness for the potentials (risks and opportunities) in the system itself.
The avoidance of checklists keeps the auditor adaptable by permanently adopting a 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to the transactional compliance level, and from 'hard' financial processes to 'soft' subjective processes.
Very large organisations present some particular challenges for the systems based audit, including coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview to the technical aspects of the RIAM SBA, in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit context.
==What Are Assertions?==
The preceding diagram summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "Scope of work" within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes" identical to those used in our Control Implementation Service, and are classifiable according to their "Type" - preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They are formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for the financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily we mean that our review has tested the truth of a set of assertions and we have found that they have been sustained. Thus testing the assertions is the purpose of the audit.
Assertions are the focus of, and underlie the structure of, the RIAM analytic method. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: They allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?" is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of that same word. If we say, "effective in this context means accurate and timely" then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus, by combining a focus question with the assertions that define a "yes" answer, we, as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown, because we have stated clearly our context-specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
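The question-and-assertions pairing described above can be represented directly as a data structure: a focus question is answered "yes" only if every defining assertion is sustained by testing. A minimal sketch, in which the question, assertions and test results are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class FocusQuestion:
    """A yes/no focus question, defined by the assertions that answer it."""
    question: str
    # Maps each defining assertion to whether testing sustained it.
    assertions: dict = field(default_factory=dict)

    def answer(self) -> str:
        # "Yes" only if every defining assertion was sustained.
        return "yes" if all(self.assertions.values()) else "no"

q = FocusQuestion(
    "Is system XYZ operating effectively?",
    {"Transactions are accurately recorded": True,
     "Reports are produced within agreed timeframes": False},
)
print(q.answer())  # no
```

A single suppressed assertion turns the answer negative, which is exactly why the assertion set must be agreed with management before the review commences.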
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A review's assertions are agreed with the auditee management before a review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit, management identifies a number of questions about the target system they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The structure of the approach, diagrammed above, that meets the audit objectives has four phases. Here we summarise those phases. A more detailed discussion of these phases mapped into the context of both small team and large multi-location, team audits is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions , the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and Test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee after and during each task;
<br>
<br>
The control system's ability to support the assertions and therefore the key controls identified are analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although steps presented here suggest a linear sequence of steps, the correct approach involves regular, on-going to management during the conduct of the review. Interim reports, either formal or informal should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feed these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level, and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation.
* Application of Risk Analysis, not just at the Planning stage, but also the Threat Analysis stage when assessing Systems Design, and the Reporting Stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error is likely to occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and when combined with assertions, raises the certainty that our systems opinion is correct.
* Use of a variety of report and presentation styles to best communicate information; and
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
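The audit-risk decomposition above is commonly modelled multiplicatively, AR = IR x CR x DR. The sketch below is ours, not part of RIAM, and the figures are invented for illustration; it also shows how a tolerable detection risk can be backed out from a target audit risk:

```python
def audit_risk(inherent: float, control: float, detection: float) -> float:
    """Audit risk under the standard multiplicative model: AR = IR x CR x DR."""
    return inherent * control * detection

def required_detection_risk(target_ar: float, inherent: float, control: float) -> float:
    """Detection risk the audit procedures may tolerate while still
    achieving the target audit risk: DR = AR / (IR x CR)."""
    return target_ar / (inherent * control)

# Illustrative figures only: high inherent risk, moderate controls.
ar = audit_risk(inherent=0.8, control=0.5, detection=0.25)               # 0.10
dr = required_detection_risk(target_ar=0.05, inherent=0.8, control=0.5)  # 0.125
```

The second function captures the planning use of the model: the weaker the controls, the lower the detection risk the auditor's own procedures must achieve.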
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure;
This is a brief outline of the Threat Testing process:
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
* The risk of the threat occurring multiplied by the risk of system failure (Control Risk) is the probability of the assertion not being sustained in operation.
The sum of all such threat related probabilities is the total risk of assertion failure in the system.
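The arithmetic of the outline above can be sketched directly. The threat names and probabilities below are invented for illustration:

```python
# Each threat maps to (probability of occurrence, probability the control
# system fails to prevent/detect/correct it, i.e. the Control Risk).
threats = {
    "duplicate payment entered": (0.10, 0.20),
    "unauthorised rate change":  (0.05, 0.40),
    "posting to wrong period":   (0.15, 0.10),
}

def assertion_failure_risk(threats):
    """Total risk of assertion failure: the sum over all threats of
    P(threat occurs) x P(control system fails to stop it)."""
    return sum(p_threat * p_fail for p_threat, p_fail in threats.values())

risk = assertion_failure_risk(threats)  # 0.10*0.20 + 0.05*0.40 + 0.15*0.10 = 0.055
```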
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self-contained units including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >ITEM</th><th>CONTENTS</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is received when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
These models are evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The working papers' documentation of the system culminates in the Assertion Matrix and Control Strength & Weaknesses Chart. The Systems Analysis of section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
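One of the techniques listed above, the Assertion Matrix, can be sketched as a mapping from key controls to the assertions they support; uncovered assertions then fall out immediately as designed weaknesses. The control and assertion names below are hypothetical:

```python
# Assertions the review must sustain (illustrative names only).
assertions = {"accuracy", "proper period", "authorisation", "disclosure"}

# Which assertions each key control supports (illustrative names only).
control_matrix = {
    "batch control totals":    {"accuracy"},
    "supervisor review":       {"accuracy", "authorisation"},
    "period-end cutoff check": {"proper period"},
}

# Assertions supported by at least one key control.
covered = set().union(*control_matrix.values())
uncovered = assertions - covered  # a designed weakness per uncovered assertion

def controls_for(assertion):
    """Key controls relied on to sustain a given assertion."""
    return [c for c, supported in control_matrix.items() if assertion in supported]
```

Here `uncovered` would contain "disclosure": no key control supports it, so the matrix flags a gap for the Strength & Weakness Schedule.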
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately, the product of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with: "Observation", "Risks and Implications", "Recommendations", and "Management Comment" sub-sections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation,
** Arguments in favour of the breach and audit's comment on that argument, and
** Linking of findings to a clearly stated premise for the finding's importance: the Assertions affected.
* Inclusion of and focus on Action Plans.
Although the Report structure is one of the aspects of RIAM specifically tailored to the client, most adopt a close variation of one standard structure. RIAM includes five distinct report structures to assist clients in identifying their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews the vast majority of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
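The standard finding structure above translates naturally into a record type. The following is our sketch, not a RIAM artefact; the field names paraphrase the headings in section 6:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One detailed finding for section 6 of the report."""
    observation: str                # observed facts, legislation, directions
    assertions_affected: list[str]  # the premise for the finding's importance
    risks: list[str]                # principal risks and exposures
    recommendations: list[str]      # numbered, clear, specific actions
    management_comment: str = ""    # response and action taken

    def summary_row(self) -> tuple[str, str]:
        """Row for the section 5 checklist of findings and recommendations."""
        return (self.observation, "; ".join(self.recommendations))
```

Keeping the finding as one record makes it easy to generate both the detailed section 6 entry and the landscape checklist of section 5 from the same data.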
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
==Introduction==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
The keys to the method are '''structure''' and '''focus'''. With RIAM, Internal Audit applies a Systems and Assertion Based approach to answer targeted 'questions' about an audit area. Questions focus the review, while assertions define the criteria for the answer.
Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. The ALSBA method is a substantial enhancement to this commonly used approach. The RIAM auditor analyses compliance of the system process with a strategic and/or tactical purpose, compliance of practice with procedure, and the awareness of and readiness for the potential (risks and opportunities) in the system itself.
The avoidance of checklists keeps the auditor adaptable by enforcing a permanent 'learning' posture. More importantly, the process is universal in that the same logic structure can be applied from the strategic level through to the transactional compliance level, and from 'hard' financial processes to 'soft' subjective processes.
Very large organisations present some particular challenges for the systems based audit, including coordination of teams across multiple jurisdictions, locations and organisation units. Here we present an overview of the technical aspects of the RIAM SBA; in [[RIAM:Conduct of the Very Large Audit|Conduct of the Very Large Audit]] we explore the method in detail in both the large audit and small audit contexts.
==What Are Assertions?==
The preceding diagramme summarises the Assertion Linked Systems Based Audit analytic structure. The process starts with the five areas of Internal Audit's "Scope of work" within which Assertions are defined. Support for the selected Assertions is classified into management's 10 Control Classes (areas for management action). The systems built by management to support the Assertions within the Control Classes will have identifiable "Control Attributes" identical to those used in our Control Implementation Service, and are classifiable according to the "Type" - preventive, detective or corrective.
The concept of Assertions is the core of a RIAM Systems Based Audit. Assertions are truths that we wish to express about a system. They are formulated as statements of "fact" about a system. Examples of typical Compliance Assertions for financial aspects of a Grants Scheme might be:
That:
a. Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
b. Grant data reported/processed is:
* Attributed to the '''proper period''',
* '''Accurately''' calculated,
* Correctly and appropriately '''accumulated''',
* Accurately '''recorded''',
* Correctly '''disclosed''',
* '''Properly authorised''' with respect to transactions (ie grantee approved costs and the Commission is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are '''eligible''',
c. The relevant '''management directions''' and '''legislation are observed''':
* Payments are in accordance with legislation, and
* Approvals for grants are in accordance with the legislation (ie properly vetted by the Grant Committee and approval is given by the Board); and
d. The assets of the organisation are efficiently, effectively and otherwise '''appropriately protected and applied''' (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that Commission resources are used efficiently).
==For What are Assertions Used?==
When we say a given system is operating satisfactorily we mean that our review has tested the truth of a set of assertions and we have found that they have been sustained. Thus testing the assertions is the purpose of the audit.
Assertions are the focus of the RIAM analytic method and underlie its structure. All review activities, findings, discussions and recommendations must be able to be tied back to the review's assertions.
The result is that both the auditor and the auditee have a precise understanding of the level of comfort a given review offers.
Assertions have another huge advantage for the auditor: They allow us to frame focus questions about a system in "yes" or "no" form, which are answered by proving or disproving the assertions. For example, the question "Is system XYZ operating effectively?" is, by its nature, subjective. My meaning of the word 'effective' may be radically different from your understanding of that same word. If we say, "effective in this context means accurate and timely" then we both know that neither of us meant "authorised and consistent", or "fair and equitable".
Thus by combining a focus question with the assertions that define a "yes" answer, we, as auditors, can give management and the governance committee what they want: certainty. We do not need to hedge for the unknown - because we have stated clearly our context specific meaning.
Thus we say that assertions are the definitions of the audit project focus question.
For a detailed discussion of assertions and example assertion sets in various kinds of systems see:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
==How do we Establish Assertions?==
A review's assertions are agreed with the auditee management before the review commences. In many cases, such as financial balances audits and quality audits, we are able to recommend appropriate assertions. In other reviews, particularly those specifically requested by management, the managers will have a clear idea of particular "Questions" they wish answered by the review.
The establishment of "Questions" is the first step in selecting audit assertions.
During the entrance interview phase of the audit, management identifies a number of questions about the target system they wish to have answered. The auditor then proposes a series of Assertions, the sustaining of which will constitute an affirmative answer, and the suppressing of which will constitute a negative answer. These assertions are agreed with management.
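The question-and-assertion mechanics just described can be sketched as follows: an affirmative answer to the focus question requires every agreed assertion to be sustained. The question, assertions and test outcomes below are invented for illustration:

```python
focus_question = "Is the grants payment system operating effectively?"

# Assertions agreed with management as the definition of a "yes" answer,
# mapped to the outcome of testing: True = sustained, False = suppressed.
test_results = {
    "grant data is accurately calculated": True,
    "grant data is attributed to the proper period": True,
    "payments are properly authorised": False,
}

# The focus question is answered "yes" only if every assertion is sustained.
answer = "yes" if all(test_results.values()) else "no"

# Suppressed assertions name exactly what drives a negative opinion.
suppressed = [a for a, sustained in test_results.items() if not sustained]
```

Here the answer is "no", and `suppressed` tells management precisely which agreed definition of "effective" failed, with no hedging required.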
==What is the Assertion Linked Systems Based Approach?==
===The Objectives===
The objectives of the reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors represented by questions they request audit to answer;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
===Meeting The Objectives===
[[Image:ALSBASteps.png]]
The approach that meets the audit objectives, diagrammed above, has four phases. Here we summarise those phases. A more detailed discussion of these phases, mapped into the context of both small team and large multi-location team audits, is explored in:
* [[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of the RIAM Systems Based Audit]]
<table width="100%" border=1 >
<tr ><td >
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Define View of the Audit Area, Establish Risks, Threats & Benefits expected by Management.<br>
<br>
Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;<br>
<br>
Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
* Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
* Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
<br>
<li> Set Focus Questions, Audit Scope, Boundary & Assertions
Establish focus questions and their associated answering assertions, the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
</ol>
This topic is explored in more detail in:
*[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|ALSBA - Phase 1. FAMILIARISATION, SCOPE & PLANNING in the Very Large Audit]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 2: DOCUMENTATION AND SYSTEMS ANALYSIS====
<ol>
<li> Systems Description<br>
Build functional description of the area under review, focussing on the ten Control Classes or other appropriate classification of management action areas.<br>
<br>
Build a cyclic description of control systems, examining both time based cycles and data flows.<br>
<br>
Investigate the control systems in place to implement the functions. Tasks include:
* Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
* Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
<br>
<br>
Examine management information and reporting systems in place to monitor the operations;
<br><br>
<li> Threat Causing Assertion Failure & Controls Addressing Threats<br>
Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
* Potential strengths and weaknesses of the designed systems;
* Preliminary ranking of risk and exposures including efficiency exposures.
</ol>
More detail is available on this topic in:
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Test Systems<br>
<br>
Design a testing program and test the system and its transactions and/or data for:
* Compliance of operations with specified system (strengths);
* Occurrence of the identified weaknesses, risks or exposures;
<br>
<li> Evaluate Results<br>
Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance. <br>
</ol>
</td></tr>
</table>
<table width="100%" border=1 >
<tr ><td >
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Design Corrections<br>
Conclude and report in which we:<br>
* Identify risk and efficiency exposures to the Organisation;
* Recommend changes in the systems and procedures to the Organisation's management where these exposures are present;
* Form an opinion as to the overall reliability of the systems in place and as modified;
* Report to both management and the Audit Committee during and after each task;
<br>
<br>
The control system's ability to support the assertions, and therefore the key controls identified, are analysed at three levels:<br>
<br>
* '''Preventive Controls/Treatments'''
** Including direct controls such as authorisation and certification of forms, indirect controls such as training, maintenance of up-to-date reference material, section administration and organisation;
<br>
* '''Detective Controls/Treatments'''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
<br>
* '''Corrective Controls/Treatments'''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
<br>
<br>
<li> Gain Management Ownership<br>
Conduct exit interviews, produce the final report and review action plans as required.
<br><br>
Although the steps presented here suggest a linear sequence, the correct approach involves regular, on-going reporting to management during the conduct of the review. Interim reports, either formal or informal, should be provided during the review. The key factor is that there should be NO SURPRISES for management at the end of the review. This facilitates ownership and acceptance of the findings, recommendations and the audit generally.
<br><br>
<li> Classify Findings, Facilitate Action Plans and Update the Organisation Risk Model
The final stage of the Review is to formalise the findings and recommendations by classifying their effects on the risk evaluation of the organisation and feed these back into the risk model. The risk model both provides an ongoing measure of the organisation's risk level, and eventually feeds back into the planning process for the identification of either further action or necessary reviews.
<br><br>
</ol>
</td></tr>
</table>
===Establishing the framework===
The key principles of the framework include:
* Interviews to scope and focus the review and involvement of Management and Staff throughout the process;
* Ensuring agreement as to the purpose, focus, scope, boundary, approach and findings of the review;
* Assertions as criteria for evaluation.
* Application of Risk Analysis, not just at the Planning stage, but also the Threat Analysis stage when assessing Systems Design, and the Reporting Stage when finalising recommendations. The Audit Risk is the risk that the audit will provide a wrong opinion. This is a function of:
** The Inherent Risk in the organisation
*** the risk that an error is likely to occur;
** The Control Risk
*** the risk that the control system will not prevent, detect or correct the error; and
** The Detection Risk
*** the risk that our procedures will not identify the existence of a material error.
The ALSBA uses Assertion focussed Risk and Threat analytic procedures to minimise this risk.
* Risk and Threat analysis aims to minimise the cost of reviews by keeping procedures tuned to the real exposures, and when combined with assertions, raises the certainty that our systems opinion is correct.
* Use of a variety of report and presentation styles to best communicate information; and
* The Internal Auditor MUST become part of the management & systems improvement process, not a disinterested, occasional observer.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
==What is Threat Testing?==
Threat testing is an approach to assertion testing used as an alternative to a Desired Control Model. RIAM supports both concepts.
The key benefits of threat testing are:
* Controls analysis is kept current to the ACTUAL systems in place rather than an out-of-date control model;
* The audit process recognises and supports improvement and change in systems - essential for environments where Total Quality Management is operating;
* By evaluating the sources of possible problems, the process RESULTS in the development of Desired Control Models;
* Management is involved in the assessment of risks of systems failure;
This is a brief outline of the Threat Testing process :
* Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc. These causes are called threats. To each threat a probability of occurrence may be assigned if desired (perhaps based on historic samples).
* Each threat is then applied to the control system model (developed during the systems documentation phase) to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
* The risk of the threat occurring multiplied by the risk of system failure (Control Risk) is probability of the assertion not being sustained in operation.
The sum of all such threat related probabilities is the total risk of assertion failure in the system.
==How Do We Document Systems?==
===Working Papers===
RIAM working papers are designed to form a "tree" or pyramid with the apex being the opinion of the systems in operation, and the base being the detailed "views" or models of the organisation's systems and the testing results verifying aspects of the system's operations.
<table width="100%" border=1 >
<tr ><th >REF</th><th>CONTENTS</th></tr>
<tr><td>1</td><td>Final Audit Report and Other Relevant Files</td></tr>
<tr><td>2</td><td>Supervisor, Manager & Partner Reviews and Follow Up</td></tr>
<tr><td>3</td><td>Engagement Letters, Contract and Contacts</td></tr>
<tr><td>4</td><td>Action Plan, Client Follow Up and Correspondence</td></tr>
<tr><td>5</td><td>Matters for Manager & Partner Attention</td></tr>
<tr><td>6</td><td>Matters for Review Next Audit</td></tr>
<tr><td>7</td><td>Planning Documents and Audit Program</td></tr>
<tr><td>8</td><td>Work & Time recording Schedule</td></tr>
<tr><td>9</td><td>Background and Organisation Details</td></tr>
<tr><td>10</td><td>Organisation Objectives, Operating & Financial Policies, and Performance Measures</td></tr>
<tr><td>11</td><td>Strength & Weakness Schedule</td></tr>
<tr><td>12</td><td>Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)</td></tr>
<tr><td>13</td><td>Records of Interview</td></tr>
<tr><td>14</td><td>Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)</td></tr>
<tr><td>15</td><td>Analysis and Tests of Transactions, Processes and Account Balances</td></tr>
<tr><td>16</td><td>Other Background Data and Notes</td></tr>
</table>
''The Index for The Standard RIAM Audit File''
The foregoing index shows that the files are self contained units including not only plans and tests, but also:
* date records of client contacts;
* relevant legislation and directions;
* full internal and external cross references;
* systems documentation; and
* organisation background and structures.
Section 12 of the file contains the detailed analysis of the systems under review:
<table width="100%" border=1 >
<tr ><th >PHASE</th><th>ACTION</th><th>WHO</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion</td><td></td><td>12.</td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td></td><td>12.</td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td></td><td>12.</td></tr>
<tr><td>4</td><td>Key Controls</td><td></td><td>12.</td></tr>
<tr><td>5</td><td>Overview of the Control System (Principal Flows)</td><td></td><td>12.</td></tr>
<tr><td>6</td><td>Control System Flowcharts/Documentation</td><td></td><td>12.</td></tr>
<tr><td>7</td><td>Files & Records in the System</td><td></td><td>12.</td></tr>
<tr><td>8</td><td>Cycles in the System</td><td></td><td>12.</td></tr>
<tr><td>9</td><td>Transactions and Value</td><td></td><td>12.</td></tr>
<tr><td>10</td><td>Documents in the System</td><td></td><td>12.</td></tr>
<tr><td>11</td><td>Segregation of Duties</td><td></td><td>12.</td></tr>
<tr><td>12</td><td>Other</td><td></td><td>12.</td></tr>
</table>
''Index for Section 12 of the Standard Audit File - Control System Documentation''
The continuation of the "tree" structured analysis is evident in the above index. Each subsection contains further structured working papers, the details of which can be found in the volume "Standard Forms & Papers" of this series.
===Methods of Analysis===
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IA9DocMethods.png]]
</div>
</td>
</tr>
</table>
The working papers require that systems design is analysed BEFORE any testing is performed. While prewritten test programs can be used, the full benefit of the method is received when the systems analysis is performed using the various systems models:
<ul>
<li> Segregation of Duties Chart
<li> Client Provider Analysis
<li> Key Quantities (transaction values and volumes)
<li> Cyclic Events
<li> Annotated Data Flows, Narrations and/or Document Flows
<li> Key Controls structured by their "data flow focus":
<ul>
<li> Inputs
<li> Processes
<li> Outputs
<li> Storage
</ul>
</ul>
These models are then evaluated within the assertion/control attribute structure outlined earlier.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" width="100%">
<tr>
<td>
<div class="left">
[[Image:IAAnotatedDataFlow.png]]
</div>
</td>
<td>
<div class="right">
[[Image:IASegOfDutiesChart.png]]
</div>
</td>
</tr>
</table>
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:IAAssertionMatrix.png]]
</div>
</td>
</tr>
</table>
The working papers' documentation of the system culminates in the Assertion Matrix and the Control Strength & Weakness Chart. The systems analysis of section 12 of the file is summarised in these charts.
Within the assertion structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of annotated data flow, for a delegations analysis it might be an organisation chart, and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
==What Are Some of the Types of Reviews Conducted Within the ALSBA?==
Management Assurance services utilising the ALSBA cover the full range of Internal Audit work including:
<table width="100%" border=1 >
<tr>
<td>
* Internal Audit Unit Performance Review;
* Efficiency and Effectiveness Reviews;
* Compliance and Integrity Reviews;
* Strategic and Tactical Planning Reviews;
* Financial Audits;
* Systems Analysis and Design Review;
* Quality Audit (TQM);
* Computer Controls Implementation;
* Methodology Design and Development Review;
* Control Systems Design;
</td>
<td>
* Training Review;
* EDP Reviews (15 different types);
* Corporate Design and Planning Reviews;
* Risk Management Review;
* Change Control;
* Occupational Health & Safety;
* Inventory Management;
* Maintenance Systems;
* Process Control;
* Fraud Control; and
* Quality Management System Integration.
</td>
</tr>
</table>
==How Do We Report?==
Ultimately the product produced and of greatest significance to management is the report. Our reporting is standardised to ensure consistency of structure, coverage, presentation, language and quality.
The significant features of our reports include:
* Standardised structure;
* Systems documentation and flow charts;
* Every finding is presented with: "Observation", "Risks and Implications", "Recommendations", and "Management Comment" sub sections;
* Clear, specific and relevant recommendations, not vague references to the need to "review" an area or "correct a problem";
* Clearly argued risks and implications of each finding. An observation is analysed by:
** The assertions affected,
** Risks and exposures from the observation, and
** Arguments in favour of the breach and audit's comment on that argument;
* Inclusion of and focus on Action Plans; and
* Linking of findings to a clearly stated premise for the finding's importance: the Assertions affected.
Although the report structure is one of the aspects of RIAM specifically tailored to the client, most adopt a close variation of one standard structure. RIAM includes five distinct report structures to help clients identify their reporting needs.
The report is presented under the following headings/sections:
<ol>
<li> Executive Summary<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, the overall opinion, key findings and issues arising.
<br>
<li> Objectives and Approach<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> Scope and Boundary<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> Brief Description of the System Reviewed<br>
Covers the Purpose of the Section/Systems, The People and Organisation Structure, the Principal Activities of the Section/Systems, Documents and Records (both manual and computer) and the Reports Produced from and to the Section/Systems.
<br>
<li> Checklist of Findings, Recommendations and Action Plans<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> Detailed Findings and Recommendations<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
* Management Comment
** Management's response to the issues raised and action taken. After discussion and exit interviews the vast body of your recommendations should be accepted by management. If not, you have not done your job correctly!
</ol>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
[[Category:Internal Audit - RIAM]]
{{BackLinks}}
</noinclude>
f16b463043523a28c430d6952430f5ca3868579b
BPC RiskManager Software Suite
0
3
340
276
2018-10-29T11:57:33Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=BPC RiskManager Software Suite - Risk, Compliance and Certification=
The BPC RiskManager Software suite is an Enterprise Grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release, with updates released roughly every 3 months. Version 6 was originally released in 2006, the Enrima Edition (the current release) was first released in 2008, and the latest version was released in 2011. The system is updated continuously throughout the year, and clients are encouraged to actively participate in setting the development direction.
The Enrima edition of BPC RiskManager is a single-user and multi-user risk management, compliance management, financial statements certification, insurance, survey, and incidents & hazards system all in one application. You can manage multiple organisations and simultaneously view governance issues as risks, compliance obligations (legislation, processes and procedures) and compliance topics. It manages email-based reminders for a large variety of user expectations internally.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
{|width=100%
|-
|
* BPC RiskManager V5 (Express)
|[[image:BPCRiskManagerExpressV5.jpg]]
|-
|
* BPC RiskManager V6 (Enrima Edition)
|[[image:BPC_RiskManager_V6261_Main_Screen.jpg|600px]]
|}
=Client Base=
BPC RiskManager clients are headquartered in Australia, Canada, the United Kingdom and the United States of America. Global clients, of course, have offices in many other countries. [http://www.bishopphillips.com Bishop Phillips Consulting] has local offices in both Australia and North America.
The system is used extensively in the education sector, with a very substantial presence in universities in both Australia and Canada, and in commercial education providers and colleges in the USA. Other significant client groups include insurance providers (both primary insurers and reinsurers), central government agencies (such as federal and state/province departments and local government), and utilities (postal, electrical and water).
BPC RiskManager implements and substantially extends the risk management standards "AS/NZS 4360:2004 Risk Management" and "ISO 31000", and complies with "ISO/IEC Guide 73 - Risk Management – Vocabulary".
BPC RiskManager is not restricted to a single interpretation of the risk standards. As a consequence of its long market history, it implements a large number of divergent risk management methodologies. Any combination of one to three assessment groups, each containing ratings for likelihood, consequence and control, is possible. For example, some clients use a risk management methodology with risk budgets and three rating groups, "Inherent, Residual and Target", where inherent ratings shift with external factors, target shifts with the corporate risk appetite (i.e. a risk budget), and residual floats according to assessment ratings.
Any number of self assessments in each group can be maintained together with a separate family of assessments and remediations created by audit/expert that coexist with management's risk assessments.
Whether your preferred risk methodology uses quantification (quantitative risk analysis) or qualification (qualitative risk analysis), BPC RiskManager directly supports the approach on a per-assessment basis. Terminology (including field names, field purposes and screen captions) is fully customisable, so the system can directly implement the corporate risk methodology.
=Get a Fully Functional Evaluation Copy of BPC RiskManager for FREE=
You can get a free no-obligation fully functional copy of BPC RiskManager (Enrima Edition) simply by completing the request form here:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php I want to evaluate BPC RiskManager without obligation for free, please.]
It will work for 60 days, and if you need more time you can contact us and request a longer evaluation. There are no limitations in the evaluation version, and we will even give you support for free while you get it running. It is fully self-installing and will open your first risk database when the installer finishes.
If it isn't right for you, you can just uninstall after the 60 days with no further obligation to us.
=Knowledge Base=
*[[BPC RiskManager V6 Enterprise (Enrima Edition)]]
** [[BPC RiskManager V6 Enterprise (Enrima Edition)| BPC RiskManager Features]]
** [[BPC RiskManager V6.2 Network Architecture]]
** [[RM625ENT Installation Instructions|BPC RiskManager V6.2.5 Installation Instructions]]
** [[BPC RiskManager Frequently Asked Questions|BPC RiskManager - Frequently Asked Questions]]
** [[BPC RiskManager Quick Help With Common Tasks]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
** [[BPC RiskManager V6 on 64 bit Windows]]
*[[BPC SurveyManager - Overview]]
** [[BPC Surveymanager - Key Features]]
** [[BPC SurveyManager - Introduction]]
** [[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
** [[BPC SurveyManager - Creating Surveys - The Page Script]]
** [[BPC SurveyManager - Questions and Input Controls]]
** [[BPC SurveyManager - Creating Surveys - Properties]]
** [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
** [[BPC SurveyManager - The Built In Reports]]
** [[BPC SurveyManager - Advanced Database Configuration Settings]]
** [[BPC SurveyManager - Client Overview]]
** [[BPC SurveyManager - Tutorials - Survey Layouts]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
<noinclude>
[[Category:Featured Article]]
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
Risk Management - Introduction
2018-10-29T11:57:33Z
Bishopj
==What Is Risk Management?==
===Risks, Causes & Consequences===
Risks to your operations and assets are a permanent and inescapable aspect of existence. Put simply, if you have an objective, the possibility exists that it may not be achieved. That possibility is risk.
Inputs required for your objective may not be available when required, their cost may make the objective unviable, or the social or technical assumptions may be invalidated, etc. These are threats, or causes of objective failure, and therefore causes of risk. Threats exist - some latent and some active - but all are potential causes of the failure to achieve your objective (with varying likelihoods).
Further, it may be that failure to achieve the objective, or preserve the asset may have impacts far beyond the loss of the expected benefit to be derived, or value of the asset lost. Those impacts are the consequences. For example, at the individual business level, failure to achieve a strategic objective may result in failure of the business, while on the international stage, failure to achieve a diplomatic objective may impact the society detrimentally for generations to come, and failure to protect a critical military or hazardous materials technology may result in extensive loss of life.
Lastly, a risk may not be a bad thing - it might be a good thing, more commonly known as "an opportunity". Likewise, an impact may not just range from "nothing" to "really bad", but from "really good" through "nothing" to "really bad". In its fullest extent, risk management covers both opportunities and exposures. Most of the following discussion will consider risk management in its more common guise as managing exposures, but when we consider "Competitive Risk Management" we will once again expand the definition.
<br>
===Risk Appetite===
The degree to which these undesired outcomes are more or less certain will affect your degree of concern about them. At the extreme ends, everybody may have much the same response: an undesired outcome that is virtually certain to occur will probably be judged as unacceptable, while an undesired outcome that is virtually certain not to occur will probably be judged as acceptable. Between these extremes, each individual, organisation and society will have differing determinations of acceptability. This determination is also likely to vary with the nature of the undesired outcome (for example, a 50% chance of losing thousands of lives is generally considered less acceptable than a 50% chance of losing ten dollars). This variance in judgement is the risk appetite - literally, your or your organisation's willingness to passively accept the possibility of a particular type of undesired outcome.
===Risk Response, Mitigation and Control===
The reactive leader, when faced with changed circumstances, will rapidly form a response. These responses are designed to minimise the consequences of the threat event and are risk mitigation actions, or risk treatments. Of course, some responses (like avoidance or insurance) are by this time out of the question - as the threat has materialised. Faced with too many changes or too big a change in circumstances, even the most responsive leader can be overwhelmed, and the process fails with the objective not achieved.
A wise leader then (at least) learns from experience, and establishes processes to minimise the likelihood of similar threat events occurring (prevention), to detect when they occur (detection) and immediately respond and mitigate the consequences when they occur regardless (correction). These preplanned and pre-established processes of prevention, detection and correction are controls.
===Rating a Risk===
All controls have a cost - whether measured in money, time, tactical advantage, etc. Too much control may make the achievement of the objective unviable. The leader may judge that some threats experienced are unlikely to occur again (for example, Year 2000 date risk was a one-off, as the year 2000 is unlikely to occur again in this timeline!). Other threats will be considered almost certain - such as a sunny day melting an unrefrigerated cargo of ice cream. The probability that a threat will eventuate is its likelihood. Where the likelihood is very low, the leader may judge that the threat is not worth the cost of controlling.
Likewise, some consequences of threat events are so minor that they can be ignored, while others are catastrophic to the objective. This judgement is the impact rating of the consequence.
The likelihood of a threat event, combined with its level of impact on the achievement of the objective, constitutes the inherent risk to the achievement of the objective.
Although not yet part of the standard, over recent years an additional rating parameter has been argued for consideration: "velocity". The velocity of a risk is the speed with which a causal event translates into an outcome. Velocity is inversely related to time: the shorter the time it takes for a causal event to result in a specific impact, the higher the velocity.
Conversely, if we are going to consider a time-based measure for the onset of a risk event, we should allow for a velocity measure on the mitigation side of the equation. Here we have two types to consider. Pre-event controls (such as training and documentation manuals) have a velocity measure that acts during a different phase from that during which the impact velocity is measured. The control velocity of specific interest to mitigating impact velocity is that of the reactive controls - Event (or Error) Detection and Event (or Error) Correction controls.
<blockquote>
'''NOTE:''' Controls fall into one of three groups - Prevention, Detection and Correction. The first group identifies proactive controls (although some control steps in a given strategy of controls may be reactive even here), while the latter two describe purely reactive controls. Note that under this view the process of setting up a reactive control system and training the participants and systems in the operation of that control is itself a proactive step and hence a Preventive control, while the operation of the actual control itself is, to the triggering causal event, reactive.
</blockquote>
A similar case may, on the face of it, be advanced for direct estimation of risk frequency. Specifically, such a measure is one of the frequency of a causal event, with an assessed likelihood of triggering at each cycle. The amount of time required for a single cycle from Causal Event A<sub>0</sub> to the next potential occurrence of Causal Event A at time 1 (i.e. A<sub>1</sub>) is the velocity of the likelihood of a causal event being once again tested. On this basis we could again track the velocity of the likelihood.
A reasonably strong case might also be advanced that likelihood measures carry an implied frequency measurement, as people tend to rate things as more certain to occur if they are almost always occurring than when rarely experienced, even if the causal event does actually occur on those rare occasions. On this argument, rating likelihood velocity in fact double-weights the likelihood rating.
This author leans to the former view. If we are separating some velocities from their coupled ratings, we should consistently apply the logic of separation to them all. On that basis the probability or reliability estimates are consistently cleansed of time subjectivity, and thence become an instantaneous rating rather than a multi-period rating of the probability, impact or dampening (control mitigation rating). In database design terms, the rating measures are normalised with respect to time. The obvious benefit is that the greater the consistency among the properties (functional and data), if not the content of those properties, the greater the reliability with which the items can be combined to give a result that varies consistently with its inputs (in this case a risk rating). If some of the inputs are themselves functions of other inputs (such as time), the result of combining the various components of the risk formula will not appear to move consistently with the inputs.
A further benefit of separating velocity information is the colour it might bring to the risk analysis. One can picture a risk model where the assessment of an otherwise well rated risk - on the basis of likelihood velocity (think: frequency), impact velocity (think: "How quickly will this hit us?"), preventive control velocity (think: "How long will it take for the training to be completed?"), detection control velocity (think: "How quickly will we know that the wheels have fallen off?") and correction control velocity (think: "How quickly will we have cleaned up the mess?") - might reveal some fascinating structural problems in a control system. Consider a 12-month wait for detection controls to be put in place against a high-to-medium impact event happening every week, where those detection controls then tell us only at the end of a quarter that a problem occurred which will take 6 months to fix. We would want to know this, even though individually all these controls received the highest effectiveness ratings. Of course, if our risk formula dealt with these items properly as part of its model, we would not have a well rated risk with such problems!
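The structural mismatch just described can be made concrete in a few lines of code. This is a hypothetical illustration only - the function name, day-based units and figures are assumptions for the sake of the example, not part of RIAM:

```python
# Hypothetical sketch: flag structural velocity mismatches in a control
# system. Units (days) and the gap calculation are illustrative assumptions.

def velocity_gap(impact_velocity_days, detection_days, correction_days):
    """Return the shortfall (in days) between how quickly a risk event
    bites and how quickly the controls can detect and fix it.
    A positive result means the controls are structurally too slow,
    regardless of their individual effectiveness ratings."""
    response_time = detection_days + correction_days
    return response_time - impact_velocity_days

# An event felt within a week, detection only at quarter end (~90 days),
# and a six-month fix (~180 days): the controls lag the event badly.
gap = velocity_gap(impact_velocity_days=7, detection_days=90, correction_days=180)
print(gap)  # 263
```

Individually, each control might score well on effectiveness; only when the velocities are laid side by side does the nine-month lag become visible.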
Expressed as a formula where f() means a function of the items in parentheses, the risk equation with all these potential inputs is then:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(C<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;C
:Means Mitigating Strategies and Controls effectiveness rating mitigating causal events and consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating each impact and possibly some to all causal events
This formula says nothing more than that the risk rating is a function of eight variables: whole-of-risk likelihood, likelihood velocity, impact and impact velocity, mitigated by whole-of-risk control effectiveness-reliability working over three velocities - prevention control velocity, detection control velocity and correction control velocity. In turn, the value supplied for each of these ratings is itself a function mapping the assessed value of the rating to a normalised value (such as the range of reals from -1 to 1, or a shared 5-point scale, etc.).
The weakness in this formula lies in the consolidation of the three control groups into a single control rating for the purposes of the risk function itself (thus hiding the relationship between the control group velocities and the control group ratings). Separating the control groups gives:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating each impact and possibly some to all causal events
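As a rough illustration, the refined formula above might be sketched in code as follows. The text deliberately leaves every f() unspecified, so the combining rules below (likelihood and impact each amplified by their velocities, damped by averaged control-group effectiveness) are assumptions for illustration only, as is the normalisation of all ratings to [0, 1]:

```python
# Illustrative sketch of the refined risk formula R_i. The source leaves
# every f() unspecified; the combining rules here are assumptions, and
# all ratings are assumed pre-normalised to the range [0, 1].

def risk_rating(L, LV, I, IV, CP, CD, CC, CPV, CDV, CCV):
    # Exposure side: likelihood and impact, each amplified by its velocity.
    exposure = (L * (1 + LV)) * (I * (1 + IV))
    # Mitigation side: each control group's effectiveness, amplified by
    # how quickly that group acts (prevention, detection, correction).
    mitigation = (CP * (1 + CPV) + CD * (1 + CDV) + CC * (1 + CCV)) / 3
    # Residual risk: exposure damped by mitigation, clamped to >= 0.
    return max(0.0, exposure * (1 - min(1.0, mitigation)))

# A likely, fast-moving risk with effective but slow controls:
r = risk_rating(L=0.8, LV=0.9, I=0.7, IV=0.8,
                CP=0.6, CD=0.7, CC=0.5, CPV=0.1, CDV=0.2, CCV=0.1)
```

Whatever concrete functions are chosen, the design point stands: because each velocity enters the formula alongside its coupled rating, a slow control visibly weakens the mitigation term rather than hiding behind a high effectiveness score.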
<br>
===From Risk Response To Risk Management===
Faced with a similar objective at another time, the prudent leader moves from re-action to pre-action. He applies his own and others' past experience, "common sense" and deductive reasoning to identify the nature and causes of potential threat events and their consequences. He makes judgements as to the likelihood of these identified threats, and as to the degree of impact arising from their consequences. This process is risk identification and assessment.
Comparing this assessment to the organisation's risk appetite, he determines a range of risk responses, treatments or controls. With the shift to pre-action (more commonly described as proactive management), the leader's options widen compared to the earlier reactive state. By preplanning the risk profile he is able to consider avoidance (just don't do it!), risk sharing (insurance) and threat prevention (training) as options in the risk mitigation armoury. Further, the costs of each mitigation strategy can be considered against the benefits expected from the achievement of the objective, and the most effective, efficient and economic ones chosen.
In all cases a threat has a "tell-tale" which must be used to detect that the threat has eventuated, or that the likelihood of a threat has changed. The controls required in this case are detection controls. As with the other pre-action choices, detective controls are most advantageous before the event occurs - once it has occurred, they will generally tell you what you already know. This shifting assessment of risk, based on changes in likelihood over time, is the current risk.
Implementing detection controls allows the leader to defer the implementation (if not the planning, design and establishment) of other reactive controls, thus delivering a degree of certainty over the costs of mitigation at each point in a project, under a variety of circumstances and levels of current risk.
Once the controls (or risk mitigation plan) are applied to the assessed inherent risk of the objective, the result is the residual risk - that portion of the inherent or current risk that remains after the controls have been applied.
Risk Management is about applying a structured thought process to identifying and managing such risks.
In one form or another, every leader undertakes risk management from the minute a political ideology, manifesto, business vision, organisational mission, or business or political objective is established. Without a plan - however loosely defined - the objective is unlikely to be achieved. That plan is a map for managing the risks of non-achievement of the objective - starting with the most obvious risk: "inaction".
While Compliance Management is a governance process for managing adherence to internally and externally known standards, policies, procedures and controls, Risk Management is an approach to governance that aims to identify which plans, standards, policies, procedures and controls are required, how important each part is to the purpose, and when you will know that additional actions are required. Risk Management is a systematic process of making a realistic evaluation of the true level of risks to your purpose, and mitigating those risks that exceed your risk appetite in the most efficient, effective and economic manner possible.
==What Is Enterprise Risk Management?==
Enterprise Risk Management takes the concepts outlined at the project or single-objective level described above and applies them across the enterprise, government, or society (as appropriate). Enterprise risk management distinguishes itself from project risk management by its aims:
* Firstly, it aims to reduce duplication of risk management planning and risk mitigation strategies by facilitating cross-organisational sharing of control frameworks, management expertise, and resources.
* Secondly, it aims to minimise contradictory, counter productive and mutually exclusive risk management strategies by facilitating enterprise wide knowledge of the risk profile of the organisation.
* Thirdly, it aims to inform the governance team of their true organisation wide position on a continuous and instantaneous basis.
* Fourthly, it aims to forecast the risk profile of the organisation within, at least, the decision cycle of the governance team.
==What is Competitive Risk Management?==
So far, we have considered risk management as a stability governance tool for assisting the achievement of identified objectives. In essence, under this view it is a defensive strategy. The scope of governance arguably extends beyond maintenance of environmental stability and achievement of defined near-term deadlines and objectives, to the identification of the correct objectives (those that succeed on some measure) and longer term aspirational objectives such as "more profit" or, in social measures, "higher average literacy".
This shift implies that two additional dimensions should be considered:
#A risk may also be an opportunity, and an impact may be both positive and negative. Where the impact is positive for the organisation, the correct corrective control response is in fact to augment the effect (such as by adjusting the causal states of other risks (opportunities)). The overall implication is that to accommodate opportunity, the risk rating scheme needs to be balanced around 0 (meaning minimum risk and minimum opportunity). Whether this is best done with a positive scale and a negative scale, or with a linear scale with a floating normal line, is, I think, an implementation question at this stage.
#A risk/opportunity may have a group of controls (strategies) intended alternately to mitigate (Prevent, Detect, Correct) and augment (Focus, Sense, Enable) a risk in some way. Note that we are expanding our control groups from three to six. This is necessary where two impact rating scales are used (an opportunity scale and an impact scale). If only a single monotonic impact scale were used, e.g. "really-good to negligible to really-bad", we could possibly escape with four groups: Focus, Prevent, Detect, Correct. Focus is the opportunity's version of Prevent. The difference is that in the case of a risk, an effective preventive control reduces the residual likelihood (if not the inherent likelihood) of a causal event, while for an opportunity we want precisely the opposite outcome. Thus we need to track these separately. In the case of the two-scale system, we need the "opportunity" equivalents of the detection and correction control functions separated as well.
In competitive risk management we utilise the techniques of "defensive" risk management as a method to inform competitive strategy. The same methods that are applied to determine and manage or avoid your risks can be applied to:
#determine, induce and exploit your opportunities, and select the opportunities most likely to be successfully exploited; and
#determine and trigger your competitor's risks, and determine where they are either most exposed, or where their responsive mitigation costs will be greatest. In this use there is an implied additional measure/counter-measure relationship between controls, where an augmentation strategy is defined that is designed to detect or counter another mitigation strategy.
In competitive risk management we therefore look to identify and exploit our opportunities and the weaknesses of others through the application of risk management techniques. Such an application of the method is likely to be most effective where knowledge of the competitor or competing industry approaches perfection, and the accuracy of the model used approaches perfect accuracy. There are interesting implications in game theory where all participants in a market use equivalently competitive risk management methods and have equivalently perfect knowledge.
Competitive risk management is therefore a strategy setting process. In both cases the analysis expands the colour of the control analysis part of our formula described in the previous section. Specifically, the changes required are to accommodate additional ratings and velocities, and to allow risk and opportunity to be treated in a single function (e.g. one possibly describing a parabolic or logarithmic curve as its output).
Our revised formula for competitive risk then becomes:
RO<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CF<sub>i</sub>), f(CS<sub>i</sub>), f(CE<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CFV<sub>i</sub>), f(CSV<sub>i</sub>), f(CEV<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;RO
:is expressed in a single scale such as "really-good to negligible to really-bad", or as complex numbers with two scales: a rating (high to negligible) and a binary (two position) scale - "Opportunity or Risk"
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;CF
:Means Enabling Strategies and Controls effectiveness rating at focussing causal events.
;CS
:Means Enabling Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CE
:Means Enabling Strategies and Controls effectiveness rating for increasing the likelihood of further causal events and enabling consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CFV
:Means Focus Control Velocity Rating for each causal event
;CSV
:Means Sensing Control Velocity Rating for each causal event and possibly some to all impacts
;CEV
:Means Enabling Control Velocity Rating for each enabling control enabling impacts and possibly some to all causal events
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating control for all impacts and possibly mitigating some to all causal events
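To make the two-sided rating scheme concrete, here is a minimal sketch (not part of the source model - the combining rules, the [0, 1] input ranges and the function names are illustrative assumptions) of a rating balanced around 0, with mitigating controls dampening exposures and enabling controls amplifying opportunities:

```python
# Illustrative sketch of a risk/opportunity rating on a single scale
# balanced around 0: negative values represent risk, positive values
# represent opportunity. All inputs are assumed pre-normalised to [0, 1];
# the combining functions are placeholders, not the source's model.

def risk_opportunity_rating(likelihood, impact, mitigating, enabling):
    """Combine ratings into a single score in [-1, 1].

    impact < 0 means exposure, impact > 0 means opportunity.
    mitigating = (Prevent, Detect, Correct) effectiveness ratings;
    enabling   = (Focus, Sense, Enable) effectiveness ratings.
    """
    raw = likelihood * impact  # inherent rating; sign carries risk vs opportunity
    if raw < 0:
        # Mitigating controls dampen a negative (risk) rating.
        return raw * (1 - max(mitigating))
    # Enabling controls amplify a positive (opportunity) rating,
    # capped so the result stays within the scale.
    return min(1.0, raw * (1 + max(enabling)))

# A likely causal event with a strongly negative impact, well controlled:
exposure = risk_opportunity_rating(0.8, -0.9, (0.7, 0.5, 0.6), (0, 0, 0))
# The same likelihood with a positive impact, actively focused:
opportunity = risk_opportunity_rating(0.8, 0.9, (0, 0, 0), (0.5, 0.2, 0.3))
```

The `if raw < 0` branch is where the six control groups split: Prevent/Detect/Correct only ever pull the score toward 0, while Focus/Sense/Enable push it toward +1.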
<br>
==The Evolution of the Risk Management Standard==
In Australia, a team of experienced risk management practitioners was assembled over two decades to codify a standard for risk management as it had been (and was being) developed and deployed in Australia and New Zealand. That codification was initially released by Standards Australia as AS/NZS 4360:1995, revised as AS/NZS 4360:1999 and revised again in its current version as AS/NZS 4360:2004. You can access the standard via the [http://infostore.saiglobal.com/store/Details.aspx?DocN=AS0733759041AT SAI Risk Management Portal]. While still very much in its infancy as a governance tool, and immature as a management science, risk management has rapidly been adopted across the world and is now codified into an international standard: ISO 31000:2009 (October 2009), supported by ISO Guide 73:2009 - both largely based on the AS/NZS standard.
==The Classical Approach==
In classical risk management - with respect to a given focus: a business, a business objective, an asset, etc. - we are told to identify the risks first, so that they can be properly managed. In its classical form, risk management asks, and attempts to answer, three questions:
*What can go wrong?
*What can I do to prevent it?
*What do I do if it happens?
You are advised to develop a risk register to document each potential problem, its level of seriousness, what is required to fix it, who will fix the problem, and monitor progress.
There are essentially four things you can do with risk. We will call them, the four T's:
* Tolerate it (by accepting or ignoring a risk - this is where the profit lies)
* Treat it (by actively re-mediating or controlling it)
* Transfer it (by insuring it, perhaps better described as "sharing it")
* Terminate it (by exiting the business that incurs it)
It is critical that leaders understand that risk management is NOT about avoiding risk, but about managing it.
==The Evolution of a Risk Management Thought==
The concepts of risk and reward management are not new to mankind. The walls of cities and castles were early forms of risk management, and Hadrian's Wall, Agricola's Wall, the Antonine Wall, and the Great Wall of China are dramatic statements of risk containment on a social scale.
History is littered with authors and thinkers exploring the relationship between risk awareness, risk exploitation, active management and outcomes. Military and political strategists have employed the concepts underpinning modern risk management for centuries. The writings of military and political strategists such as Sun Tzu ("The Art of War"), Carl von Clausewitz ("On War"), Niccolò Machiavelli ("The Prince", "The Art of War"), and Miyamoto Musashi ("The Five Rings") are all examples of the practical application of risk awareness in strategy formation. To varying extents these works all encourage an awareness of one's own and one's opponent's weaknesses, and the mitigation and exploitation of the same.
Perhaps what is new is the codification of the process of identifying, measuring, assessing, and responding to risk laid down in the more recent writings. It would be naive, however, to consider that risk management, per se, is new. The difference between a successful manager and an unsuccessful manager has always been their ability to see the potential reward in an opportunity and strike the correct balance between ignoring, avoiding, transferring and mitigating risks. Too much risk avoidance means opportunities are not exploited, too much control or insurance means that there is no profit left from the risky activity, and too much ignorance means that eventually the strategy's angel will become history's fool.
In the absence of a formalised approach to risk management, the successful business leader is known as lucky. In truth, the success is probably more due to that leader's accident of DNA and life experience leading to instinctively correct risk judgements. It is possibly this instinct, more than anything else, that justifies the executive salary differentials.
There is an important observation to be made from the historic context of risk management theory. Currently risk management professionals tend to view the discipline as an extension of strategy achievement, yet historically, risk management has been as much about strategy identification and formation as about implementation.
Good risk management looks both inward and outward. By this I mean that risk management can be applied both to minimising your chance of failure and maximising your competitor's chance of failure. The essence of the military strategist's thinking is to identify the weaknesses of the opponent and exploit them to your own advantage. Application of the principles of risk management can enable you not only to identify the opponent's weaknesses, but to identify the probable strategies they will employ to manage the risks arising from those weaknesses, and hence better inform your planners about potential strategies to employ.
Over the last 50 years a number of frameworks addressing risk management with respect to governance have emerged out of the experience of the different professional groups involved in strategic management, asset protection, public accountability, finance and risk. These groups include:
* Internal Audit - focused on control system reliability
* External Audit - focused on true and fair representation of financial position on a going concern basis
* Actuarial Science - focused on the pricing of risk for insurance
* Investment banking - focused on the pricing of risk for portfolio management, hedging, capital fees and adequacy
* Risk Management - focused on management of risk to strategic and tactical outcomes on an enterprise and societal basis
Setting aside the military and political authors, among the business community, some of the earliest work in risk management arose from the financial advisory community looking for models to minimise the downside risks to financial products investment.
==A Mathematical Basis To Risk Measurement==
As early as 1952 Harry M Markowitz published his paper "Portfolio Selection" in the Journal of Finance, exploring the advantages of risk diversification through balanced portfolio selection. The essence of portfolio theory is that risk essentially expresses the potential for a negative return (financial loss), and that an investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly positively correlated (correlation coefficient -1 ≤ r < 1).
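Markowitz's diversification result can be illustrated numerically for the two-asset case (the figures below are hypothetical, not from the source): the standard deviation of an equally weighted portfolio falls as the correlation between the assets drops below +1.

```python
# Two-asset portfolio standard deviation under varying correlation,
# illustrating the diversification benefit described by portfolio theory.
import math

def portfolio_std(sd_a, sd_b, w_a, corr):
    """Standard deviation of a two-asset portfolio with weights w_a and 1 - w_a."""
    w_b = 1 - w_a
    variance = (w_a**2 * sd_a**2 + w_b**2 * sd_b**2
                + 2 * w_a * w_b * corr * sd_a * sd_b)
    return math.sqrt(variance)

# Two assets, each with 20% volatility, held 50/50:
perfectly_correlated = portfolio_std(0.20, 0.20, 0.5, 1.0)   # no diversification benefit
uncorrelated = portfolio_std(0.20, 0.20, 0.5, 0.0)           # risk materially reduced
perfectly_hedged = portfolio_std(0.20, 0.20, 0.5, -1.0)      # risk eliminated
```

At correlation +1 the portfolio is exactly as risky as its components (20%); at correlation 0 the risk falls to about 14.1%; at -1 it vanishes entirely.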
To a greater or lesser extent the professional bodies, standards organisations and government agencies have responded with guidelines and standards for the measurement, application, response and management of risk as it applies to their specific problem domains. In 1978 the Institute of Internal Auditors - the international professional body of the Internal Audit profession - issued its Standards for the Professional Practice of Internal Auditing (SPPIA). In one of the earliest standards-based references to risk based management, the standards included standard 320: "Compliance with Policies, Plans, Procedures, Laws and Regulations". The statement determined that "Internal auditors should review the systems established to ensure compliance with policies, plans, procedures, laws and regulations which could have a significant impact on operations and reports, and should determine whether the organisation is in compliance". The SPPIA standards mandated the
==Alternative Standards and Views of Risk Management==
Among the definitive pronouncements on risk management are:
* The King Report on Corporate Governance for South Africa (SA King II - 2002)
* A Risk Management Standard (RMS 2004) by the Federation of European Risk Management Association (UK FERMA)
* Australian/New Zealand Standard 4360—Risk Management (A/NZ 1995, 1999, 2004)
* COSO’s Enterprise Risk Management— Integrated Framework
* The Institute of Management Accountants’ (IMA)
* “A Global Perspective on Assessing Internal Control over Financial Reporting” (ICoFR)
* Basel II
* Standard & Poor’s and ERM
* ISO 31000:2009
Building on the work of many years, the middle of the first decade of the millennium saw a succession of enterprise risk management (ERM) related pronouncements. AS/NZS 4360:2004 defined the risk management process as the “'''systematic application of management policies, procedures and practices to the tasks of communicating, establishing the context, identifying, analysing, evaluating, treating, monitoring and reviewing risk'''”. For the financial sector, the earlier BASEL I standard was superseded by BASEL II, which closely mirrored the view of AS/NZS 4360.
Expanding on an earlier Internal Control Framework from the early 1990s, the Committee of Sponsoring Organisations of the Treadway Commission (COSO) released the ‘Enterprise Risk Management (ERM) – Integrated Framework’, which attempted to map the COSO framework that formed the motivational basis for the US Sarbanes-Oxley compliance legislation into a broader enterprise risk management framework. The COSO/ERM framework defined enterprise risk management as:
* A process, ongoing and flowing through an entity,
* Effected by people at every level of an organisation,
* Applied in strategy setting,
* Applied across the enterprise, at every level and unit, and includes taking an entity-level portfolio view of risk,
* Designed to identify potential events that, if they occur, will affect the entity and to manage risk within its risk appetite,
* Able to provide reasonable assurance to an entity’s management and board of directors,
* Geared to achievement of objectives in one or more separate but overlapping categories.
The standards enjoy a shared purpose to improve the predictability of business outcomes, but differ significantly in how that certainty is to be improved. While 4360 describes the process for the management of risk, BASEL II mandates that a firm’s operational risk management (ORM) system must be “conceptually sound and implemented with integrity”, but stops short of defining the form or process of the ORM. BASEL II does specify that the ORM should be maintained by an independent operational risk management function, and that it is to consist of at least “strategies, methodologies and risk reporting systems". It identifies that the purpose of the ORM is to "identify, measure, monitor and control/mitigate operational risk”.
Under BASEL II, the ORM systems should be:
* “credible and appropriate”,
* “well reasoned, well documented”,
* “transparent and accessible”, and
* capable of being validated by audit.
Among the failings of BASEL II is the lack of definition of these key terms, which, in a sense, is where AS/NZS 4360 and the COSO ERM Framework come in. The latter standards provide a framework under which a credible, reasoned, transparent, documented and verifiable risk management model can be established.
AS/NZS 4360 and COSO do not eliminate failure in the ORM/ERM, however, as in their implementation there is still considerable subjectivity in risk identification and assessment, and within the process documented by the standard there is no mechanism for proving or measuring "completeness". They do, however, populate the next level of the BASEL II obligation.
This problem of "completeness" in ERM frameworks should not be underestimated. It is present in all current risk management standards and is possibly a key reason for failure in ERM frameworks. We shall explore approaches to solving this problem in later papers.
Owing to their differing origins the three standards employ slightly different terminology for shared ideas:
* AS/NZS 4360 refers to ‘Risk Treatment’, COSO to ‘Risk Response’ and Basel II uses ‘Risk Mitigation’.
While the seven ‘elements’ of AS/NZS 4360:2004 framework do not align precisely with the eight ‘components’ of the COSO process, the ‘end to end’ risk management process is the same.
<table cellpadding="10" >
<tr>
<th>
AS/NZS 4360: 2004
Framework
</th>
<th>
COSO ERM–Integrated
Framework
</th>
<th>
BASEL II ORM
Framework
</th>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Internal environment
</td>
<td>
</td>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Objective setting
</td>
<td>
</td>
</tr>
<tr>
<td>
Identify risks
</td>
<td>
Event identification
</td>
<td>
Identify
</td>
</tr>
<tr>
<td>
Analyse risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Evaluate risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Treat risks
</td>
<td>
Risk response and control activities
</td>
<td>
Control/mitigate
</td>
</tr>
<tr>
<td>
Monitor and review
</td>
<td>
Monitoring
</td>
<td>
Monitor
</td>
</tr>
<tr>
<td>
Consult and communicate
</td>
<td>
Information and communication
</td>
<td>
</td>
</tr>
</table>
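The terminology mapping in the table above can also be expressed as a simple lookup, keyed on the AS/NZS 4360 element (a sketch; `None` marks cells the table leaves blank, and the structure itself is an illustrative choice):

```python
# The AS/NZS 4360 -> (COSO ERM, BASEL II ORM) terminology mapping from the
# table above, as a lookup. Each AS/NZS element maps to a list of
# (COSO component, BASEL II step) pairs; None marks a blank cell.
FRAMEWORK_MAP = {
    "Establish the context":   [("Internal environment", None),
                                ("Objective setting", None)],
    "Identify risks":          [("Event identification", "Identify")],
    "Analyse risks":           [("Risk assessment", "Assess")],
    "Evaluate risks":          [("Risk assessment", "Assess")],
    "Treat risks":             [("Risk response and control activities",
                                 "Control/mitigate")],
    "Monitor and review":      [("Monitoring", "Monitor")],
    "Consult and communicate": [("Information and communication", None)],
}
```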
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
0c92f2577353da0d73bf684aee6689d18b9f93ee
388
342
2018-10-29T12:04:05Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==What Is Risk Management?==
===Risks, Causes & Consequences===
Risks to your operations and assets are a permanent and inescapable aspect of existence. Put simply, if you have an objective, the central possibility exists that your objective may not be achieved. That possibility is risk.
Inputs required for your objective may not be available when required, or the cost of the same may make the objective inviable, or the social or technical assumptions may be invalidated, etc. These are threats, or causes of objective failure, and therefore causes of risk. Threats exist - some latent and some active, but all are potential causes of the failure to achieve your objective (with varying likelihoods).
Further, it may be that failure to achieve the objective, or preserve the asset may have impacts far beyond the loss of the expected benefit to be derived, or value of the asset lost. Those impacts are the consequences. For example, at the individual business level, failure to achieve a strategic objective may result in failure of the business, while on the international stage, failure to achieve a diplomatic objective may impact the society detrimentally for generations to come, and failure to protect a critical military or hazardous materials technology may result in extensive loss of life.
Lastly, a risk may not be a bad thing - it might be a good thing, more commonly known as "an opportunity". Likewise, an impact may not just range from "nothing to really bad" but from "really good to nothing to really bad". In its fullest extent risk management covers both opportunities and exposures. Most of the following discussion will consider risk management in its more common guise as managing exposures, but when we consider "Competitive Risk Management" we will once again expand the definition.
<br>
===Risk Appetite===
The degree to which these undesired outcomes are more or less certain will affect your degree of concern about them. At the extreme ends, everybody may have pretty much the same response: an undesired outcome that is virtually certain to occur will probably be judged as unacceptable, while an undesired outcome that is virtually certain not to occur will probably be judged as acceptable. Between these extremes each individual, organisation, and society will have differing determinations of acceptability. This determination is also likely to vary with the nature of the undesired outcome (for example a 50% chance of the loss of thousands of lives is generally considered less acceptable than a 50% chance of the loss of ten dollars). This variance in judgement is the risk appetite - literally your or your organisation's willingness to passively accept the possibility of a particular type of undesired outcome.
===Risk Response, Mitigation and Control===
The reactive leader, when faced with changed circumstances, will rapidly form a response. These responses are designed to minimise the consequences of the threat event and are risk mitigation actions, or risk treatments. Of course, some responses (like avoidance or insurance) are by this time out of the question - as the threat has materialised. Faced with too many changes, or too big a change in circumstances, even the most responsive leader can be overwhelmed, and the process fails with the objective not achieved.
A wise leader then (at least) learns from experience, and establishes processes to minimise the likelihood of similar threat events occurring (prevention), to detect when they occur (detection) and immediately respond and mitigate the consequences when they occur regardless (correction). These preplanned and pre-established processes of prevention, detection and correction are controls.
===Rating a Risk===
All controls have a cost - whether measured in money, time, tactical advantage, etc. Too much control may make the achievement of the objective inviable. The leader may judge that some threats experienced are unlikely to occur again (for example Year 2000 date risk was a one-off, as the year 2000 is unlikely to occur again in this timeline!). Other threats will be considered almost certain - such as a sunny day melting an unrefrigerated cargo of ice cream. The probability that a threat will eventuate is its likelihood. Where the likelihood is very low, the leader may judge it is not worth the cost of controlling.
Likewise, some consequences of threat events are so minor that they can be ignored, while others are catastrophic to the objective. This judgement is the impact rating of the consequence.
The likelihood of a threat event, combined with its level of impact on the objective's achievement, constitutes the inherent risk to the achievement of the objective.
Although not yet part of the standard, over recent years an additional rating parameter has been argued for consideration: "Velocity". The velocity of a risk is the speed with which a causal event translates into an outcome. Velocity is rated inversely against time, so the shorter the time it takes for a causal event to result in a specific impact, the higher the velocity.
Conversely, if we are going to consider a time based measure for the onset of a risk event, we should allow for a velocity measure on the mitigation side of the equation. Here we have two types to consider: pre-event controls (such as training, and documented manuals) have a velocity measure that acts during a different phase from that during which the impact velocity is measured. The control velocity of specific interest to mitigating impact velocity is that of the reactive controls - Event (or Error) Detection and Event (or Error) Correction controls.
<blockquote>
'''NOTE:''' Controls fall into one of three groups - Prevention, Detection and Correction. The first group identifies proactive controls (although some control steps in a given strategy of controls may be reactive even here), while the latter two describe purely reactive controls. Note that under this view the process of setting up a reactive control system and training the participants and systems in the operation of that control is itself a proactive step and hence a Preventive control, while the operation of the actual control itself is, to the triggering causal event, reactive.
</blockquote>
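As a toy formalisation of the velocity rating (an assumption for illustration - the discussion above defines velocity only as inversely related to onset time, not as a specific formula):

```python
# A toy velocity rating: inverse of the days from causal event to impact.
# The reciprocal mapping is an illustrative assumption, not a standard.

def velocity(onset_days):
    """Rate onset speed on (0, 1]: an impact landing within a day rates 1.0;
    longer onsets rate proportionally lower."""
    return 1.0 / max(onset_days, 1.0)

# An impact that materialises the next day versus one that takes a quarter:
fast = velocity(1)    # 1.0
slow = velocity(90)   # roughly 0.011
```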
A similar case may, on the face of it, be advanced for direct estimation of Risk Frequency. Specifically, such a measure is one of the frequency of a causal event - with an assessed likelihood of triggering at each cycle. The amount of time required for a single cycle from Causal Event A<sub>0</sub> to the next potential occurrence of Causal Event A at time 1 (i.e. A<sub>1</sub>) is the velocity of the likelihood of a causal event being once again tested. On this basis we could again track the velocity of the likelihood.
A reasonably strong case might also be advanced that likelihood measures carry an implied frequency measurement, as people tend to rate things as more certain to occur if they are almost always occurring than when rarely experienced, even if the causal event actually occurs only on those rare occasions. In this case it is argued that rating likelihood velocity in fact double weights the likelihood rating.
This author leans to the former view. If we are separating some velocities from their coupled ratings, we should consistently apply the logic of separation to them all. On that basis the probability or reliability estimates are consistently cleansed of time subjectivity, and thence become an instantaneous rating rather than a multi period rating of the probability, impact or dampening (control mitigation rating). In database design terms the rating measures are normalised with respect to time. The obvious benefit is that the greater the consistency among the properties (functional and data) if not the content of those properties, the greater the reliability that the items can be combined to give a result that varies consistently with its inputs (in this case a Risk rating). If some of the inputs are themselves functions of other inputs (such as time) the result of combining the various components of the risk formula together will not appear to move consistently with the inputs.
A further benefit of separating velocity information is the colour it might bring to the risk analysis. One can picture a risk model where the assessment of an otherwise well rated risk on the basis of likelihood velocity (think frequency), impact velocity (think "How quickly will this hit us?"), preventive control velocity (think "How long will it take for the training to be completed?"), detection control velocity (think "How quickly will we know that the wheels have fallen off?") and correction control velocity (think "How quickly will we have cleaned up the mess?") might reveal some fascinating structural problems in a control system. Consider a 12 month wait for detection controls to be in place for a high to medium impact of an event happening every week, where those detection controls then tell us only at the end of a quarter that a problem occurred that will take 6 months to fix; we might like to know this - even though individually all these controls got the highest ratings in terms of effectiveness. Of course, if our risk formula dealt with these items properly as part of its model we would not have a well rated risk with such problems!
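Working that timing scenario through as arithmetic (the figures are the text's own hypotheticals) shows how velocities expose the structural gap that effectiveness ratings alone hide:

```python
# The timing scenario above, worked through: weekly events against a
# slow-to-arrive, slow-to-report, slow-to-remediate control system.
event_interval_days = 7        # the event happens every week
detection_rollout_days = 365   # 12 months before detection is in place
detection_lag_days = 90        # detection reports only at quarter end
correction_days = 180          # 6 months to fix once detected

# Events that occur before detection is even operating:
undetected_events = detection_rollout_days // event_interval_days
# Worst-case time from an event to its cleanup, once detection is running:
exposure_window_days = detection_lag_days + correction_days
```

Some 52 events pass unseen during rollout, and even afterwards an event can sit unresolved for 270 days - despite every control rating "highly effective".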
Expressed as a formula, where f() means a function of the items in parentheses, the risk equation with all these potential inputs is then:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(C<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;C
:Means Mitigating Strategies and Controls effectiveness rating mitigating causal events and consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each corrective control mitigating each impact and possibly some to all causal events
This formula says nothing more than that the risk rating is a function of eight variables - whole-of-risk likelihood, likelihood velocity, impact and impact velocity, mitigated by whole-of-risk control effectiveness-reliability working over three velocities: prevention control velocity, detection control velocity and correction control velocity. In turn, the value supplied for each of these ratings is itself a function mapping the assessed value of the rating to a normalised value (such as the range of reals from -1 to 1, or a shared 5 point scale, etc.).
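A sketch of how such an eight-input function might be implemented follows (the normalising map and the combining rule are illustrative assumptions, since the text deliberately leaves f() unspecified):

```python
# Sketch of R_i = f(f(L), f(LV), f(I), f(IV), f(C), f(CPV), f(CDV), f(CCV)).
# Each input on a shared 1..5 point scale is first normalised to [0, 1],
# then combined; the combining rule is an illustrative assumption.

def normalise(raw, scale_max=5):
    """Map a rating on a shared 1..scale_max point scale into [0, 1]."""
    return (raw - 1) / (scale_max - 1)

def risk_rating(L, LV, I, IV, C, CPV, CDV, CCV):
    """Likelihood and impact (with their velocities) push the rating up;
    control effectiveness, weighted by the three control velocities,
    dampens it."""
    exposure = normalise(L) * normalise(I)
    urgency = (normalise(LV) + normalise(IV)) / 2
    responsiveness = (normalise(CPV) + normalise(CDV) + normalise(CCV)) / 3
    dampening = normalise(C) * responsiveness
    return exposure * (1 + urgency) * (1 - dampening)
```

Note how the single control rating C is scaled by an average of the three control velocities: a consequence of the consolidation the text goes on to criticise, since the pairing between each control group's rating and its own velocity is lost.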
The weakness in this formula lies in the consolidation of the three control groups into a single control rating for the purposes of the risk function itself (thus hiding the relationship between the control group velocities and the control group ratings). Separating the control groups gives:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each corrective control mitigating each impact and possibly some to all causal events
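The point of the expanded form can be sketched by pairing each control group's effectiveness with its own velocity (the combining rule and [0, 1] input ranges are illustrative assumptions, as before):

```python
# Sketch of the expanded formula: each control group's effectiveness is
# scaled by its own velocity, making explicit the relationship that the
# consolidated single-control-rating form hides. Inputs assumed in [0, 1].

def grouped_risk_rating(L, LV, I, IV,
                        CP, CD, CC,
                        CPV, CDV, CCV):
    exposure = L * I * (1 + (LV + IV) / 2)
    # A highly effective but slow control contributes little dampening:
    dampening = max(CP * CPV, CD * CDV, CC * CCV)
    return exposure * (1 - dampening)

# A very effective preventive control (0.9) with a very low velocity (0.1)
# is outweighed by a merely good detective control (0.8) acting immediately:
rating = grouped_risk_rating(1, 0, 1, 0, 0.9, 0.8, 0.7, 0.1, 1.0, 0.5)
```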
<br>
===From Risk Response To Risk Management===
Faced with a similar objective at another time, the prudent leader moves from re-action to pre-action. He applies his own and others' past experience, "common sense", and deductive reasoning when identifying the nature and causes of potential threat events and their consequences. He makes judgements as to the likelihood of these identified threats, and judgements as to the degree of impact arising from the consequences. This process is risk identification and assessment.
Comparing this assessment to the organisation's risk appetite, he determines a range of risk responses, treatments or controls. With the shift to pre-action (more commonly described as proactive management), the leader's options are widened when compared to the earlier reactive state. By preplanning the risk profile he is able to consider avoidance (just don't do it!), risk sharing (insurance) and threat prevention (training) as options in the risk mitigation armoury. Further, the costs of each mitigation strategy can be considered against the benefits expected from the achievement of the objective, and the most effective, efficient and economic ones chosen.
In all cases a threat has a "tell-tale" which can be used to detect that the threat has eventuated, or that the likelihood of a threat has changed. The controls required in this case are detection controls. As with the other pre-action choices, detective controls are most advantageous before the event occurs - as once it has occurred they will generally tell you what you already know. This shifting assessment of risk based on the changes in likelihood over time is the current risk.
Implementing detection controls allows the leader to defer the implementation (if not the planning, design and establishment) of other reactive controls, thus delivering a degree of certainty over the costs of mitigation at each point in a project, under a variety of circumstances and levels of current risk.
Once the controls (or risk mitigation plan) are applied to the assessed inherent risk of the objective, the result is the residual risk - that portion of the inherent or current risk that remains after the controls have been applied.
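That relationship can be shown in a minimal sketch (the multiplicative dampening model and the [0, 1] rating ranges are assumptions for illustration, not a prescribed formula):

```python
# Residual risk as the portion of inherent risk remaining after controls.
# Multiplicative dampening is an illustrative assumption; ratings in [0, 1].

def residual_risk(likelihood, impact, control_effectiveness):
    inherent = likelihood * impact
    return inherent * (1 - control_effectiveness)

# A likely (0.8), high-impact (0.9) threat with strong controls (0.75):
# inherent 0.72 reduces to a residual of 0.18.
residual = residual_risk(0.8, 0.9, 0.75)
```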
Risk Management is about applying a structured thought process to identifying and managing such risks.
In one form or another, every leader undertakes risk management from the minute they establish a political ideology, manifesto, business vision, organisational mission, or business or political objective. Without a plan - however loosely defined - the objective is unlikely to be achieved. That plan is a map to managing the risks of non-achievement of the objective - starting with the most obvious risk: "inaction".
While Compliance Management is about a governance process for managing adherence to internally and externally known standards, policies, procedures, and controls, Risk Management is an approach to governance that aims to identify which plans, standards, policies, procedures, and controls are required, how important each part is to the purpose, and when additional actions will be required. Risk Management is a systematic process of making a realistic evaluation of the true level of risks to your purpose, and mitigating those risks that exceed your risk appetite in the most efficient, effective and economic manner possible.
==What Is Enterprise Risk Management?==
Enterprise Risk Management takes the concepts outlined at the project or single-objective level described above and applies them across the enterprise, government, or society (as appropriate). Enterprise risk management distinguishes itself from project risk management by its aims:
* Firstly, it aims to reduce duplication of risk management planning and risk mitigation strategies by facilitating cross-organisational sharing of control frameworks, management expertise, and resources.
* Secondly, it aims to minimise contradictory, counter-productive and mutually exclusive risk management strategies by facilitating enterprise-wide knowledge of the risk profile of the organisation.
* Thirdly, it aims to inform the governance team of their true organisation-wide position on a continuous and instantaneous basis.
* Fourthly, it aims to forecast the risk profile of the organisation within, at least, the decision cycle of the governance team.
==What is Competitive Risk Management?==
So far, we have considered risk management as a stability governance tool for assisting the achievement of identified objectives. Under this view it is, in essence, a defensive strategy. The scope of governance arguably extends beyond maintenance of environmental stability and achievement of defined near-term deadlines and objectives, to the identification of the correct objectives (those that succeed on some measure), and longer term aspirational objectives such as "more profit" or, in social measures, "higher average literacy".
This shift implies that two additional dimensions should be considered:
#A risk may also be an opportunity, and an impact may be both positive and negative. Where the impact is positive for the organisation the correct corrective control response is in fact to augment the effect (such as by adjusting the causal states of other risks (opportunities)). The overall implication is that to accommodate opportunity the risk rating scheme needs to be balanced around 0 (meaning minimum risk and minimum opportunity). Whether this is best done with a positive scale and a negative scale, or with a linear scale with a floating normal line, is, I think, an implementation question at this stage.
#A risk/opportunity may have a group of controls (strategies) intended alternately to mitigate (Prevent, Detect, Correct) and augment (Focus, Sense, Enable) a risk in some way. Note that we are expanding our control groups from three to six. This is necessary where two impact rating scales are used (an opportunity scale and an impact scale). If only a single monotonic impact scale were used, e.g. "really-good to negligible to really-bad", we could possibly escape with four groups: Focus, Prevent, Detect, Correct. Focus is the opportunity's version of Prevent. The difference is that in the case of a risk, an effective preventive control reduces the residual likelihood (if not the inherent likelihood) of a causal event, while with an opportunity we want precisely the opposite outcome. Thus we need to track these separately. In the two-scale system we need the "opportunity" equivalents of the detection and correction control functions separated as well.
In competitive risk management we utilise the techniques of "defensive" risk management as a method to inform competitive strategy. The same methods that are applied to determine and manage or avoid your risks can be applied to:
#determine, induce and exploit your opportunities, and select the opportunities most likely to be successfully exploited; and
#determine and trigger your competitor's risks, and identify where they are either most exposed, or where their responsive mitigation costs will be greatest. In this use there is an implied additional measure/counter-measure relationship between controls, where an augmentation strategy is defined that is designed to detect or counter another mitigation strategy.
In competitive risk management we therefore look to identify and exploit our opportunities and the weaknesses of others through application of risk management techniques. Such an application of the method is likely to be most effective where knowledge of the competitor or competing industry approaches perfection, and the accuracy of the model used approaches perfect accuracy. There are interesting implications for game theory where all participants in a market use equivalently competitive risk management methods and have equivalently perfect knowledge.
Competitive risk management is therefore a strategy-setting process. In both cases the analysis expands the control analysis part of our formula described in the previous section. Specifically, the changes required are to accommodate additional ratings and velocities, allowing risk and opportunity to be treated in a single function (e.g. one possibly describing a parabolic or logarithmic curve as its output).
Our revised formula for competitive risk then becomes:
RO<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CF<sub>i</sub>), f(CS<sub>i</sub>), f(CE<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CFV<sub>i</sub>), f(CSV<sub>i</sub>), f(CEV<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;RO
:is expressed on a single scale such as "really-good to negligible to really-bad", or as complex numbers with two scales: a rating (high to negligible) and a binary (two-position) scale - "Opportunity or Risk"
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;CF
:Means Enabling Strategies and Controls effectiveness rating at focussing causal events.
;CS
:Means Enabling Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CE
:Means Enabling Strategies and Controls effectiveness rating for increasing the likelihood of further causal events and enabling consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CFV
:Means Focus Control Velocity Rating for each causal event
;CSV
:Means Sensing Control Velocity Rating for each causal event and possibly some to all impacts
;CEV
:Means Enabling Control Velocity Rating for each enabling control enabling impacts and possibly some to all causal events
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating control for all impacts and possibly mitigating some to all causal events
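As a rough illustration of the shape of this formula (not a definitive implementation), the sketch below assumes identity rating functions, ignores the velocity terms, and treats the six control groups as simple additive adjustments on a signed scale balanced around 0 (negative = risk, positive = opportunity). Every one of those choices is an implementation question the text above leaves open.

```python
# Illustrative sketch only: the article leaves each f(...) and the overall
# combination unspecified. Here every rating function is assumed to be the
# identity, velocity terms are omitted, and the six control groups act as
# simple additive adjustments on a signed -1..+1 scale balanced around 0.

def rating(x: float) -> float:
    """Placeholder for f(...): assumed identity over a -1..+1 scale."""
    return x

def ro(L, I, CF, CS, CE, CP, CD, CC):
    """Single-scale risk/opportunity rating RO for one item i."""
    augment = rating(CF) + rating(CS) + rating(CE)    # Focus, Sense, Enable
    mitigate = rating(CP) + rating(CD) + rating(CC)   # Prevent, Detect, Correct
    base = rating(L) * rating(I)                      # sign carried by impact I
    return base * (1.0 + augment - mitigate)

# A downside risk (I < 0) with effective mitigation moves toward zero:
unmitigated = ro(L=0.8, I=-0.9, CF=0, CS=0, CE=0, CP=0, CD=0, CC=0)
mitigated = ro(L=0.8, I=-0.9, CF=0, CS=0, CE=0, CP=0.3, CD=0.2, CC=0.3)
print(unmitigated, mitigated)
```

The single signed output reflects the "balanced around 0" rating scheme discussed earlier: mitigating controls pull a negative RO toward zero, while enabling controls push a positive RO further from it.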
<br>
==The Evolution of the Risk Management Standard==
In Australia, a team of experienced risk management practitioners was assembled over two decades to codify a standard for risk management as it had been (and was being) developed and deployed in Australia and New Zealand. That codification was initially released by Standards Australia as AS/NZS 4360:1995, revised as AS/NZS 4360:1999 and revised again in its current version as AS/NZS 4360:2004. You can access the standard via the [http://infostore.saiglobal.com/store/Details.aspx?DocN=AS0733759041AT SAI Risk Management Portal]. While still very much in its infancy as a governance tool, and immature as a management science, risk management has rapidly been adopted across the world and is now codified into an international standard, ISO 31000:2009 (October 2009), supported by ISO Guide 73:2009 - largely based on the AS/NZS standard.
==The Classical Approach==
In classical risk management - with respect to a given focus: a business, a business objective, an asset, etc. - we are told to identify the risks first, so that they can be properly managed. In its classical form, risk management asks, and attempts to answer, three questions:
*What can go wrong?
*What can I do to prevent it?
*What do I do if it happens?
You are advised to develop a risk register to document each potential problem, its level of seriousness, what is required to fix it, and who will fix it, and then to monitor progress.
There are essentially four things you can do with risk. We will call them, the four T's:
* Tolerate it (by accepting or ignoring a risk - this is where the profit lies)
* Treat it (by actively re-mediating or controlling it)
* Transfer it (by insuring it, perhaps better described as "sharing it")
* Terminate it (by exiting the business that incurs it)
It is critical that leaders understand that risk management is NOT about avoiding risk, but about managing it.
==The Evolution of a Risk Management Thought==
The concepts of risk and reward management are not new to mankind. The walls of cities and castles were early forms of risk management, and Hadrian's Wall, Agricola's Wall, the Antonine Wall, and the Great Wall of China are dramatic statements of risk containment on a social scale.
History is littered with authors and thinkers exploring the relationship between risk awareness, risk exploitation, active management and outcomes. Military and political strategists have employed the concepts underpinning modern risk management for centuries. The writings of military and political strategists such as Sun Tzu ("The Art of War"), Carl von Clausewitz ("On War"), Niccolò Machiavelli ("The Prince", "The Art of War"), and Miyamoto Musashi ("The Five Rings") are all examples of the practical application of risk awareness in strategy formation. To varying extents, these works all encourage an awareness of one's own and one's opponent's weaknesses, and the mitigation and exploitation of the same.
Perhaps what is new is the codification of the process of identifying, measuring, assessing, and responding to risk laid down in the more recent writings. It would be naive, however, to consider that risk management, per se, is new. The difference between a successful manager and an unsuccessful manager has always been their ability to see the potential reward in an opportunity and strike the correct balance between ignoring, avoiding, transferring and mitigating risks. Too much risk avoidance means opportunities are not exploited, too much control or insurance means that there is no profit left from the risky activity, and too much ignorance means that eventually the strategy's angel will become history's fool.
In the absence of a formalised approach to risk management, the successful business leader is known as lucky. In truth, the success is probably more due to that leader's accident of DNA and life experience leading to instinctively correct risk judgements. It is possibly this instinct, more than anything else, that justifies the executive salary differentials.
There is an important observation to be made from the historic context of risk management theory. Currently, risk management professionals tend to view the discipline as an extension of strategy achievement, yet historically, risk management has been as much about strategy identification and formation as about implementation.
Good risk management looks both inward and outward. By this I mean that risk management can be applied both to minimising your chance of failure and maximising your competitor's chance of failure. The essence of the military strategist's thinking is to identify the weaknesses of the opponent and exploit them to your own advantage. Application of the principles of risk management can enable you not only to identify the opponent's weaknesses, but to identify the probable strategies they will employ to manage the risks arising from those weaknesses, and hence better inform your planners about potential strategies to employ.
Over the last 50 years a number of frameworks addressing risk management with respect to governance have emerged out of the experience of the different professional groups involved in strategic management, asset protection, public accountability, finance and risk. These groups include:
* Internal Audit - focused on control system reliability
* External Audit - focused on true and fair representation of financial position on a going concern basis
* Actuarial Science - focused on the pricing of risk for insurance
* Investment banking - focused on the pricing of risk for portfolio management, hedging, capital fees and adequacy
* Risk Management - focused on management of risk to strategic and tactical outcomes on an enterprise and societal basis
Setting aside the military and political authors, among the business community some of the earliest work in risk management arose from the financial advisory community looking for models to minimise the downside risks of financial product investment.
==A Mathematical Basis To Risk Measurement==
As early as 1952, Harry M Markowitz published his paper "Portfolio Selection" in the Journal of Finance, exploring the advantages of risk diversification through balanced portfolio selection. The essence of portfolio theory is that risk essentially expresses the potential for a negative return (financial loss), and that an investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly positively correlated (correlation coefficient r < 1).
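Markowitz's diversification effect can be illustrated with the standard two-asset portfolio variance formula. The figures below are invented purely for illustration; only the formula itself comes from portfolio theory.

```python
# Invented figures for illustration: two assets, each with 20% volatility,
# held 50/50. The only standard piece here is the two-asset portfolio
# variance formula from portfolio theory.
import math

def portfolio_std(w1, s1, w2, s2, r):
    """Standard deviation of a two-asset portfolio: weights w1 + w2 = 1,
    asset standard deviations s1 and s2, correlation coefficient r."""
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * r
    return math.sqrt(variance)

# Perfect positive correlation (r = 1): no diversification benefit.
print(portfolio_std(0.5, 0.2, 0.5, 0.2, 1.0))
# Imperfect correlation (r = 0.3): portfolio risk falls below either asset's.
print(portfolio_std(0.5, 0.2, 0.5, 0.2, 0.3))
```

With r = 1 the portfolio is exactly as risky as its components; for any r < 1 the combined standard deviation drops below 20%, which is the diversification benefit the paragraph describes.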
To a greater or lesser extent the professional bodies, standards organisations and government agencies have responded with guidelines and standards for the measurement, application, response and management of risk as it applies to their specific problem domains. In 1978 the Institute of Internal Auditors - the international professional body of the internal audit profession - issued its Standards for the Professional Practice of Internal Auditing (SPPIA). In one of the earliest standards-based references to risk-based management, the standards included standard 320: "Compliance with Policies, Plans, Procedures, Laws and Regulations". The statement determined that "Internal auditors should review the systems established to ensure compliance with policies, plans, procedures, laws and regulations which could have a significant impact on operations and reports, and should determine whether the organisation is in compliance". The SPPIA standards mandated the
==Alternative Standards and Views of Risk Management==
Among the definitive pronouncements on risk management are:
* The King Report on Corporate Governance for South Africa (SA King II - 2002)
* A Risk Management Standard (RMS 2004) by the Federation of European Risk Management Association (UK FERMA)
* Australian/New Zealand Standard 4360—Risk Management (A/NZ 1995, 1999, 2004)
* COSO’s Enterprise Risk Management— Integrated Framework
* The Institute of Management Accountants' (IMA) "A Global Perspective on Assessing Internal Control over Financial Reporting" (ICoFR)
* Basel II
* Standard & Poor’s and ERM
* ISO 31000:2009
Building on the work of many years, the middle of the first decade of the millennium saw a succession of enterprise risk management (ERM) related pronouncements. AS/NZS 4360:2004 defined the risk management process as the "'''systematic application of management policies, procedures and practices to the tasks of communicating, establishing the context, identifying, analysing, evaluating, treating, monitoring and reviewing'''". For the financial sector, the earlier BASEL I standard was superseded by BASEL II, which closely mirrored the view of AS/NZS 4360.
Expanding on an earlier Internal Control Framework from the early 1990s, the Committee of Sponsoring Organisations of the Treadway Commission (COSO) released the 'Enterprise Risk Management (ERM) – Integrated Framework', which attempted to map the COSO framework that formed the motivational basis for the US Sarbanes-Oxley compliance legislation into a broader enterprise risk management framework. The COSO/ERM framework defined enterprise risk management as:
* A process, ongoing and flowing through an entity,
* Effected by people at every level of an organisation,
* Applied in strategy setting,
* Applied across the enterprise, at every level and unit, and includes taking an entity-level portfolio view of risk,
* Designed to identify potential events that, if they occur, will affect the entity and to manage risk within its risk appetite,
* Able to provide reasonable assurance to an entity’s management and board of directors,
* Geared to achievement of objectives in one or more separate but overlapping categories.
The standards enjoy a shared purpose to improve the predictability of business outcomes, but differ significantly in how that certainty is to be improved. While 4360 describes the process for management of risk, BASEL II mandates that a firm's operational risk management (ORM) system must be "conceptually sound and implemented with integrity", but stops short of defining the form or process of the ORM. BASEL II does specify that the ORM should be maintained by an independent operational risk management function, and that it is to consist of at least "strategies, methodologies and risk reporting systems". It identifies that the purpose of the ORM is to "identify, measure, monitor and control/mitigate operational risk".
Under BASEL II, the ORM systems should be:
* “credible and appropriate”,
* “well reasoned, well documented”,
* “transparent and accessible”, and
* capable of being validated by audit.
Among the failings of BASEL II is the lack of definition of these key terms, which, in a sense, is where AS/NZS 4360 and the COSO ERM Framework come in. The latter standards provide a framework under which a credible, reasoned, transparent, documented and verifiable risk management model can be established.
AS/NZS 4360 and COSO do not eliminate failure in the ORM/ERM, however, as in their implementation there is still considerable subjectivity in risk identification and assessment, and within the process documented by the standard there is no mechanism for proving or measuring "completeness". They do, however, populate the next level of the BASEL II obligation.
This problem of "completeness" in ERM frameworks should not be underestimated. It is present in all current risk management standards and is possibly a key reason for failure in ERM frameworks. We shall explore approaches to solving this problem in later papers.
Owing to their differing origins the three standards employ slightly different terminology for shared ideas:
* AS/NZS 4360 refers to ‘Risk Treatment’, COSO to ‘Risk Response’ and Basel II uses ‘Risk Mitigation’.
While the seven ‘elements’ of AS/NZS 4360:2004 framework do not align precisely with the eight ‘components’ of the COSO process, the ‘end to end’ risk management process is the same.
<table cellpadding="10" >
<tr>
<th>
AS/NZS 4360: 2004
Framework
</th>
<th>
COSO ERM–Integrated
Framework
</th>
<th>
BASEL II ORM
Framework
</th>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Internal environment
</td>
<td>
</td>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Objective setting
</td>
<td>
</td>
</tr>
<tr>
<td>
Identify risks
</td>
<td>
Event identification
</td>
<td>
Identify
</td>
</tr>
<tr>
<td>
Analyse risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Evaluate risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Treat risks
</td>
<td>
Risk response and control activities
</td>
<td>
Control/mitigate
</td>
</tr>
<tr>
<td>
Monitor and review
</td>
<td>
Monitoring
</td>
<td>
Monitor
</td>
</tr>
<tr>
<td>
Consult and communicate
</td>
<td>
Information and communication
</td>
<td>
</td>
</tr>
</table>
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
0c92f2577353da0d73bf684aee6689d18b9f93ee
Report Writing
0
294
344
343
2018-10-29T11:57:34Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==About The Author==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2007 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==About This Document==
This paper complements the Internal Audit and Management Consulting guides and discussions throughout the RiskWiki. It presents a brief guide to issues of style and presentation in writing up findings generally, and with very few exceptions applies universally to consultant and management reports (as well as to Internal Audit Reports).
Texts used as the basis for some of the views presented in this document and worthy of further exploration include:
* The Penguin Working Words (Penguin 1993)
* Fowler's Modern English Usage 2nd Edition (Oxford University Press 1965)
* Oxford Dictionary (Oxford University Press)
* Style Manual 4th Edition (Australian Government Press Service 1988)
* Practical English Usage - Michael Swan (Oxford University Press 1980)
* The Cambridge Encyclopedia of Language - David Crystal (Cambridge University Press 1987)
* Deloitte Internal Audit Method, Volume 6 - Report Writing - J Bishop & J Crawford (DTT 1992-3)
* Stanton Consulting Partners Style Manual (J Bishop 1995)
* NAB IA Reporting Style Guide ( J Bishop -1999- & an Unknown NAB Staff Member)
* Bishop Phillips Consulting Style Manual (J Bishop 2000)
==Writing Style==
===Introduction===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="left" style="background-color:#FFFF99" >
====Bishop's Writing Rules:====
# Rule: The Passive puts people to sleep.
# Rule: Ending a sentence with a preposition is a situation up with which I will not put.
# Rule: Objects like subjects
# Rule: One point to a paragraph
# Rule: Get to the bottom line first
# Rule: Just do it - say what you mean.
# Rule: Readers don’t read
# Rule: Three sentences are company, four is a crowd
# Rule: Conjunctions can't commence (a sentence)
# Rule: Conjunction collections confuse
# Rule: Personalise people not things
# Rule: Negativity negates.
# Rule: DON'T SHOUT
# Rule: Don't plan to make a plan.
# Rule: Consistency is king
# Rule: Death is in the details.
# Rule: Pronouns need a noun
# Rule: Don't split the infinitive
# Rule: Unintroduced acronyms are antisocial
# Rule: Generalities are generally imprecise
# Rule: Let the facts carry the case.
</div>
</td>
</tr>
</table>
In written expression, a few simple rules can make the difference between clarity and confusion. Applying the rules in this section will help us both record our ideas efficiently and convey our meaning clearly.
The rules are a mix of style and traditional grammar identified over many years of reviewing and writing audit reports. We will need a rudimentary understanding of grammar to apply a number of these rules effectively.
Syntax assists semantics. Grammar defines the syntax of the language. Good syntax describes the structures a sentence can follow and still be considered well formed.
Semantics is the meaning of a sentence. Syntax assists semantics by managing the flow of ideas, and distinguishing ambiguities.
Consider for a moment the classic poets' joke:
"What is this thing called love?" - The plaintive cry of a tortured heart.
"What is this thing called, love?" - The question of a curious friend on sighting a never-before-seen object.
One stray comma makes all the difference to the meaning of the question. In speech we use tone, rhythm, intonation and body language to convey meaning. In written expression we rely on syntax - the rules of grammar.
We cannot solve all problems of ambiguity in language with punctuation, but with a better understanding of grammar we can avoid the ambiguity in the first place. Take, for example, the sentence: "Flying saucers can be thrilling". This sentence seemingly can have a number of meanings:
# The act of flying a saucer can thrill the pilot.
# Seeing a saucer in flight can thrill the observer.
# The idea of a saucer that flies thrills.
We will see, however, that even in this situation, the judicious application of some simple rules when forming the sentence can result in clarity:
"Flying a saucer can thrill the pilot."
What has changed? We have moved from the general ("flying saucers") to the specific ("flying a saucer") (rule 20). We have also introduced a subject (the pilot) to the sentence where only the object and verb existed (rule 3), and applied plurals consistently (rule 15). Lastly, applying rule 1 eliminates the problem entirely:
"A pilot can be thrilled when flying a saucer."
To understand how to do this, we need a little grammar.
Since we cannot avoid grammar if we wish to understand how best to convey our meaning, our discussion will be facilitated by first establishing the definition of a few grammatical terms. This we do in the next sub-section. Armed with a few parts of speech, we will then explore the rules over the subsections thereafter.
==A Grammar Crash Course==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteCavemen.png]]
</div>
</td>
</tr>
</table>
===Subject, Verb and Object===
"When nine hundred years you reach, look as good, you will not. Strong with the Force you are…."
Remember Yoda? Among the little, wrinkly, green "Star Wars" character's more distinctive features was "Yoda Speak". To a linguist, Yoda represents an imaginary member of a very rare and select group: races with languages that use an "Object - Subject - Verb" structure.
The understanding of the difference between each of these components is the first step in mastering sentence structure.
The order of subject (S) - verb (V) - object (O) (SVO) is the classic "natural" English sentence:
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td> to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td></td><td align="center" > Object </td></tr>
</table>
Things work quite well if we think of a sentence as revolving around a verb. The subject of a verb is the noun (or noun substitute) that directs the action of the verb. The object of a verb is a noun (or noun substitute) that receives the action, is affected by the action, or about which the action is concerned. In the majority of instances a noun substitute is a pronoun.
In the example, "management" directs the action and is therefore the subject, while "credit policies" are the things being "adhered to" and are therefore the object. As a rough rule of thumb, if a noun phrase starts with a preposition it is a fair bet that the noun concerned is the object. In the example sentence, "to" is the preposition.
===Prepositions===
A preposition relates a word or phrase to another part of the sentence.
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td align="center" > to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Words that are prepositions include: to, in, into, on, upon, over, before, after, of, with.
In the example, the word "to" joins (or more accurately relates) the noun phrase "credit policies" to the rest of the sentence - "Management is adhering".
A note of caution - a word that is a preposition in one case can be a conjunction in another:
* The auditor arrived before [preposition] the meeting.
* The auditor arrived before [conjunction] the meeting began.
===Conjunctions===
Conjunctions are words that join two sentences or nouns, not in a relating role as with a preposition, but either as equals or in a superior-subordinate relationship. Examples of the former include: and, but, or, nor, whereas, however. Examples of the latter include: because, when, where, if, although.
==Active and Passive Voices==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: The Passive puts people to sleep.'''
</td></tr>
</table>
Recall the earlier discussion about subjects and objects of a sentence. We observed that the "natural" order in English is Subject - Verb - Object (SVO). This is the active voice:
<table border=1 align="center" >
<tr><td >"This firm</td><td> will no longer pay </td><td align="center" > for </td><td> Overtime."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Now we will switch the subject and the object and contrast this with the same sentence expressed in the passive voice:
<table border=1 align="center" >
<tr><td >"Overtime payments</td><td> will no longer be made </td><td align="center" > by </td><td> this firm."</td></tr>
<tr><td align="center" >Object </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Subject </td></tr>
</table>
The passive voice essentially reverses the natural order from SVO to OVS.
There is nothing grammatically wrong with either construct, but even a few lines expressed in the passive voice will bore our readers to tears. This effect arises because the passive voice places the reader at a distance from the action by making the object of the sentence the primary focus rather than the subject. Consequently, things appear to come before people.
Consider the following passage (passive voice).
"Significantly more overtime than the firm average has been incurred by roboteller maintenance staff of the Antarctic Division. A number of anomalies in the time sheets including bank branches that have been closed for many years having work recorded for them by individual staff have been revealed by a detailed analysis of the time sheets. Overtime payments will no longer be made by the Antarctic Division as a consequence."
Versus the following version (active voice)
"Roboteller maintenance staff in the Antarctic Division have incurred significantly more overtime than the firm average. An analysis of the time sheets for individual staff shows a number of anomalies, including work conducted for bank branches that have been closed for a number of years. Consequently, the Antarctic Division will no longer pay for overtime."
Which one did you have to read twice? The passive voice is difficult for the reader even one paragraph at a time. Try reading it for an entire report and you will be angry, frustrated and tense (assuming you are still awake by the end of it).
The active voice involves the reader; it flows better than the passive; it encourages the writer to go straight to the point rather than inserting "filler words" whose sole purpose is to make the sentence hang together; and it reduces the chance of repetition (as apparent in the passage above). The passive voice, however, is not only difficult to read, but far more difficult (and therefore slower) to write.
In the passive voice we express the idea of the sentence before we provide the context (subject). The direct result is that our thought pattern is reversed and our ideas do not seem to flow properly. We end up adding extra words, leaving sentences hanging in mid-air (such as when we finish with a preposition) and, most importantly, failing to convince our audience of our point because they have to try too hard to understand it.
A sentence is a "word painting" of an idea. Well formed, it is a thing of beauty and, like a great painting, a joy to behold.
==Positioning of Prepositions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Ending a sentence with a preposition is a situation up with which I will not put.'''<br>
* '''Rule: Objects like Subjects.'''
</td></tr>
</table>
One of the most common errors in everyday speech is to place the preposition at the end of a sentence. Prepositions, by definition, connect and introduce a noun phrase in a sentence. After the use of the active voice, I consider this perhaps the single most important trick to forming logical, easily understood sentences quickly.
Given that it has become almost standard usage to let prepositions drift to the end of a sentence, why is it such a gross error?
You will recall that we defined a preposition as a word that joins and relates a noun phrase to the rest of the sentence. It literally "leads" a phrase. Without the preposition connecting the two ideas in a sentence, the sentence appears stilted (or, as in the following example, actually seems to mean something completely different):
"Management is adhering credit policies."
Consider a few examples:
<table border=1 align="center" >
<tr><th >Bad Form</th ><th >Good Form</th ></tr>
<tr><td>Where have the auditors come from?</td><td>From where have the auditors come?</td></tr>
<tr><td>Peace is worth striving for.</td><td>It is worth striving for peace.</td></tr>
<tr><td>Firm credit policies must be complied with.</td><td>Management must comply with firm credit policies.</td></tr>
</table>
The first two on the left-hand side are merely untidy, but the third highlights the problem with prepositions shifting to the end of a sentence. The version on the left-hand side leaves the sentence "hanging" and, most importantly, leaves out the subject. The lack of a subject in the sentence means that it is unclear who should perform the action. (i.e. Objects like Subjects)
If we use the active voice, and lead the sentence with the subject, we will be far less likely to end up with the versions on the left hand side. Since a preposition generally connects the object to the subject, it is the habit of placing the object at the start of the sentence (i.e. the passive voice) that leads to sentences with the preposition at the end.
The second example on the right-hand side is still unsatisfactory, because it does not identify who is responsible for the action, and consequently is a generalisation - which is too easy to fault. For whom is it better to strive for peace? An arms manufacturer may see things a little differently! A better rewrite would have been: "We will benefit both materially and socially if we strive for peace."
It is easy to put prepositions in the right place if we remember to use the words "which" and "whom":
This is the day for which we have been waiting. (Not: This is the day we have been waiting for.)
These are the results of which we heard. (Not: These are the results we heard of.)
The rule (attributed to Winston Churchill) "Ending a sentence with a preposition is a situation up with which I will not put" (instead of "Ending a sentence with a preposition is a situation I will not put up with") illustrates how to arrange the words to achieve the desired outcome. It also tends to stick in one's mind and so is easily remembered.
==The Formula For A Paragraph==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: One idea to a paragraph'''
* '''Rule: Get to the bottom line first'''
* '''Rule: Three sentences are company, four is a crowd'''
* '''Rule: Just Do It - saying what we mean.'''
* '''Rule: Readers Don't Read'''
</td></tr>
</table>
The purpose of dividing a body of writing into paragraphs is to help the reader absorb the points being made, and the writer to formulate them. These five rules are each about how to put together a paragraph that works.
A couple of simple formulae describing the sequence of sentences in a paragraph can show us what to do:
# Main Point + Counter Point + Conclusion.
# Main Point + Expansion + [Expansion].
In each case we are saying a paragraph should consist of two to three sentences. Using more or fewer sentences in a paragraph is permissible, but to be discouraged unless it is absolutely essential for the purpose of the point. This is particularly true when we are planning to use more than three sentences. (i.e. Three sentences are company, four is a crowd)
A paragraph end forms a natural break in the flow of thought. By implication, we are asking the reader to absorb the entire paragraph as a single concept before they evaluate it in their minds. The longer the paragraph, the longer the reader must store the ideas before evaluation.
We risk losing the reader's attention and comprehension if we ask him or her to store the ideas temporarily for too long a time or to store too many ideas at once. Short, punchy paragraphs built around a single central idea help minimise waffle and assist the reader to absorb our message rapidly. (i.e. One idea to a paragraph)
<table border=0 align="left" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…short, punchy paragraphs built around a single idea…'''''</font >
</p >
</td></tr>
</table>
It is a courtesy to the reader to endeavour to minimise the work they need to do in reading our work. Opening the paragraph with the main point allows the reader to skip the rest of the sentences in the paragraph if they agree with the point. In each of the two formulae we open with the main point (i.e. we get to the bottom line first).
The difference between the forms is that in the first formula we offer a counter point in the second sentence, which is then offset by the conclusion. In this case the conclusion should be consistent with the main point (rather than the second or counter point).
In the second formula we are presenting the main point supported by one or two additional arguments. Should we need six or seven sentences to support the point, these should be presented as a dot-point list, or subdivided into two or three logical groups and split across two or three paragraphs.
<table border=0 align="right" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…the most convincing expression of an idea is usually the simplest…'''''</font >
</p >
</td></tr>
</table>
The essence of these ideas is that the most convincing expression of an idea is usually the simplest. Winning a point through confusion is, at best, a Pyrrhic victory. If the issue is important, the reader will dwell on it and form their own opinion. If they did not understand your arguments, you will have had no effective input into the formation of their position on the matter, other than raising it in the first place.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteSectionStructure.png]]
</div>
</td>
</tr>
</table>
The essence of newspaper journalism is that most readers will not read most of the articles in a paper or magazine completely. Consequently, from the headline down to the end of the article the item is arranged as a series of progressively more detailed "summaries" of the information. There are usually three to four layers.
The first layer is the headline, which attempts to summarise the entire issue in a few words. The second layer is the first paragraph, which presents a twenty to thirty word summary of the issue. The third layer is the second, third and perhaps fourth paragraphs, which provide the full story; the fourth layer provides incidental minor details.
The purpose of the structure is to allow readers to exit at several points once they have collected sufficient information for their interest level. The approach recognises that none of us has time to read every piece of information presented to us, and when we do read, we tend to skim for issues that are relevant to us. (i.e. readers don't read)
We should design our reports so that the reader does not have to read all the way to the end to "get" the issue. We can imagine this pattern as a pyramid, with the highest level summary at the top, and progressively more detail to the bottom.
==Using Conjunctions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Conjunctions can't commence (a sentence)'''
* '''Rule: Conjunction collections confuse'''
</td></tr>
</table>
<table border=0 align="right" width="400px" style="background-color:#FFFF99;margin-left:0.9em" cellpadding="2" cellspacing="10" >
<tr><td align="left">
===The Importance of Correct Punctuation===
'''''The following two passages were written by Rowland Croucher. They illustrate neatly the importance of punctuation in written expression. Only the punctuation changes between the passages….'''''
<em>Dear Thomas,
I want a man who knows what love is all about. You are generous, kind, and thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy--will you let me be yours?
Maria
----
Dear Thomas,
I want a man who knows what love is. All about you are generous, kind and thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men, I yearn; for you, I have no feelings whatsoever. When we're apart, I can be forever happy. Will you let me be?
Yours,
Maria</em>
</td></tr>
</table>
Conjunctions are important time savers and can help the flow of ideas if used correctly, but they should not be used more than once in a sentence unless splitting the sentence would detract from its meaning.
One example where two conjunctions may appear in a sentence is where the sentence contains both a list and two joined or related ideas:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan and verifying that the borrower's credit history is of sufficient standing."''
In this case the passage would be harder to follow (and perhaps even misleading) if we wrote it as:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan. The credit approval process should also verify that the borrower's credit history is of sufficient standing."''
By splitting the sentence we seem to imply that the credit history is of secondary importance to the information collected about the purpose of the loan.
These situations are generally pretty clear when they arise, but they are rare. A sentence with too many conjunctions suffers from the same problems as a paragraph with too many sentences; we have lost the reader before the end.
Some years ago Professor Manning Clark gave a Boyer lecture concerning the use of English in academic papers. One of his particular annoyances was the use of conjunctions to commence a sentence. His point was simple - a conjunction joins two sentences. If it starts a sentence, it is prima facie not joining two sentences together.
While we all recognise words like "and", "or" and "but" as conjunctions, words such as "however" and "because" are more often missed. Consider the following passage:
''"Because they operate unattended, Roboteller machines are prime targets for fraud. However, if we attach cameras to them they become leading tools in the capture of the perpetrators."''
This can be rewritten to eliminate the problem:
''"Roboteller machines are prime targets for fraud because they operate unattended. If we attach cameras to them, however, the machines become leading tools in the capture of the perpetrators."''
In rewriting the passage we also (once again) moved the subject to the start of the sentences. The "however" is redundant, and the passage can be further simplified by writing it thus:
''"Roboteller machines are prime targets for fraud because they operate unattended. The machines become leading tools in the capture of the defrauders if we attach cameras to them."''
This passage demonstrates the appropriate use of "however":
''"Overall corporate / strategic planning is adequately addressed within Premium and Private, however, management attention is required concerning:…"''
==A Few Points of Style==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Personalise people not things'''
* '''Rule: Don't plan to make a plan.'''
* '''Rule: Negativity negates.'''
* '''Rule: DON'T SHOUT'''
</td></tr>
</table>
The four rules of this subsection cover common, but minor, problems of style.
A common written mistake is to attribute a human trait such as "needing" or "requiring" to an inanimate "thing", so that the statement takes on the air of an inviolate law. The practice leads to broad statements without justification and hence incomplete argument of a case. Consider:
''"The credit approvals process needs to be reviewed."''
The credit approval process cannot need anything. Only living creatures can experience need. It may be appropriate for the process to be updated, and management or the auditors may need this to occur, but the process cannot spontaneously need such improvement of itself.
Once again we find, as with so many English language errors, that the problem has arisen because of a subject / object mix-up. In the example, the credit approval process, which should have been the object, has been transformed into the subject. When we rewrite it the way it should have been, we find that we are missing a significant part of the message that should have been conveyed (and is now inserted in the rewrite):
''"Management needs to review the credit approvals process focusing on the weaknesses identified in the finding."''
The new version identifies both who should perform the action and the guidelines they should follow. It also highlights another important rule (not really one of grammar but one of service quality): the recommendation as written is essentially a plan to make a plan.
Either management should make the changes identified, or they should not. If we merely request that they review the situation, we are delivering no committed improvement to the Board. We should not say "review" when we mean "implement":
''"Management should implement the identified corrections to rectify the weaknesses in the credit approvals process identified in this report."''
Finally, we briefly consider two ad-hoc matters. The first is to do with capitalisation, while the second concerns the use of negatives.
Capitalising Every Word In a Sentence or even a Random selection Of a few words does not help our presentation. Excessive capitalisation affronts the reader. In internet terminology this is akin to SHOUTING AT THE READER. Capitals belong at the beginning of a sentence or when naming a person, place or the title of a "thing". Capitalisation is rarely appropriate in the middle of a sentence.
Secondly, sentences should be expressed in the positive rather than the negative wherever possible. It is a standard sales technique to ask a prospect a question framed in the direction one wishes the answer to go:
"Would you prefer that my quote is open ended?"
As opposed to:
"Would you prefer that my quote is fixed?"
People tend to think immediately in sympathy with the speaker (at least until he or she threatens them with capitals!). If we express our sentences as negatives, not only do we lead the reader to disagree naturally (because they have been "trained" by our text to say no), but we also create a sea of double negatives, which may or may not imply a positive.
==Carrying the Case==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Death is in the details.'''
* '''Rule: Generalities are generally imprecise'''
* '''Rule: Let the facts carry the case.'''
</td></tr>
</table>
Much of what has been written in this article goes to the issue of precision. In consulting and audit papers, accuracy of detail can determine the credibility attached to the consultant's or auditor's findings as well as the advice offered. The best strategy is to let the facts, clearly articulated, carry the argument.
The facts should not be embellished with emotional and vague descriptive words such as "large", "most", "substantially". We should state the quanta instead - "70%", "five out of eight", etc.
Try to avoid non-specific or vague words and expressions. This is especially true of quantities and times.
'''Examples'''
<table align=center >
<tr >
<th>Non-specific or vague</th><th> </th><th>Could mean or become</th>
</tr>
<tr>
<td>increased volumes</td><td></td><td>300 or more</td>
</tr>
<tr>
<td>drop in profit</td><td></td><td>profit was 20% lower</td>
</tr>
<tr>
<td>frequently</td><td></td><td>daily/weekly/monthly</td>
</tr>
<tr>
<td>rarely</td><td></td><td>once a year/decade</td>
</tr>
<tr>
<td>recently</td><td></td><td>yesterday/last week/month</td>
</tr>
<tr>
<td>shortly</td><td></td><td>tomorrow/next week/month</td>
</tr>
</table>
In the absence of statistical support for a finding, generalisation emerges. The discussion of the matter with the client becomes sidetracked over the meaning of words like "large" or "significant", rather than focussing on the issue identified and the solution required by the adviser.
Linked to these ideas is the form of words used to convey your point. Never use a long word where a short word will do. Long words may be interpreted by the reader as a deliberate attempt to mask puerility with false grandeur, because the underlying point is decrepit or flawed. (See what I mean?)
Having said that, do not be frightened of using a long or technically correct word simply because it has more than one syllable. You can always provide a glossary of terms at the start of the document (and frequently that is a good idea, even for some commonly misused terms). If your reader needs to become a little more educated to understand your work, then fine.
Writing is not about stooping to the lowest common denominator, but it is about communicating your point accurately and effectively. That is: you must actually get your point across; not merely make your reader feel inadequate. There is no point in being right, if nobody realises.
The point, then, is to use the shortest possible ''correct'' word - not merely the shortest word.
As a rule of thumb, if your reader has to seek out the meaning of more than two or three words in your report, you have probably lost them... and they will probably resent you for it. Know your audience, prepare your audience for your language, and make sure they don't feel stupid by the end of it.
The customer for a consulting or audit report needs to be assured that adopting recommendations based upon the consultant's finding will add value to the business.
Auditors (particularly) need to go well beyond describing what is wrong. They need to explain the meaning of any finding: how it affects the organisation's bottom line; the potential cost of not addressing the problem; and the likelihood of exposure or error.
Likewise, consultants need to go well beyond simply parroting back the latest theory they discovered in the bottom of a glass of scotch or on the back of the cereal packet that morning. Consultants need to do a little more of the 'audit' thing and actually analyse what is really wrong before arguing convincingly for change.
Wherever possible in all such instances, be specific. "Numerous", "several" and "many" are words lacking specifics. If this flies in the face of other advice to be brief, so be it.
The auditor/consultant should attempt to quantify the financial impact of a finding. While it may not be possible to arrive at a figure with mathematical precision, an informed guess can help management make a decision.
To be specific ourselves, the following are some examples of content.
'''Poor'''
Differences exist in the cost of processing biscuit requisitions in various regions.
'''Better'''
The cost of processing biscuit requisitions differs from region to region. Vancouver can process a cheque for AUD 8 cents, while the equivalent in Australia is AUD 15 cents. Australia might save up to AUD 15 million by adopting Vancouver’s methods.
'''Poor'''
There is a lack of adequate management information to support activities and to facilitate meaningful comparisons between regional units.
'''Better'''
Management information is inadequate: staff costs are not analysed for benchmarking across various offices; calculation of product profitability does not include processing costs; and there is no allocation of fees and interest income by product type.
Finally, '''summaries''' are meant to be just that: a tight condensation of the main point or points of an issue. Be ruthless in getting rid of perhaps interesting but non-essential pieces of additional information – but retain the specifics.
==Tense, Pronouns and Infinitives==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Don't split the infinitive'''
* '''Rule: Consistency is king'''
* '''Rule: Pronouns need a noun'''
* '''Rule: Unintroduced acronyms are antisocial'''
</td></tr>
</table>
"To Boldly Go Where No Man Has Gone Before…" Perhaps one of the most recognised phrases in the English language, this bight of the Star Trek prime directive is also a prime example of atrocious English! This is a classic example of the split infinitive (not to mention the redundant preposition at the end of the sentence).
The directive should have read:
" Boldly To Go Where No Man Has Gone…" or less poetically, " To Go Boldly Where No Man Has Gone…"
Perhaps, it would be best as:
"Go boldly, where none have gone.."
The infinitive is the basic form of a verb, invariably commencing with "to". It generally has no subject and, according to luminaries on the subject, should not be split. The reason is more stylistic than grammatical. The problem with split infinitives is more obvious when a few words are inserted between the "to" and its verb:
"The Roboteller machines are expected to really try hard to accurately and silently recognise the customer's identity."
Can be improved by:
"The Roboteller machines are expected to try really hard to recognise the customer's identity accurately and silently."
There are two common ways to avoid the split infinitive. Both are presented in the rewrite above. The first is simply to move the offending adverb after the verb, although sometimes this leads to a stilted speech pattern. The second is to move the adverb(s) to the end of the sentence, as above.
Pronouns are words like he, she and it that substitute for a noun like Jim, Phred or bank branch. The noun to which a pronoun relates is established by the context in which the pronoun is placed. Consequently, if too many pronouns are used together, it becomes very difficult to determine for which noun an individual pronoun substitutes. As a general rule the target noun should immediately precede its related pronoun and be refreshed at least every two pronouns.
Similarly, an acronym (an abbreviation substituting for a noun or phrase) should be immediately preceded, the first time it is used, by the originating word or phrase. For example:
"The National Australia Bank (NAB) is a large and wonderful establishment. The NAB has an effective and happy audit team."
A completely unrelated matter (but grouped here for convenience) is that of consistency in the use of plurals and tense. It should be apparent to all authors that the use of the singular in a sentence should be maintained throughout the rest of the sentence. It may be less obvious that the same rule applies to verb tense.
If we express a verb in one tense, such as the present continuous as in "I am having a good day", the balance of the argument should normally be presented in the same tense. This is not a strict rule, because there will be situations in which a finding will relate a historic situation in the lead sentence, while the discussion relates an assessment that is in the present tense.
It is reasonable to say that within a sentence changes in tense will generally create confusion, unless separated by a conjunction. For example:
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures were not obtained at all times."
Not
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures are not obtained at all times."
The following, however, would be acceptable, because the first part states a continuing state while the latter part describes an historic observation relating to the first situation:
"In Antarctic Division wire transfer requests are accepted via e-mail and customer signatures were not obtained at all times."
Agreement of subject and verb: a singular subject demands a singular verb; a plural one demands a plural verb. Many such problems are caused by long sentences overloaded with adjectives and subordinate clauses, where the subject is separated from its verb. This is another reason for keeping sentences short.
Sometimes the rule is not immediately obvious, such as in the case of "none": "none were" should be "none was" (none = not one or no one).
Example: None of us is perfect.
==Confusing Words==
These words are often confused:
* Affect (to impact upon, to assume) / effect (to bring about a change in)
* Object (the purpose)/ objective (the point of an exercise - usually military)
* Idol (a religious artefact, or object of worship) / Idyll (an imaginary ideal, or pastoral setting) / Idle (lazy, not in motion)
* Whom (the objective form of the relative pronoun) / who (the subjective form of the relative pronoun)
===A note about affect & effect===
A frequent source of error is confusion in the use of the similar-sounding words affect and effect (and their forms affected and effected), and of continual and continuous.
A cause for confusion is that affect is always a verb while effect can be either a noun or a verb. Both continual and continuous are adjectives.
Affect is a verb in the sense of to influence. Effect as a verb means to bring about; as a noun it is equivalent to the word result.
The following represent correct usage.
Examples:
* Errors in computing affected the accuracy of the result.
* The effect of errors in computing was to produce an inaccurate result.
* Smoking cigarettes may affect your lungs.
* Giving up smoking had no effect on her general health.
* I didn’t finish the report because of continual telephone interruptions.
* Lights are left on in traffic tunnels to provide continuous illumination.
===A note about "due to"===
"Due to" is often used in the sense of through, because of or owing to. Mostly those alternatives are to be preferred. But it is correct to use due to in the sense of being attributable to.
Example: The plane crash was due to bad visibility.
Don’t rely on your computer’s spellchecker for advice on grammar or correct spelling. Some systems are misleading. For example, you may be advised to change personal to personnel (or the other way round).
===A note about who & whom===
"Captain Kirk is the man whom the federation pays to fly the Enterprise." (Whom is the object of pays - the pronoun effected by the action of payment)
And
"Captain Kirk is the man who we think flies the Enterprise." (Who is the subject of flies, not the object of think).
==Punctuation==
Punctuation matters.
* "What is this thing called love?" (As in: Let me count the ways...)
* "What! Is this thing called love?" (As in: Let me out of here...)
* "What is this thing called, love?" (As in: OMG! You are not comming near me with that!)
===Comma===
Used when essential for clarity or to indicate a small interruption in continuity of thought. Short sentence construction reduces the need for commas.
===Semicolon===
Using a semicolon indicates a pause greater than a comma but less than a colon or full stop. Often a semicolon helps to alert the reader to an alternative or compensating thought.
'''Example:''' ''The risk of lost muffins was high; however, quick action averted this crisis.''
Semicolons should be used at the end of each line in a series of bullet points as an alternative to commas (see later).
'''Example:'''
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months;
* Schedule extra training for these and permanent staff;
* Upgrade software in the Biscuit Dispensing Machine;
* Simplify the standard form used for requisitioning for biscuits from the kitchen from ten pages to five; and
* Remove the requirement for VP Supply, VP HR, and CEO counter signing of all biscuit requisitions.
</em>
===Colon===
The colon is used to introduce a quotation, summary, conclusion or list of bullet points (as in the example above); or to introduce a list within a sentence.
'''Example:'''
''The report contains the following sections: employment, training, promotion, legal compliance, relations with other departments.''
===Full stop===
(Period in U.S. usage)
As well as indicating the end of a sentence, full stops are used in some abbreviations. It has become common for full stops to be omitted from word abbreviations. We counsel against such a style: given the plethora of acronyms and technical jargon in today's language, using the full stop to signal that a word is an abbreviation of a possibly familiar word, rather than a technical term unknown to the reader, adds to clarity.
Where a bulleted list includes points that have more than one sentence, it is preferable to separate the points with full stops, not semi-colons as set out in the previous example.
Example:
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months. Qualifications include large appetites and general slothfulness. It is estimated that salaries will be approximately $13,000 per month each, plus biscuits.
* Schedule extra training for these and permanent staff. It is anticipated the training officer will need to allocate three hours weekly to the task.
* Upgrade software . . . (etc)
</em>
Note that where a full stop is used in a dot-point list, no conjunction is used to join the last two items.
Regardless of which dot-point separator is chosen, it MUST be used consistently throughout the list and, ideally, the document.
===Hyphen===
General usage previously demanded a hyphen where a prefix ended with the same letter with which the attached word began. So cooperate and coordinate were generally spelt co-operate and co-ordinate; hyphens in these instances are unnecessary. While reinforce and react are other examples where hyphens are not needed, sometimes a hyphen provides a warning that a word should not be read as a single syllable (e.g. re-use). Words formed by using the prefix non- should nearly always be hyphenated (e.g. non-compliant, non-aligned), as should some words prefixed by pre- (e.g. pre-existing).
===Apostrophe===
Used to indicate possession or the omission of letters in a contraction.
'''Examples'''
<em>
* Bill’s car was taken to the wreckers.
* Bill hasn’t had time to replace his car yet.
</em>
There is often confusion about its and it’s. The simple test is whether the construction of a sentence means it is (or it has etc). If so, it’s is a contraction and needs an apostrophe; if not, its is a pronoun and needs no apostrophe. (Warning: Don’t get fooled by some computer spellchecking systems which get this wrong.)
A rough rule of thumb: if we are using "it" in the possessive sense (as in "its red tyre"), leave out the "'".
'''Examples'''
<em>
* It’s been a long time between drinks.
* The engine was tuned but its vibration wasn’t greatly reduced.
</em>
===Ellipses===
An ellipsis indicates that words have been omitted from a quotation; it is represented by three full stops separated by spaces.
'''Example'''
''Now is the time . . . to come to the aid of the party.''
===Quote marks===
These should not be used for emphasis. Use bold type or italic instead. Use quotation marks only when you are quoting or, after very long consideration of alternatives, when you are using a word or phrase you consider less than ideal for the situation.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
{{BackLinks}}
</noinclude>
==About The Author==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2007 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting are credited, this copyright notice is included and visible, and a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==About This Document==
This paper complements the Internal Audit and Management Consulting guides and discussions throughout the RiskWiki. It presents a brief guide to issues of style and presentation in writing up findings generally and, with very few exceptions, applies universally to consultant and management reports (as well as to Internal Audit reports).
Texts used as the basis for some of the views presented in this document and worthy of further exploration include:
* The Penguin Working Words (Penguin 1993)
* Fowler's Modern English Usage 2nd Edition (Oxford University Press 1965)
* Oxford Dictionary (Oxford University Press)
* Style Manual 4th Edition (Australian Government Press Service 1988)
* Practical English Usage - Michael Swan (Oxford University Press 1980)
* The Cambridge Encyclopedia of Language - David Crystal (Cambridge University Press 1987)
* Deloitte Internal Audit Method, Volume 6 - Report Writing - J Bishop & J Crawford (DTT 1992-3)
* Stanton Consulting Partners Style Manual (J Bishop 1995)
* NAB IA Reporting Style Guide (J Bishop 1999 & an unknown NAB staff member)
* Bishop Phillips Consulting Style Manual (J Bishop 2000)
==Writing Style==
===Introduction===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="left" style="background-color:#FFFF99" >
====Bishop's Writing Rules:====
# Rule: The Passive puts people to sleep.
# Rule: Ending a sentence with a preposition is a situation up with which I will not put.
# Rule: Objects like subjects
# Rule: One point to a paragraph
# Rule: Get to the bottom line first
# Rule: Just do it - say what you mean.
# Rule: Readers don’t read
# Rule: Three sentences are company, four is a crowd
# Rule: Conjunctions can't commence (a sentence)
# Rule: Conjunction collections confuse
# Rule: Personalise people not things
# Rule: Negativity negates.
# Rule: DON'T SHOUT
# Rule: Don't plan to make a plan.
# Rule: Consistency is king
# Rule: Death is in the details.
# Rule: Pronouns need a noun
# Rule: Don't split the infinitive
# Rule: Unintroduced acronyms are antisocial
# Rule: Generalities are generally imprecise
# Rule: Let the facts carry the case.
</div>
</td>
</tr>
</table>
In written expression, a few simple rules can make the difference between clarity and confusion. Applying the rules in this section will help us both record our ideas efficiently and convey our meaning clearly.
The rules are a mix of style and traditional grammar identified over many years of reviewing and writing audit reports. We will need a rudimentary understanding of grammar to apply a number of these rules effectively.
Syntax assists semantics. Grammar defines the syntax of the language: the structures a sentence can follow and still be considered well formed.
Semantics is the meaning of a sentence. Syntax assists semantics by managing the flow of ideas, and distinguishing ambiguities.
Consider for a moment the classic poets' joke:
"What is this thing called love?" - The plaintive cry of a tortured heart.
"What is this thing called, love?" - The question of a curious friend on sighting a never-before-seen object.
One stray comma makes all the difference to the meaning of the question. In speech we use tone, rhythm, intonation and body language to convey meaning. In written expression we rely on syntax - the rules of grammar.
We cannot solve all problems of ambiguity in language with punctuation, but with a better understanding of grammar we can avoid the ambiguity in the first place. Take, for example, the sentence: "Flying saucers can be thrilling". This sentence seemingly can have a number of meanings:
# The act of flying a saucer can thrill the pilot.
# Seeing a saucer in flight can thrill the observer.
# The idea of a saucer that flies thrills.
We will see, however, that even in this situation, the judicious application of some simple rules when forming the sentence can result in clarity:
"Flying a saucer can thrill the pilot."
What has changed? We have moved from the general ("flying saucers") to the specific ("flying a saucer") (rule 20). We have also introduced a subject (the pilot) to the sentence where only the object and verb existed (rule 3) and applied plurals consistently (rule 15). Lastly, applying rule 1 eliminates the problem entirely:
"A pilot can be thrilled when flying a saucer."
To understand how to do this, we need a little grammar.
Since we cannot avoid grammar if we wish to understand how best to convey our meaning, our discussion will be facilitated by first establishing the definition of a few grammatical terms. This we do in the next sub-section. Armed with a few parts of speech, we will then explore the 21 rules over the subsections thereafter.
==A Grammar Crash Course==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteCavemen.png]]
</div>
</td>
</tr>
</table>
===Subject, Verb and Object===
"When nine hundred years you reach, look as good, you will not. Strong with the Force you are…"
Remember Yoda? Among the little, wrinkly, green "Star Wars" character's more distinctive features was "Yoda Speak". To a linguist, Yoda represents an imaginary member of a very rare and select group: races with languages that use an "Object - Subject - Verb" structure.
The understanding of the difference between each of these components is the first step in mastering sentence structure.
The order of subject (S) - verb (V) - object (O) (SVO) is the classic "natural" English sentence:
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td> to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td></td><td align="center" > Object </td></tr>
</table>
Things work quite well if we think of a sentence as revolving around a verb. The subject of a verb is the noun (or noun substitute) that directs the action of the verb. The object of a verb is a noun (or noun substitute) that receives the action, is affected by the action, or about which the action is concerned. In the majority of instances a noun substitute is a pronoun.
In the example, "management" directs the action and is therefore the subject, while "credit policies" are the things being "adhered to" and therefore the object. As a rough rule of thumb, if a noun phrase starts with a preposition it is a fair bet that the noun concerned is the object. In the example sentence, "to" is the preposition.
===Prepositions===
A preposition relates a word or phrase to another part of the sentence.
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td align="center" > to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Words that are prepositions include: to, in, into, on, upon, over, before, after, of, with.
In the example the word to joins (or more accurately relates) the noun phrase "credit policies" to the rest of the sentence - "Management is adhering".
A note of caution - a word that is a preposition in one case can be a conjunction in another:
* The auditor arrived before [preposition] the meeting.
* The auditor arrived before [conjunction] the meeting began.
===Conjunctions===
Conjunctions are words that join two sentences, or nouns - not in the relating role of a preposition, but either as equals or in a superior-subordinate relationship. Examples of the former include: and, but, or, nor, whereas, however. Examples of the latter include: because, when, where, if, although.
==Active and Passive Voices==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: The Passive puts people to sleep.'''
</td></tr>
</table>
Recall the earlier discussion about subjects and objects of a sentence. We observed that the "natural" order in English is Subject - Verb - Object (SVO). This is the active voice:
<table border=1 align="center" >
<tr><td >"This firm</td><td> will no longer pay </td><td align="center" > for </td><td> Overtime."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Now we will switch the subject and the object and contrast this with the same sentence expressed in the passive voice:
<table border=1 align="center" >
<tr><td >"Overtime payments</td><td> will no longer be made </td><td align="center" > by </td><td> this firm."</td></tr>
<tr><td align="center" >Object </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Subject </td></tr>
</table>
The passive voice essentially reverses the natural order from SVO to OVS.
There is nothing grammatically wrong with either construct, but even a few lines expressed in the passive voice will bore our readers to tears. This effect arises because the passive voice places the reader at a distance from the action by making the object of the sentence the primary focus rather than the subject. Consequently, things appear to come before people.
Consider the following passage (passive voice).
"Significantly more overtime than the firm average has been incurred by roboteller maintenance staff of the Antarctic Division. A number of anomalies in the time sheets including bank branches that have been closed for many years having work recorded for them by individual staff have been revealed by a detailed analysis of the time sheets. Overtime payments will no longer be made by the Antarctic Division as a consequence."
Versus the following version (active voice):
"Roboteller maintenance staff in the Antarctic Division have incurred significantly more overtime than the firm average. An analysis of the time sheets for individual staff shows a number of anomalies, including work conducted for bank branches that have been closed for a number of years. Consequently, the Antarctic Division will no longer pay for overtime."
Which one did you have to read twice? The passive voice is difficult for the reader taken even one paragraph at a time. Try reading it for an entire report and you will be angry, frustrated and tense (assuming you are still awake by the end of it).
The active voice involves the reader; it flows better than the passive; it encourages the writer to go straight to the point rather than inserting "filler words" whose sole purpose is to make the sentence hang together; and it reduces the chance of repetition (as apparent in the passage above). The passive voice is not only difficult to read, but also far more difficult (and therefore slower) to write.
In the passive voice we express the idea of the sentence before we provide the context (subject). The direct result is that our thought pattern is reversed and our ideas do not seem to flow properly. We end up adding extra words, leaving sentences hanging in mid-air (such as when we finish with a preposition) and, most importantly, failing to convince our audience of our point because they have to try too hard to understand it.
A sentence is a "word painting" of an idea. Well formed, it is a thing of beauty and, like a great painting, a joy to behold.
==Positioning of Prepositions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Ending a sentence with a preposition is a situation up with which I will not put.'''<br>
* '''Rule: Objects like Subjects.'''
</td></tr>
</table>
One of the most common errors in everyday speech is to place the preposition at the end of a sentence. Prepositions, by definition, connect and introduce a noun phrase in a sentence. After the use of the active voice, this is perhaps the single most important trick to forming logical, easily understood sentences quickly.
Given that it has become almost standard usage to let prepositions drift to the end of a sentence, why is it such a gross error?
You will recall that we defined a preposition as a word that joins and relates a noun phrase to the rest of the sentence. It literally "leads" a phrase. Without the preposition connecting the two ideas in a sentence, the sentence appears stilted (or, as in the following example, actually seems to mean something completely different):
"Management is adhering credit policies."
Consider a few examples:
<table border=1 align="center" >
<tr><th >Bad Form</th ><th >Good Form</th ></tr>
<tr><td>Where have the auditors come from?</td><td>From where have the auditors come?</td></tr>
<tr><td>Peace is worth striving for.</td><td>It is worth striving for peace.</td></tr>
<tr><td>Firm credit policies must be complied with.</td><td>Management must comply with firm credit policies.</td></tr>
</table>
The first two on the left-hand side are merely untidy, but the third highlights the problem with prepositions shifting to the end of a sentence. The version on the left-hand side leaves the sentence "hanging" and, most importantly, leaves out the subject. The lack of a subject means that it is unclear who should perform the action (i.e. objects like subjects).
If we use the active voice, and lead the sentence with the subject, we will be far less likely to end up with the versions on the left-hand side. Since a preposition generally connects the object to the subject, it is the habit of placing the object at the start of the sentence (i.e. the passive voice) that leads to sentences with the preposition at the end.
The second example on the right hand side is still unsatisfactory, because it does not identify the responsibility of the action, and consequently is a generalisation - which is too easy to fault. For whom is it better to strive for peace? An arms manufacturer may see things a little differently! A better rewrite would have been: "We will benefit both materially and socially if we strive for peace."
It is easy to put prepositions in the right place if we remember to use the words "which" and "whom":
This is the day for which we have been waiting. (Not: This is the day we have been waiting for.)
These are the results of which we heard. (Not: These are the results we heard of.)
The rule (attributed to Winston Churchill) "Ending a sentence with a preposition is a situation up with which I will not put" (instead of "Ending a sentence with a preposition is a situation I will not put up with") illustrates how to arrange the words to achieve the desired outcome. It also tends to stick in one's mind and so is easily remembered.
==The Formula For A Paragraph==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: One idea to a paragraph'''
* '''Rule: Get to the bottom line first'''
* '''Rule: Three sentences are company, four is a crowd'''
* '''Rule: Just Do It - saying what we mean.'''
* '''Rule: Readers Don't Read'''
</td></tr>
</table>
The purpose of dividing a body of writing into paragraphs is to help the reader absorb the points being made, and the writer to formulate them. These five rules are each about how to put together a paragraph that works.
A couple of simple formulae describing the sequence of sentences in a paragraph can show us what to do:
# Main Point + Counter Point + Conclusion.
# Main Point + Expansion + [Expansion].
In each case we are saying a paragraph should consist of between two and three sentences. Using more or fewer sentences in a paragraph is permissible, but to be discouraged unless it is absolutely essential for the purpose of the point. This is particularly true when we are planning to use more than three sentences (i.e. three sentences are company, four is a crowd).
A paragraph end forms a natural break in the flow of thought. By implication, we are asking the reader to absorb the entire paragraph as a single concept before they evaluate it in their minds. The longer the paragraph, the longer the reader must store the ideas before evaluation.
We risk losing the reader's attention and comprehension if we ask him or her to store the ideas temporarily for too long a time or to store too many ideas at once. Short, punchy paragraphs built around a single central idea help minimise waffle and assist the reader to rapidly absorb our message (i.e. one idea to a paragraph).
<table border=0 align="left" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…short, punchy paragraphs built around a single idea…'''''</font >
</p >
</td></tr>
</table>
It is a courtesy to the reader to endeavour to minimise the work they need to do in reading our work. Opening the paragraph with the main point allows the reader to skip the rest of the sentences in the paragraph if they agree with the point. In each of the two formulae we open with the main point (i.e. we get to the bottom line first).
The difference between the forms is that in the first formula we offer a counter point in the second sentence, which is then offset by the conclusion. In this case the conclusion should be consistent with the main point (rather than the second or counter point).
In the second formula we are presenting the main point supported by one or two additional arguments. Should we need six or seven sentences to support the point, these should be presented as a dot-point list, or subdivided into two or three logical groups and split across two or three paragraphs.
<table border=0 align="right" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…the most convincing expression of an idea is usually the simplest…'''''</font >
</p >
</td></tr>
</table>
The essence of these ideas is that the most convincing expression of an idea is usually the simplest. Winning a point through confusion is, at best, a Pyrrhic victory. If the issue is important, the reader will dwell on it, and form their own opinion. If they didn't understand your arguments, you will have no effective input into the formation of their position on the matter, other than to raise it in the first place.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteSectionStructure.png]]
</div>
</td>
</tr>
</table>
The essence of newspaper journalism is that most readers will not read most of the articles in a paper or magazine completely. Consequently, from the headline down to the end of the article the item is arranged as a series of progressively more detailed "summaries" of the information. There are usually three to four layers.
The first layer is the headline, which attempts to summarise the entire issue in a few words. The second layer is the first paragraph, which presents a twenty to thirty word summary of the issue. The third layer is the second, third and perhaps fourth paragraphs, which provide the full story, and the fourth layer provides incidental minor details.
The purpose of the structure is to allow the readers to exit at several points when they have collected sufficient information for their interest level. The approach recognises that none of us has time to read every piece of information presented to us, and when we do we tend to skim the information for issues that are relevant to us. (ie. readers don't read)
We should design our reports so that the reader does not have to read all the way to the end to "get" the issue. We can imagine this pattern as a pyramid, with the highest level summary at the top, and progressively more detail to the bottom.
==Using Conjunctions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Conjunctions can't commence (a sentence)'''
* '''Rule: Conjunction collections confuse'''
</td></tr>
</table>
<table border=0 align="right" width="400px" style="background-color:#FFFF99;margin-left:0.9em" cellpadding="2" cellspacing="10" >
<tr><td align="left">
===The Importance of Correct Punctuation===
'''''The following two passages were written by Rowland Croucher. They illustrate neatly the importance of punctuation in written expression. Only the punctuation changes between the passages….'''''
<em>Dear Thomas,
I want a man who knows what love is all about. You are generous, kind, and thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy--will you let me be yours?
Maria
----
Dear Thomas,
I want a man who knows what love is. All about you are generous, kind and thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men, I yearn; for you, I have no feelings whatsoever. When we're apart, I can be forever happy. Will you let me be?
Yours,
Maria</em>
</td></tr>
</table>
Conjunctions are important time savers and can help the flow of ideas if used correctly, but they should not be used more than once in a sentence unless splitting the sentence would detract from its meaning.
One example where two conjunctions may appear in a sentence is where the sentence contains both a list and two joined or related ideas:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan and verifying that the borrower's credit history is of sufficient standing."''
In this case the passage would be harder to follow (and perhaps even misleading) if we wrote it as:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan. The credit approval process should also verify that the borrower's credit history is of sufficient standing."''
By splitting the sentence we seem to imply that the credit history is of secondary importance to the information collected about the purpose of the loan.
These situations are generally pretty clear when they arise, but they are rare. A sentence with too many conjunctions suffers from the same problems as a paragraph with too many sentences; we have lost the reader before the end.
Some years ago Professor Manning Clark gave a Boyer lecture concerning the use of English in academic papers. One of his particular annoyances was the use of conjunctions to commence a sentence. His point was simple - a conjunction joins two sentences. If it starts the sentence it is prima-facie not joining two sentences together.
While we all recognise words like "and", "or" and "but" as conjunctions, words such as "however" and "because" are more often missed. Consider the following passage:
''"Because they operate unattended, Roboteller machines are prime targets for fraud. However, if we attach cameras to them they become leading tools in the capture of the perpetrators."''
This can be rewritten to eliminate the problem:
''"Roboteller machines are prime targets for fraud because they operate unattended. If we attach cameras to them, however, the machines become leading tools in the capture of the perpetrators."''
In rewriting the passage we also (once again) moved the subject to the start of the sentences. The "however" is redundant, and the passage can be further simplified by writing it thus:
''"Roboteller machines are prime targets for fraud because they operate unattended. The machines become leading tools in the capture of the defrauders if we attach cameras to them."''
This passage demonstrates the appropriate use of "however":
''"Overall corporate / strategic planning is adequately addressed within Premium and Private; however, management attention is required concerning:…"''
==A Few Points of Style==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Personalise people not things'''
* '''Rule: Don't plan to make a plan.'''
* '''Rule: Negativity negates.'''
* '''Rule: DON'T SHOUT'''
</td></tr>
</table>
The four rules of this subsection cover common, but minor, problems of style.
A common written mistake is for a human trait such as "need" or "requiring" to be attributed to an inanimate "thing" such that it takes on the air of an inviolate law. The practice leads to broad statements without justification and hence incomplete argument of a case. Consider:
''"The credit approvals process needs to be reviewed."''
The credit approval process cannot need anything. Only living creatures can experience need. It may be appropriate for the process to be updated, and management or the auditors may need this to occur, but the process cannot spontaneously need such improvement of itself.
Once again we find, as with so many English language errors, that the problem has arisen because of a subject/object mix-up. In the example the credit approval process, which should have been the object, has been transformed into the subject. When we rewrite it the way it should have been, we find that we are missing a significant part of the message that should have been conveyed (and is now inserted in the rewrite):
''"Management needs to review the credit approvals process focusing on the weaknesses identified in the finding."''
The new version identifies both who should perform the action and the guidelines they should follow. It also highlights another important rule (not really one of grammar but one of service quality): the recommendation as written is essentially a plan to make a plan.
Either management should make the changes identified, or they should not. If we merely ask them to review the situation, we deliver to the Board no committed improvement to the current situation. We should not say "review" when we mean "implement":
''"Management should implement the identified corrections to rectify the weaknesses in the credit approvals process identified in this report."''
Finally, we briefly consider two ad-hoc matters. The first is to do with capitalisation, while the second concerns the use of negatives.
Capitalising Every Word In a Sentence or even a Random selection Of a few words does not serve to help our presentation. Excessive capitalisation is affronting to the reader. In internet terminology this is akin to SHOUTING AT THE READER. Capitals belong at the beginning of a sentence or when naming a person, place or the title of a "thing". Capitalisation is rarely appropriate in the middle of a sentence.
Secondly, sentences should be expressed in the positive rather than the negative wherever possible. It is a standard sales technique to ask a prospect a question framed in the direction one wishes the answer to go:
"Would you prefer that my quote is open ended?"
As opposed to:
"Would you prefer that my quote is fixed?"
People tend to immediately think in sympathy with the speaker (at least until he or she threatens them with capitals!). If we express our sentences as negatives, not only do we lead the reader to naturally disagree (because our text has "trained" them to say no), but we also create a sea of double negatives, which may or may not imply a positive.
==Carrying the Case==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Death is in the details.'''
* '''Rule: Generalities are generally imprecise'''
* '''Rule: Let the facts carry the case.'''
</td></tr>
</table>
Much of what has been written in this paper goes to the issue of precision. In consulting and audit papers, accuracy of detail can determine the credibility attached to the consultant's or auditor's findings as well as the advice offered. The best strategy is to let the facts, clearly articulated, carry the argument.
The facts should not be embellished with emotional and vague descriptive words such as "large", "most", "substantially". We should state the quanta instead - "70%", "five out of eight", etc.
Try to avoid non-specific or vague words and expressions. This is especially true of quantities and times.
'''Examples'''
<table align=center >
<tr >
<th>Non-specific or vague</th><th> </th><th>Could mean or become</th>
</tr>
<tr>
<td>increased volumes</td><td></td><td>300 or more</td>
</tr>
<tr>
<td>drop in profit</td><td></td><td>profit was 20% lower</td>
</tr>
<tr>
<td>frequently</td><td></td><td>daily/weekly/monthly</td>
</tr>
<tr>
<td>rarely</td><td></td><td>once a year/decade</td>
</tr>
<tr>
<td>recently</td><td></td><td>yesterday/last week/month</td>
</tr>
<tr>
<td>shortly</td><td></td><td>tomorrow/next week/month</td>
</tr>
</table>
In the absence of statistical support for a finding, generalisation emerges. The discussion of the matter with the client becomes sidetracked over the meaning of words like "large" or "significant", rather than focussing on the issue identified and the solution required by the adviser.
Linked to these ideas is the form of words used to convey your point. Never use a long word where a short word will do. Long words may be interpreted by the reader as a deliberate attempt to mask puerility with false grandeur, because the underlying point is decrepit or flawed. (See what I mean?)
Having said that, do not be frightened of using a long or technically correct word simply because it has more than one syllable. You can always provide a glossary of terms at the start of the document (and frequently that is a good idea for even some commonly misused terms). If your reader needs to become a little more educated to understand your work, then fine.
Writing is not about stooping to the lowest common denominator, but it is about communicating your point accurately and effectively. That is: you must actually get your point across; not merely make your reader feel inadequate. There is no point in being right, if nobody realises.
The point, then, is to use the shortest possible ''correct'' word - not merely the shortest word.
As a rule-of-thumb, if your reader has to seek out the meaning of more than two or three words in your report you have probably lost them...and they will probably resent you for it. Know your audience, prepare your audience for your language, and make sure they don't feel stupid by the end of it.
The customer for a consulting or audit report needs to be assured that adopting recommendations based upon the consultant's finding will add value to the business.
Auditors (particularly) need to go well beyond describing what is wrong. They need to explain the meaning of any finding: how it affects the organisation’s bottom line; the potential cost of not addressing a problem; and the likelihood of exposure or error.
Likewise, consultants need to go well beyond simply parroting back the latest theory they discovered in the bottom of a glass of scotch or on the back of the cereal packet that morning. Consultants need to do a little more of the 'audit' thing and actually analyse what is really wrong before arguing convincingly for change.
Wherever possible in all such instances, be specific. "Numerous", "several" and "many" are words lacking specifics. If this flies in the face of other advice to be brief, so be it.
The auditor/consultant should attempt to quantify the financial impact of a finding. While it may not be possible to arrive at a figure with mathematical precision, an informed guess can help management make a decision.
Following are some specific examples of content.
'''Poor'''
Differences exist in the cost of processing biscuit requisitions in various regions.
'''Better'''
The cost of processing biscuit requisitions differs from region to region. Vancouver can process a requisition for AUD 8 cents while the equivalent in Australia is AUD 15 cents. Australia might save up to AUD 15 million by adopting Vancouver’s methods.
'''Poor'''
There is a lack of adequate management information to support activities and to facilitate meaningful comparisons between regional units.
'''Better'''
Management information is inadequate: staff costs are not analysed for benchmarking across various offices; calculation of product profitability does not include processing costs; and there is no allocation of fees and interest income by product type.
Finally, '''summaries''' are meant to be just that: a tight condensation of the main point or points of an issue. Be ruthless in getting rid of perhaps interesting but non-essential pieces of additional information – but retain the specifics.
==Tense, Pronouns and Infinitives==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Don't split the infinitive'''
* '''Rule: Consistency is king'''
* '''Rule: Pronouns need a noun'''
* '''Rule: Unintroduced acronyms are antisocial'''
</td></tr>
</table>
"To Boldly Go Where No Man Has Gone Before…" Perhaps one of the most recognised phrases in the English language, this bite of the Star Trek prime directive is also a prime example of atrocious English! This is a classic example of the split infinitive (not to mention the redundant preposition at the end of the sentence).
The directive should have read:
" Boldly To Go Where No Man Has Gone…" or less poetically, " To Go Boldly Where No Man Has Gone…"
Perhaps, it would be best as:
"Go boldly, where none have gone.."
The infinitive is the basic form of a verb, invariably commencing with "to". It generally has no subject and, according to luminaries on the subject, should not be split. The reason is more stylistic than grammatical. The problem with split infinitives is more obvious when a few words are inserted between the "to" and its verb:
"The Roboteller machines are expected to really try hard to accurately and silently recognise the customer's identity."
Can be improved by:
"The Roboteller machines are expected to try really hard to recognise the customer's identity accurately and silently."
There are two common ways to avoid the split infinitive. Both are presented in the rewrite above. The first is simply to move the offending adverb after the verb, although sometimes this leads to a stilted speech pattern. The second is to move the adverb(s) to the end of the sentence as above.
Pronouns are words like he, she, it, etc that substitute for a noun like Jim, Phred or bank branch. The noun to which a pronoun relates is established by the context in which the pronoun is placed. Consequently, if too many pronouns are used together it becomes very difficult to determine for which noun an individual pronoun substitutes. As a general rule the target noun should immediately precede its related pronoun and be refreshed at least every two pronouns.
Similarly, an acronym (an abbreviation substituting for a noun or phrase) should immediately follow the originating word or phrase the first time it is used. For example:
"The National Australia Bank (NAB) is a large and wonderful establishment. The NAB has an effective and happy audit team."
A completely unrelated matter (but grouped here for convenience) is that of consistency in the use of plurals and tense. It should be apparent to all authors that the use of the singular in a sentence should be reflected consistently throughout the rest of the sentence. It may be less obvious that the same rule applies to verb tense.
If we express a verb in one tense, such as the present continuous as in "I am having a good day", the balance of the argument should normally be presented in the same tense. This is not a strict rule, because there will be situations in which a finding will relate a historic situation in the lead sentence, while the discussion relates an assessment that is in the present tense.
It is reasonable to say that within a sentence changes in tense will generally create confusion, unless separated by a conjunction. For example:
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures were not obtained at all times."
Not
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures are not obtained at all times."
But the following would be acceptable, because the first part states a continuous state, while the latter part describes an historic observation relating to the first situation:
"In Antarctic Division wire transfer requests are accepted via e-mail and customer signatures were not obtained at all times."
Agreement of subject and verb: A singular subject demands a singular verb; a plural one demands a plural verb. Many such problems are caused by long sentences overloaded with adjectives and subordinate clauses where the subject is separated from its verbs. This is another reason for keeping sentences short.
Sometimes the rule is not immediately obvious, such as in the case of "none": "none were" should be "none was" (none = not one or no one).
Example: None of us is perfect.
==Confusing Words==
These words are often confused:
* Affect (to impact upon, to assume) / effect (to bring about a change in)
* Object (the purpose)/ objective (the point of an exercise - usually military)
* Idol (a religious artefact, or object of worship) / Idyll (an imaginary ideal, or pastoral setting) / Idle (lazy, not in motion)
* Whom (the objective form of the relative pronoun) / who (the subjective form of the relative pronoun)
===A note about affect & effect===
A frequent source of error is confusion in the use of the similar-sounding words affect and effect, and of continual and continuous.
A cause for confusion is that affect is always a verb while effect can be either a noun or a verb. Both continual and continuous are adjectives.
Affect is a verb meaning to influence. Effect as a verb means to bring about; as a noun it is equivalent to the word result.
The following represent correct usage.
Examples:
* Errors in computing affected the accuracy of the result.
* The effect of errors in computing was to produce an inaccurate result.
* Smoking cigarettes may affect your lungs.
* Giving up smoking had no effect on her general health.
* I didn’t finish the report because of continual telephone interruptions.
* Lights are left on in traffic tunnels to provide continuous illumination.
===A note about "due to"===
"Due to" is often used in the sense of through, because of or owing to. Mostly those alternatives are to be preferred. But it is correct to use due to in the sense of being attributable to.
Example: The plane crash was due to bad visibility.
Don’t rely on your computer’s spellchecker for advice on grammar or correct spelling. Some systems are misleading. For example, you may be advised to change personal to personnel (or the other way round).
===A note about who & whom===
"Captain Kirk is the man whom the federation pays to fly the Enterprise." (Whom is the object of pays - the pronoun affected by the action of payment)
And
"Captain Kirk is the man who we think flies the Enterprise." (Who is the subject of flies, not the object of think).
==Punctuation==
Punctuation matters.
* "What is this thing called love?" (As in: Let me count the ways...)
* "What! Is this thing called love?" (As in: Let me out of here...)
* "What is this thing called, love?" (As in: OMG! You are not coming near me with that!)
===Comma===
Used when essential for clarity or to indicate a small interruption in continuity of thought. Short sentence construction reduces the need for commas.
===Semicolon===
Using a semicolon indicates a pause greater than a comma but less than a colon or full stop. Often a semicolon helps to alert the reader to an alternative or compensating thought.
'''Example:''' ''The risk of lost muffins was high; however, quick action averted this crisis.''
Semicolons should be used at the end of each line in a series of bullet points as an alternative to commas (see below).
'''Example:'''
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months;
* Schedule extra training for these and permanent staff;
* Upgrade software in the Biscuit Dispensing Machine;
* Simplify the standard form used for requisitioning for biscuits from the kitchen from ten pages to five; and
* Remove the requirement for VP Supply, VP HR, and CEO counter signing of all biscuit requisitions.
</em>
===Colon===
The colon is used to introduce a quotation, summary, conclusion or list of bullet points (as in the example above); or to introduce a list within a sentence.
'''Example:'''
''The report contains the following sections: employment, training, promotion, legal compliance, relations with other departments.''
===Full stop===
(Period in U.S. usage)
As well as indicating the end of a sentence, full stops are used in some abbreviations. It has become common for periods to be omitted from word abbreviations. We counsel against such a style: given the plethora of acronyms and technical jargon in today's language, using the period to signal that a word is an abbreviation of a possibly familiar word, rather than a technical term unknown to the reader, adds to clarity.
Where a bulleted list includes points that have more than one sentence, it is preferable to separate the points with full stops rather than the semicolons set out in the previous example.
Example:
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months. Qualifications include large appetites and general slothfulness. It is estimated that salaries will be approximately $13,000 per month each plus biscuits.
* Schedule extra training for these and permanent staff. It is anticipated the training officer will need to allocate three hours weekly to the task.
* Upgrade software . . . (etc)
</em>
Note that where a full stop is used in a dot-point list, no conjunction is used to join the last two items.
Regardless of which dot-point separator is chosen, it MUST be used consistently throughout the list and ideally the document.
===Hyphen===
General usage previously demanded that a hyphen be used if a prefix or suffix had the same letter as the word to which it was attached. So cooperate and coordinate generally were spelt co-operate and co-ordinate; hyphens in these instances are unnecessary. While reinforce and react are other examples where hyphens are not needed, sometimes a hyphen provides a warning that a word should not be read as a single syllable (e.g. re-use). Words formed by using the prefix non- should nearly always be hyphenated (e.g. non-compliant, non-aligned) as with some words prefixed by pre- (e.g. pre-existing).
===Apostrophe===
Used to indicate possession or the omission of letters in a contraction.
'''Examples'''
<em>
* Bill’s car was taken to the wreckers.
* Bill hasn’t had time to replace his car yet.
</em>
There is often confusion about its and it’s. The simple test is whether the construction of a sentence means it is (or it has etc). If so, it’s is a contraction and needs an apostrophe; if not, its is a pronoun and needs no apostrophe. (Warning: Don’t get fooled by some computer spellchecking systems which get this wrong.)
A rough rule of thumb: if we are using "it" in the possessive sense (as in "its red tyre"), leave out the "'".
'''Examples'''
<em>
* It’s been a long time between drinks.
* The engine was tuned but its vibration wasn’t greatly reduced.
</em>
===Ellipses===
An ellipsis indicates that words have been omitted from a quotation and is represented by three full stops separated by spaces.
'''Example'''
''Now is the time . . . to come to the aid of the party.''
===Quote marks===
These should not be used for emphasis. Use bold type or italic instead. Use quotation marks only when you are quoting or, after very long consideration of alternatives, when you are using a word or phrase you consider less than ideal for the situation.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
{{BackLinks}}
</noinclude>
==About The Author==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2007 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting are credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==About This Document==
This paper complements the Internal Audit and Management Consulting guides and discussions throughout the RiskWiki. It presents a brief guide to issues of style and presentation in writing up findings generally and, with very few exceptions, applies universally to consultant and management reports (as well as to Internal Audit Reports).
Texts used as the basis for some of the views presented in this document and worthy of further exploration include:
* The Penguin Working Words (Penguin 1993)
* Fowler's Modern English Usage 2nd Edition (Oxford University Press 1965)
* Oxford Dictionary (Oxford University Press)
* Style Manual 4th Edition (Australian Government Press Service 1988)
* Practical English Usage - Michael Swan (Oxford University Press 1980)
* The Cambridge Encyclopedia of Language - David Crystal (Cambridge University Press 1987)
* Deloitte Internal Audit Method, Volume 6 - Report Writing - J Bishop & J Crawford (DTT 1992-3)
* Stanton Consulting Partners Style Manual (J Bishop 1995)
* NAB IA Reporting Style Guide ( J Bishop -1999- & an Unknown NAB Staff Member)
* Bishop Phillips Consulting Style Manual (J Bishop 2000)
==Writing Style==
===Introduction===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="left" style="background-color:#FFFF99" >
====Bishop's Writing Rules:====
# Rule: The Passive puts people to sleep.
# Rule: Ending a sentence with a preposition is a situation up with which I will not put.
# Rule: Objects like subjects
# Rule: One point to a paragraph
# Rule: Get to the bottom line first
# Rule: Just do it - say what you mean.
# Rule: Readers don’t read
# Rule: Three sentences are company, four is a crowd
# Rule: Conjunctions can't commence (a sentence)
# Rule: Conjunction collections confuse
# Rule: Personalise people not things
# Rule: Negativity negates.
# Rule: DON'T SHOUT
# Rule: Don't plan to make a plan.
# Rule: Consistency is king
# Rule: Death is in the details.
# Rule: Pronouns need a noun
# Don't split the infinitive
# Rule: Unintroduced acronyms are antisocial
# Rule: Generalities are generally imprecise
# Rule: Let the facts carry the case.
</div>
</td>
</tr>
</table>
In written expression, a few simple rules can make the difference between clarity and confusion. Applying the rules in this section will help us both record our ideas efficiently and convey our meaning clearly.
The rules are a mix of style and traditional grammar identified over many years of reviewing and writing audit reports. We will need a rudimentary understanding of grammar to apply a number of these rules effectively.
Syntax assists semantics. Grammar defines the syntax of the language. Good syntax describes the structures a sentence can follow and still be considered well formed.
Semantics is the meaning of a sentence. Syntax assists semantics by managing the flow of ideas, and distinguishing ambiguities.
Consider for a moment the classic poets' joke:
"What is this thing called love?" - The plaintive cry of a tortured heart.
"What is this thing called, love?" -The question of a curious friend on sighting a never before seen object.
One stray comma makes all the difference to the meaning of the question. In speech we use tone, rhythm, intonation and body language to convey meaning. In written expression we rely on syntax - the rules of grammar.
We cannot solve all problems of ambiguity in language with punctuation, but with a better understanding of grammar we can avoid the ambiguity in the first place. Take, for example, the sentence: "Flying saucers can be thrilling". This sentence can seemingly have a number of meanings:
# The act of flying a saucer can thrill the pilot.
# Seeing a saucer in flight can thrill the observer.
# The idea of a saucer that flies thrills.
We will see, however, that even in this situation, the judicious application of some simple rules when forming the sentence can result in clarity:
"Flying a saucer can thrill the pilot."
What has changed? We have moved from the general ("flying saucers") to the specific ("flying a saucer") (rule 20). We have also introduced a subject (the pilot) to the sentence where only the object and verb existed (rule 3) and applied plurals consistently (rule 15). Lastly, applying rule 1 eliminates the problem entirely:
"A pilot can be thrilled when flying a saucer."
To understand how to do this, we need a little grammar.
Since we cannot avoid grammar if we wish to understand how best to convey our meaning, our discussion will be facilitated by first establishing the definition of a few grammatical terms. This we do in the next sub-section. Armed with a few parts of speech, we will then explore the rules over the subsections thereafter.
==A Grammar Crash Course==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteCavemen.png]]
</div>
</td>
</tr>
</table>
===Subject, Verb and Object===
"When nine hundred years you reach, look as good, you will not. Strong with the Force you are…"
Remember Yoda? Among the little, wrinkly, green "Star Wars" character's more distinctive features was "Yoda Speak". To a linguist, Yoda represents an imaginary member of a very rare and select group: races with languages that use an "Object - Subject - Verb" structure.
The understanding of the difference between each of these components is the first step in mastering sentence structure.
The order of subject (S) - verb (V) - object (O) (SVO) is the classic "natural" English sentence:
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td> to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td></td><td align="center" > Object </td></tr>
</table>
Things work quite well if we think of a sentence as revolving around a verb. The subject of a verb is the noun (or noun substitute) that directs the action of the verb. The object of a verb is a noun (or noun substitute) that receives the action, is affected by the action, or about which the action is concerned. In the majority of instances a noun substitute is a pronoun.
In the example "management" directs the action and is therefore the subject, while "credit policies" are the things being "adhered to" and therefore the object. As a rough rule of thumb, if the noun phrase starts with a preposition it is a fair bet that the noun concerned is the object. In the example sentence, "to" is the preposition.
===Prepositions===
A preposition relates a word or phrase to another part of the sentence.
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td align="center" > to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Words that are prepositions include: to, in, into, on, upon, over, before, after, of, with.
In the example the word "to" joins (or, more accurately, relates) the noun phrase "credit policies" to the rest of the sentence - "Management is adhering".
A note of caution - a word that is a preposition in one case can be a conjunction in another:
* The auditor arrived before [preposition] the meeting.
* The auditor arrived before [conjunction] the meeting began.
===Conjunctions===
Conjunctions are words that join two sentences or nouns, not relating one to the other as a preposition does, but joining them either as equals or in a superior-subordinate relationship. Examples of the former include: and, but, or, nor, whereas, however. Examples of the latter include: because, when, where, if, although.
==Active and Passive Voices==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: The Passive puts people to sleep.'''
</td></tr>
</table>
Recall the earlier discussion about subjects and objects of a sentence. We observed that the "natural" order in English is Subject - Verb - Object (SVO). This is the active voice:
<table border=1 align="center" >
<tr><td >"This firm</td><td> will no longer pay </td><td align="center" > for </td><td> Overtime."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Now we will switch the subject and the object and contrast this with the same sentence expressed in the passive voice:
<table border=1 align="center" >
<tr><td >"Overtime payments</td><td> will no longer be made </td><td align="center" > by </td><td> this firm."</td></tr>
<tr><td align="center" >Object </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Subject </td></tr>
</table>
The passive voice essentially reverses the natural order from SVO to OVS.
There is nothing grammatically wrong with either construct, but even a few lines expressed in the passive voice will bore our readers to tears. This effect arises because the passive voice places the reader at a distance from the action by making the object of the sentence the primary focus rather than the subject. Consequently, things appear to come before people.
Consider the following passage (passive voice).
"Significantly more overtime than the firm average has been incurred by roboteller maintenance staff of the Antarctic Division. A number of anomalies in the time sheets including bank branches that have been closed for many years having work recorded for them by individual staff have been revealed by a detailed analysis of the time sheets. Overtime payments will no longer be made by the Antarctic Division as a consequence."
Versus the following version (active voice)
"Roboteller maintenance staff in the Antarctic Division have incurred significantly more overtime than the firm average. An analysis of the time sheets for individual staff shows a number of anomalies, including work conducted for bank branches that have been closed for a number of years. Consequently, the Antarctic Division will no longer pay for overtime."
Which one did you have to read twice? The passive voice is difficult for the reader taken even one paragraph at a time. Try reading it for an entire report and you will be angry, frustrated and tense (assuming you are still awake by the end of it).
The active voice involves the reader; it flows better than the passive; it encourages the writer to go straight to the point rather than inserting "filler words" whose sole purpose is to make the sentence hang together; and it reduces the chance of repetition (as apparent in the passage above). The passive voice is not only difficult to read, but also far more difficult (and therefore slower) to write.
In the passive voice we express the idea of the sentence before we provide the context (subject). The direct result is that our thought pattern is reversed and our ideas do not seem to flow properly. We end up adding extra words, leaving sentences hanging in mid-air (such as when we finish with a preposition) and, most importantly, failing to convince our audience of our point because they have to try too hard to understand it.
A sentence is a "word painting" of an idea. Well formed it is a thing of beauty and, like a great painting, a joy to behold.
==Positioning of Prepositions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Ending a sentence with a preposition is a situation up with which I will not put.'''<br>
* '''Rule: Objects like Subjects.'''
</td></tr>
</table>
One of the most common errors in common speech is to place the preposition at the end of a sentence. Prepositions, by definition, connect and introduce a noun phrase in a sentence. After the use of the active voice, this is perhaps the single most important trick to forming logical, easily understood sentences quickly.
Given that it has become almost standard usage to let prepositions drift to the end of a sentence, why is it such a gross error?
You will recall that we defined a preposition as a word that joins and relates a noun phrase to the rest of the sentence. It literally "leads" a phrase. Without the preposition connecting the two ideas, the sentence appears stilted (or, as in the following example, seems to mean something completely different):
"Management is adhering credit policies."
Consider a few examples:
<table border=1 align="center" >
<tr><th >Bad Form</th ><th >Good Form</th ></tr>
<tr><td>Where have the auditors come from?</td><td>From where have the auditors come?</td></tr>
<tr><td>Peace is worth striving for.</td><td>It is worth striving for peace.</td></tr>
<tr><td>Firm credit policies must be complied with.</td><td>Management must comply with firm credit policies.</td></tr>
</table>
The first two on the left-hand side are merely untidy, but the third highlights the problem with prepositions shifting to the end of a sentence. The version on the left-hand side leaves the sentence "hanging" and most importantly, leaves out the subject. The lack of a subject in the sentence means that it is unclear who should perform the action. (ie. Objects Like Subjects)
If we use the active voice, and lead the sentence with the subject, we will be far less likely to end up with the versions on the left hand side. Since a preposition generally connects the object to the subject, it is the habit of placing the object at the start of the sentence (i.e. the passive voice) that leads to sentences with the preposition at the end.
The second example on the right hand side is still unsatisfactory, because it does not identify the responsibility of the action, and consequently is a generalisation - which is too easy to fault. For whom is it better to strive for peace? An arms manufacturer may see things a little differently! A better rewrite would have been: "We will benefit both materially and socially if we strive for peace."
It is easy to put prepositions in the right place if we remember to use the words "which" and "whom":
This is the day for which we have been waiting. (Not: This is the day we have been waiting for.)
These are the results of which we heard. (Not: These are the results we heard of.)
The rule (attributed to Winston Churchill) "Ending a sentence with a preposition is a situation up with which I will not put" (instead of - "Ending a sentence with a preposition is a situation I will not put up with.") illustrates how to arrange the words to achieve the desired outcome. It also tends to stick in one's mind and so is easily remembered.
==The Formula For A Paragraph==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: One idea to a paragraph'''
* '''Rule: Get to the bottom line first'''
* '''Rule: Three sentences are company, four is a crowd'''
* '''Rule: Just Do It - saying what we mean.'''
* '''Rule: Readers Don't Read'''
</td></tr>
</table>
The purpose of dividing a body of writing into paragraphs is to help the reader absorb the points being made, and the writer to formulate them. These five rules are each about how to put together a paragraph that works.
A couple of simple formulae describing the sequence of sentences in a paragraph can show us what to do:
# Main Point + Counter Point + Conclusion.
# Main Point + Expansion + [Expansion].
In each case we are saying a paragraph should consist of between two and three sentences. Using more or fewer sentences in a paragraph is permissible, but to be discouraged unless it is absolutely essential for the purpose of the point. This is particularly true when we are planning to use more than three sentences. (ie Three sentences are company, four is a crowd)
A paragraph end forms a natural break in the flow of thought. By implication, we are asking the reader to absorb the entire paragraph as a single concept before they evaluate it in their minds. The longer the paragraph, the longer the reader must store the ideas before evaluation.
We risk losing the reader's attention and comprehension if we ask him or her to store the ideas temporarily for too long a time or to store too many ideas at once. Short, punchy paragraphs built around a single central idea help minimise waffle and assist the reader to absorb our message rapidly. (i.e. One idea to a paragraph)
<table border=0 align="left" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…short, punchy paragraphs built around a single idea…'''''</font >
</p >
</td></tr>
</table>
It is a courtesy to the reader to minimise the work they need to do in reading our work. Opening the paragraph with the main point allows the reader to skip the rest of the sentences in the paragraph if they agree with the point. In each of the two formulae we open with the main point (ie. we get to the bottom line first).
The difference between the forms is that in the first formula we offer a counter point in the second sentence, which is then offset by the conclusion. In this case the conclusion should be consistent with the main point (rather than the second or counter point).
In the second formula we are presenting the main point supported by one or two additional arguments. Should we need six or seven sentences to support the point, these should be presented as a dot-point list, or subdivided into two or three logical groups and split across two or three paragraphs.
<table border=0 align="right" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…the most convincing expression of an idea is usually the simplest…'''''</font >
</p >
</td></tr>
</table>
The essence of these ideas is that the most convincing expression of an idea is usually the simplest. Winning a point through confusion is, at best, a Pyrrhic victory. If the issue is important, the reader will dwell on it, and form their own opinion. If they didn't understand your arguments, you will have no effective input into the formation of their position on the matter, other than to raise it in the first place.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteSectionStructure.png]]
</div>
</td>
</tr>
</table>
The essence of newspaper journalism is that most readers will not read most of the articles in a paper or magazine completely. Consequently, from the headline down to the end of the article the item is arranged as a series of progressively more detailed "summaries" of the information. There are usually three to four layers.
The first layer is the headline, which attempts to summarise the entire issue in a few words. The second layer is the first paragraph which presents a twenty to thirty word summary of the issue. The third layer is the second, third and perhaps fourth paragraphs, which provide the full story and the fourth layer provides incidental minor details.
The purpose of the structure is to allow the readers to exit at several points when they have collected sufficient information for their interest level. The approach recognises that none of us has time to read every piece of information presented to us, and when we do we tend to skim the information for issues that are relevant to us. (ie. readers don't read)
We should design our reports so that the reader does not have to read all the way to the end to "get" the issue. We can imagine this pattern as a pyramid, with the highest level summary at the top, and progressively more detail to the bottom.
==Using Conjunctions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Conjunctions can't commence (a sentence)'''
* '''Rule: Conjunction collections confuse'''
</td></tr>
</table>
<table border=0 align="right" width="400px" style="background-color:#FFFF99;margin-left:0.9em" cellpadding="2" cellspacing="10" >
<tr><td align="left">
===The Importance of Correct Punctuation===
'''''The following two passages were written by Rowland Croucher. They illustrate neatly the importance of punctuation in written expression. Only the punctuation changes between the passages….'''''
<em>Dear Thomas,
I want a man who knows what love is all about. You are generous, kind, and thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy--will you let me be yours?
Maria
----
Dear Thomas,
I want a man who knows what love is. All about you are generous, kind and thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men, I yearn; for you, I have no feelings whatsoever. When we're apart, I can be forever happy. Will you let me be?
Yours,
Maria</em>
</td></tr>
</table>
Conjunctions are important time savers and can help the flow of ideas if used correctly, but they should not be used more than once in a sentence unless splitting the sentence would detract from its meaning.
One example where two conjunctions may appear in a sentence is where the sentence contains both a list and two joined or related ideas:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan and verifying that the borrower's credit history is of sufficient standing."''
In this case the passage would be harder to follow (and perhaps even misleading) if we wrote it as:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan. The credit approval process should also verify that the borrower's credit history is of sufficient standing."''
By splitting the sentence we seem to imply that the credit history is of secondary importance to the information collected about the purpose of the loan.
These situations are generally pretty clear when they arise, but they are rare. A sentence with too many conjunctions suffers from the same problems as a paragraph with too many sentences; we have lost the reader before the end.
Some years ago Professor Manning Clark gave a Boyer lecture concerning the use of English in academic papers. One of his particular annoyances was the use of conjunctions to commence a sentence. His point was simple - a conjunction joins two sentences. If it starts the sentence, it is prima facie not joining two sentences together.
While we all recognise words like "and", "or" and "but" as conjunctions, words such as "however" and "because" are more often missed. Consider the following passage:
''"Because they operate unattended, Roboteller machines are prime targets for fraud. However, if we attach cameras to them they become leading tools in the capture of the perpetrators."''
This can be rewritten to eliminate the problem:
''"Roboteller machines are prime targets for fraud because they operate unattended. If we attach cameras to them, however, the machines become leading tools in the capture of the perpetrators."''
In rewriting the passage we also (once again) moved the subject to the start of the sentences. The "however" is redundant, and the passage can be further simplified by writing it thus:
''"Roboteller machines are prime targets for fraud because they operate unattended. The machines become leading tools in the capture of the defrauders if we attach cameras to them."''
This passage demonstrates the appropriate use of "however":
''"Overall corporate / strategic planning is adequately addressed within Premium and Private, however, management attention is required concerning:…"''
==A Few Points of Style==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Personalise people not things'''
* '''Rule: Don't plan to make a plan.'''
* '''Rule: Negativity negates.'''
* '''Rule: DON'T SHOUT'''
</td></tr>
</table>
The four rules of this subsection cover common, but minor, problems of style.
A common written mistake is for a human trait such as "need" or "requiring" to be attributed to an inanimate "thing" such that it takes on the air of an inviolate law. The practice leads to broad statements without justification and hence incomplete argument of a case. Consider:
''"The credit approvals process needs to be reviewed."''
The credit approval process cannot need anything. Only living creatures can experience need. It may be appropriate for the process to be updated, and management or the auditors may need this to occur, but the process cannot spontaneously need such improvement of itself.
Once again we find, as with so many English language errors, that the problem has arisen because of a subject / object mix-up. In the example the credit approval process, which should have been the object, has been transformed into the subject. When we rewrite it the way it should have been, we find that we are missing a significant part of the message that should have been conveyed (and is now inserted in the rewrite):
''"Management needs to review the credit approvals process focusing on the weaknesses identified in the finding."''
The new version both identifies who should perform the action and the guidelines they should follow. It also highlights another important rule (not really one of grammar but one of service quality): the recommendation as written is essentially a plan to make a plan.
Either management should make the changes identified, or they should not. If we merely request them to review the situation we are delivering no committed improvement for the current situation to the Board. We should not say "review" when we mean "implement":
''"Management should implement the identified corrections to rectify the weaknesses in the credit approvals process identified in this report."''
Finally, we briefly consider two ad-hoc matters. The first is to do with capitalisation, while the second concerns the use of negatives.
Capitalising Every Word In a Sentence or even a Random selection Of a few words does not serve to help our presentation. Excessive capitalisation is an affront to the reader. In internet terminology this is akin to SHOUTING AT THE READER. Capitals belong at the beginning of a sentence or when naming a person, place or the title of a "thing". Capitalisation is rarely appropriate in the middle of a sentence.
Secondly, sentences should be expressed in the positive rather than the negative wherever possible. It is a standard sales technique to ask a prospect a question framed in the direction one wishes the answer to go:
"Would you prefer that my quote is open ended?"
As opposed to:
"Would you prefer that my quote is fixed?"
People tend to immediately think in sympathy with the speaker (at least until he or she threatens them with capitals!). If we express our sentences as negatives, not only do we lead the reader to naturally disagree (because they have been "trained" to say no by our text), but we also create a sea of double negatives, which may or may not imply a positive.
==Carrying the Case==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Death is in the details.'''
* '''Rule: Generalities are generally imprecise'''
* '''Rule: Let the facts carry the case.'''
</td></tr>
</table>
Much of what has been written in this guide goes to the issue of precision. In consulting and audit papers, accuracy of detail can determine the credibility attached to the consultant's/auditor's findings as well as the advice offered. The best strategy is to let the facts, clearly articulated, carry the argument.
The facts should not be embellished with emotional and vague descriptive words such as "large", "most", "substantially". We should state the quanta instead - "70%", "five out of eight", etc.
Try to avoid non-specific or vague words and expressions. This is especially true of quantities and times.
'''Examples'''
<table align=center >
<tr >
<th>Non-specific or vague</th><th> </th><th>Could mean or become</th>
</tr>
<tr>
<td>increased volumes</td><td></td><td>300 or more</td>
</tr>
<tr>
<td>drop in profit</td><td></td><td>profit was 20% lower</td>
</tr>
<tr>
<td>frequently</td><td></td><td>daily/weekly/monthly</td>
</tr>
<tr>
<td>rarely</td><td></td><td>once a year/decade</td>
</tr>
<tr>
<td>recently</td><td></td><td>yesterday/last week/month</td>
</tr>
<tr>
<td>shortly</td><td></td><td>tomorrow/next week/month</td>
</tr>
</table>
In the absence of statistical support for a finding, generalisation emerges. The discussion of the matter with the client becomes sidetracked over the meaning of words like "large" or "significant", rather than focussing on the issue identified and the solution required by the adviser.
Linked to these ideas is the form of words used to convey your point. Never use a long word where a short word will do. Long words may be interpreted by the reader as a deliberate attempt to mask puerility with false grandeur, because the underlying point is decrepit or flawed. (See what I mean?)
Having said that, do not be frightened of using a long or technically correct word simply because it has more than one syllable. You can always provide a glossary of terms at the start of the document (and frequently that is a good idea, even for some commonly misused terms). If your reader needs to get a little more educated to understand your work, then fine.
Writing is not about stooping to the lowest common denominator, but it is about communicating your point accurately and effectively. That is: you must actually get your point across; not merely make your reader feel inadequate. There is no point in being right, if nobody realises.
The point, then, is to use the shortest possible ''correct'' word - not merely the shortest word.
As a rule of thumb, if your reader has to seek out the meaning of more than two or three words in your report you have probably lost them...and they will probably resent you for it. Know your audience, prepare your audience for your language, and make sure they don't feel stupid by the end of it.
The customer for a consulting or audit report needs to be assured that adopting recommendations based upon the consultant's finding will add value to the business.
Auditors (particularly) need to go well beyond describing what is wrong. They need to explain the meaning of any finding: how it affects the organisation’s bottom line; the potential cost of not addressing a problem; and the likelihood of exposure or error.
Likewise, consultants need to go well beyond simply parroting back the latest theory they discovered in the bottom of a glass of scotch or on the back of the cereal packet that morning. Consultants need to do a little more of the 'audit' thing and actually analyse what is really the issue before arguing convincingly for change.
Wherever possible in all such instances, be specific. "Numerous", "several" and "many" are words lacking in specifics. If this flies in the face of other advice to be brief, so be it.
The auditor/consultant should attempt to quantify the financial impact of a finding. While it may not be possible to arrive at a figure with mathematical precision, an informed guess can help management make a decision.
To be specific, the following are some examples.
'''Poor'''
Differences exist in the cost of processing biscuit requisitions in various regions.
'''Better'''
The cost of processing biscuit requisitions differs from region to region. Vancouver can process a cheque for AUD 8 cents while the equivalent in Australia is AUD 15 cents. Australia might save up to AUD 15 million by adopting Vancouver’s methods.
'''Poor'''
There is a lack of adequate management information to support activities and to facilitate meaningful comparisons between regional units.
'''Better'''
Management information is inadequate: staff costs are not analysed for benchmarking across various offices; calculation of product profitability does not include processing costs; and there is no allocation of fees and interest income by product type.
Finally, '''summaries''' are meant to be just that: a tight condensation of the main point or points of an issue. Be ruthless in getting rid of perhaps interesting but non-essential pieces of additional information – but retain the specifics.
==Tense, Pronouns and Infinitives==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Don't split the infinitive'''
* '''Rule: Consistency is king'''
* '''Rule: Pronouns need a noun'''
* '''Rule: Unintroduced acronyms are antisocial'''
</td></tr>
</table>
"To Boldly Go Where No Man Has Gone Before…" Perhaps one of the most recognised phrases in the English language, this snippet of the Star Trek prime directive is also a prime example of atrocious English! It is a classic example of the split infinitive (not to mention the redundant preposition at the end of the sentence).
The directive should have read:
" Boldly To Go Where No Man Has Gone…" or less poetically, " To Go Boldly Where No Man Has Gone…"
Perhaps, it would be best as:
"Go boldly, where none have gone.."
The infinitive is the basic form of the verb, invariably commencing with "to". It generally has no subject, and according to luminaries on the subject it should not be split. The reason is more stylistic than grammatical. The problem with split infinitives is more obvious when a few words are inserted between the "to" and its verb:
"The Roboteller machines are expected to really try hard to accurately and silently recognise the customer's identity."
Can be improved by:
"The Roboteller machines are expected to try really hard to recognise the customer's identity accurately and silently."
There are two common ways to avoid the split infinitive. Both are presented in the rewrite above. The first is simply to move the offending adverb after the verb, although sometimes this leads to a stilted speech pattern. The second is to move the adverb(s) to the end of the sentence, as above.
Pronouns are words like he, she, it, etc. that substitute for a noun like Jim, Phred or bank branch. The noun to which a pronoun relates is established by the context in which the pronoun is placed. Consequently, if too many pronouns are used together it becomes very difficult to determine for which noun an individual pronoun substitutes. As a general rule the target noun should immediately precede its related pronoun and be refreshed at least every two pronouns.
Similarly, an acronym (an abbreviation substituting for a noun or phrase) should be immediately preceded, the first time it is used, by the originating word or phrase. For example:
"The National Australia Bank (NAB) is a large and wonderful establishment. The NAB has an effective and happy audit team."
A completely unrelated matter (but grouped here for convenience) is that of consistency in the use of plurals and tense. It should be apparent to all authors that the use of the singular in a sentence should be reflected continuously throughout the rest of the sentence. It may be less obvious that the same rule applies to verb tense.
If we express a verb in one tense, such as the present continuous as in "I am having a good day", the balance of the argument should normally be presented in the same tense. This is not a strict rule, because there will be situations in which a finding will relate a historic situation in the lead sentence, while the discussion relates an assessment that is in the present tense.
It is reasonable to say that within a sentence changes in tense will generally create confusion, unless separated by a conjunction. For example:
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures were not obtained at all times."
Not
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures are not obtained at all times."
The following, however, would be acceptable, because the first part states a continuous state while the latter part describes an historic observation relating to the first situation:
"In Antarctic Division wire transfer requests are accepted via e-mail and customer signatures were not obtained at all times."
Agreement of subject and verb: A singular subject demands a singular verb; a plural one demands a plural verb. Many such problems are caused by long sentences overloaded with adjectives and subordinate clauses where the subject is separated from its verbs. This is another reason for keeping sentences short.
Sometimes the rule is not immediately obvious, such as in the case of "none": "none were" should be "none was" (none = not one or no one).
'''Example:''' ''None of us is perfect.''
==Confusing Words==
These words are often confused
* Affect (to impact upon, to assume) / effect (to bring about a change in)
* Object (the purpose)/ objective (the point of an exercise - usually military)
* Idol (a religious artefact, or object of worship) / Idyll (an imaginary ideal, or pastoral setting) / Idle (lazy, not in motion)
* Whom (the objective form of the relative pronoun) / who (the subjective form of the relative pronoun)
===A note about affect & effect===
A frequent source of error is confusion in the use of the similar-sounding words affect and effect (and their forms affected and effected), and of continual and continuous.
A cause for confusion is that affect is always a verb while effect can be either a noun or a verb. Both continual and continuous are adjectives.
Affect is a verb in the sense of being to influence. Effect as a verb means to bring about; as a noun it is equivalent to the word result.
The following represent correct usage.
Examples:
* Errors in computing affected the accuracy of the result.
* The effect of errors in computing was to produce an inaccurate result.
* Smoking cigarettes may affect your lungs.
* Giving up smoking had no effect on her general health.
* I didn’t finish the report because of continual telephone interruptions.
* Lights are left on in traffic tunnels to provide continuous illumination.
===A note about "due to"===
"Due to" is often used in the sense of through, because of or owing to. Mostly those alternatives are to be preferred, but it is correct to use due to in the sense of being attributable to.
'''Example:''' ''The plane crash was due to bad visibility.''
Don’t rely on your computer’s spellchecker for advice on grammar or correct spelling. Some systems are misleading. For example, you may be advised to change personal to personnel (or the other way round).
===A note about who & whom===
"Captain Kirk is the man whom the federation pays to fly the Enterprise." (Whom is the object of pays - the pronoun affected by the action of payment.)
And
"Captain Kirk is the man who we think flies the Enterprise." (Who is the subject of flies, not the object of think).
==Punctuation==
Punctuation matters.
* "What is this thing called love?" (As in: Let me count the ways...)
* "What! Is this thing called love?" (As in: Let me out of here...)
* "What is this thing called, love?" (As in: OMG! You are not coming near me with that!)
===Comma===
Used when essential for clarity or to indicate a small interruption in continuity of thought. Short sentence construction reduces the need for commas.
===Semicolon===
Using a semicolon indicates a pause greater than a comma but less than a colon or full stop. Often a semicolon helps to alert the reader to an alternative or compensating thought.
'''Example:''' ''The risk of lost muffins was high; however, quick action averted this crisis.''
Semicolons should be used at the end of each line in a series of bullet points, as an alternative to commas (see below).
'''Example:'''
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months;
* Schedule extra training for these and permanent staff;
* Upgrade software in the Biscuit Dispensing Machine;
* Simplify the standard form used for requisitioning for biscuits from the kitchen from ten pages to five; and
* Remove the requirement for VP Supply, VP HR, and CEO counter signing of all biscuit requisitions.
</em>
===Colon===
The colon is used to introduce a quotation, summary, conclusion or list of bullet points (as in the example above); or to introduce a list within a sentence.
'''Example:'''
''The report contains the following sections: employment, training, promotion, legal compliance, relations with other departments.''
===Full stop===
(Period in U.S. usage)
As well as indicating the end of a sentence, full stops are used in some abbreviations. It has become common for full stops to be omitted from word abbreviations. We counsel against that style: with the plethora of acronyms and technical jargon in today's language, using the full stop to signal that a word is an abbreviation of a possibly familiar word, rather than a technical term unknown to the reader, adds to clarity.
Where a bulleted list includes points that have more than one sentence, it is preferable to separate the points with full stops, not semi-colons as set out in the previous example.
Example:
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months. Qualifications include large appetites and general slothfulness. It is estimated that salaries will be approximately $13,000 per month each plus biscuits.
* Schedule extra training for these and permanent staff. It is anticipated the training officer will need to allocate three hours weekly to the task.
* Upgrade software . . . (etc)
</em>
Note that where a full stop is used in a dot-point list, no conjunction is used to join the last two items.
Regardless of which dot-point separator is chosen, it MUST be used consistently throughout the list and, ideally, the document.
===Hyphen===
General usage previously demanded that a hyphen be used if a prefix or suffix had the same letter as the word to which it was attached. So cooperate and coordinate generally were spelt co-operate and co-ordinate; hyphens in these instances are unnecessary. While reinforce and react are other examples where hyphens are not needed, sometimes a hyphen provides a warning that a word should not be read as a single syllable (e.g. re-use). Words formed by using the prefix non- should nearly always be hyphenated (e.g. non-compliant, non-aligned) as with some words prefixed by pre- (e.g. pre-existing).
===Apostrophe===
Used to indicate possession or the omission of letters in a contraction.
'''Examples'''
<em>
* Bill’s car was taken to the wreckers.
* Bill hasn’t had time to replace his car yet.
</em>
There is often confusion about its and it’s. The simple test is whether the construction of a sentence means it is (or it has etc). If so, it’s is a contraction and needs an apostrophe; if not, its is a pronoun and needs no apostrophe. (Warning: Don’t get fooled by some computer spellchecking systems which get this wrong.)
A rough rule of thumb: if we are using "it" in the possessive sense (as in "its red tyre"), leave out the "'".
'''Examples'''
<em>
* It’s been a long time between drinks.
* The engine was tuned but its vibration wasn’t greatly reduced.
</em>
===Ellipses===
An ellipsis indicates that words have been omitted from a quotation and is represented by three full stops separated by spaces.
'''Example'''
''Now is the time . . . to come to the aid of the party.''
===Quote marks===
These should not be used for emphasis. Use bold type or italic instead. Use quotation marks only when you are quoting or, after very long consideration of alternatives, when you are using a word or phrase you consider less than ideal for the situation.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
{{BackLinks}}
</noinclude>
Managing Risk in Mergers & Acquisitions - A Success Strategy
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2007 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
THIS ARTICLE IS NOT YET COMPLETE
=Pre-Merger Actions=
==Pre-merger Requisites==
* Beyond Financial Due-diligence (history & forecast)
** Financial,
** Legal,
** Cultural,
** Infrastructure, etc
* Include the cost of integration (including IT) in the forecasts
* Understand the financial structure, performance drivers and debt levels
* Understand the hidden control & decision relationships (why the acquired business really works)
* Understand all the stakeholders and implied or expressed service agreements
* Understand the meaning of merger success (in this context and for both parties)
* Agree the merger strategy (on both sides of the table)
* Don’t kill it during negotiation (greed is not good in this case)
==Bishop’s Stakeholder Communities Model==
===Analysing Strategy, Culture & Processes===
We see a business or business unit as having only activities designed to service these communities. Some processes exist purely to foster community interaction and membership; others are designed for services the community needs, like payroll, leave applications, advertisements, policy creation, complaints, help, and performance information and dissemination. With a little thought and consistent application the model proves both universal and scalable. You may use this model freely as long as the original author is always credited.
A business consists only of stakeholder communities:
<table>
<tr>
<td>
# Workforce
## Employees
## Contractors
# Suppliers
# Partners
## Business network
## Cooperative
# Customers
## Pay for goods & service
# Clients
## Receive goods & service
# Governance
## Regulators
## Board
## Senior exec
# Government
# Wealth / Enterprise Custodians
## Asset managers
## Treasury, equipment, IP
# The Public
## The ultimate source & influence on all other stakeholders
</td>
<td>
[[Image:BishopsStakeholderCommunityModel.png]]
</td>
</tr>
</table>
=Post Merger Actions=
==Introduction==
* Understand the required degree of integration for the intended merger outcome
* Assess and monitor merger & integration risk
** Including: triggering events, consequences, remediation, responsibility, escalations
** Consider carefully the role of internal & external brands
* Empower the merger from the top
** Establish a merger or integration steering committee
*** Comprising board + stakeholder executive (include IT)
* Establish an integration manager / office
** Assemble the right-skilled integration team
** Focus Internal PR on bonding and service crossflow (not happy sheets)
** Establish a specific IT integration/interfacing advisory panel include business leaders
** Establish an integration ‘help-desk’ & communicate its existence
* Re-Perform cultural due diligence (where high integration exists)
* Perform targeted redundancies early & together – then tell the team it is over
* Revise Management Performance Reporting
** Target at the required integration degree
* Implement an integration strategy
** Work in many short (100 day) projects
* Implement a merger tracking programme
** Defined performance measures with targets (automate)
** Risk & remediation managed (automate)
** Progress & outcome communications
* Monitor progress and revise strategy
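The "merger tracking programme" point above (defined performance measures with targets, automated) can be illustrated with a minimal sketch. The measure names, target values and pass/fail logic below are hypothetical examples, not part of any BPC toolset; the sketch simply shows one way to hold measures with targets and flag those off track:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """A merger-tracking performance measure with a defined target."""
    name: str
    target: float            # agreed target value
    actual: float            # latest observed value
    higher_is_better: bool = True

    def on_track(self) -> bool:
        # A measure is on track when the actual meets its target,
        # in whichever direction counts as "better" for that measure.
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

# Hypothetical scorecard entries, for illustration only.
scorecard = [
    Measure("customer retention %", target=95.0, actual=92.5),
    Measure("integration budget overrun %", target=5.0, actual=3.1,
            higher_is_better=False),
    Measure("key staff retained %", target=90.0, actual=96.0),
]

# Measures needing escalation to the steering committee.
off_track = [m.name for m in scorecard if not m.on_track()]
```

A real programme would feed `actual` from reporting systems and attach each measure to a risk (R1-R10), a remediation owner and an escalation path, but the target/actual comparison above is the core of the automation.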
==Empower from the Top==
Weber (1996) concluded that merger successes were generally CEO-led, by CEOs who:
* Dedicate executive time and focus
* Put together a leadership team to drive it
* Focus management attention on formal success factors
* Create a sense of human purpose and direction
* Model desired behaviour and ‘rules of the road’
==Distilling the Risks==
(Weber (1996) & Bishop)
# Is the combination achieving financial and operational goals? (R1)
# Are schedules on target and are changes being implemented effectively? (R2)
# Do employees understand and support the need for change? (R3)
# What is the effect on people’s well-being and esprit de corps? (R4)
# Are managers at all levels taking steps to minimise negative reactions and build positive feelings? (R5)
# Are productivity or work quality being affected? (R6)
# Do people understand their new roles and what is expected of them? (R7)
# Are client and staff complaint levels stable or dropping? (R8)
# Is the IT and Business Process value map stable or declining? (See the value map below for an example.) (R9)
# Is the post-merger integration investment budget on track? (R10)
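The ten tracking questions above can be treated as a minimal risk register, with any unfavourable answer flagged as an open risk for remediation. The following is an illustrative sketch only; the review answers below are hypothetical.

```python
# Illustrative sketch: the ten merger-tracking questions (Weber (1996) & Bishop)
# expressed as a minimal risk register. The answer values are hypothetical.
risk_register = {
    "R1": "Is the combination achieving financial and operational goals?",
    "R2": "Are schedules on target and are changes being implemented effectively?",
    "R3": "Do employees understand and support the need for change?",
    "R4": "What is the effect on people's well-being and esprit de corps?",
    "R5": "Are managers at all levels taking steps to minimise negative "
          "reactions and build positive feelings?",
    "R6": "Are productivity or work quality being affected?",
    "R7": "Do people understand their new roles and what is expected of them?",
    "R8": "Are client and staff complaint levels stable or dropping?",
    "R9": "Is the IT and Business Process value map stable or declining?",
    "R10": "Is the post-merger integration investment budget on track?",
}

# Hypothetical periodic review: True means the question was answered favourably.
review = {code: True for code in risk_register}
review["R3"] = False  # e.g. employees do not yet support the change
review["R9"] = False  # e.g. the value map is declining

# Any unfavourable answer becomes an open risk requiring remediation.
open_risks = sorted((code for code, ok in review.items() if not ok),
                    key=lambda code: int(code[1:]))
```

Automating a review of this kind (as the tracking-programme bullets above suggest) keeps the scorecard current without relying on ad hoc reporting.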
==The IT and Business Process Value Map==
$NTV – Net Time Value (of the net contribution over the life of an IT system)
This table can be run at the business process, business unit, and other levels.
'''Do not underestimate the impact of IT issues.'''
{| class="wikitable"
! !! BP1 !! BP2 !! BP3 !! BP4 !! Total
|-
! IT Sys1
| $NTV || $NTV || $NTV || $NTV || $TNTV
|-
! IT Sys2
| $NTV || $NTV || $NTV || $NTV || $TNTV
|-
! IT Sys3
| $NTV || $NTV || $NTV || $NTV || $TNTV
|-
! IT Sys4
| || || || || $TNTV
|-
! IT Sys5
| || || || || $TNTV
|-
! IT Sys6
| || || || || $TNTV
|-
! IT Sys7
| || || || || $TNTV
|-
! IT Sys8
| || || || || $TNTV
|-
! IT Sys9
| || || || || $TNTV
|-
! Total
| $TNTV || $TNTV || $TNTV || $TNTV ||
|}
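The aggregation behind this value map is simple arithmetic: each row total is a system's NTV summed across business processes, each column total is a process's NTV summed across systems, and the grand total sums everything. A minimal sketch follows; the system names, process names and dollar figures are all hypothetical.

```python
# Illustrative sketch: aggregating Net Time Value (NTV) figures for the
# IT / business-process value map. All names and figures are hypothetical.
ntv = {
    "IT Sys1": {"BP1": 120.0, "BP2": 80.0, "BP3": 40.0, "BP4": 10.0},
    "IT Sys2": {"BP1": 60.0,  "BP2": 90.0, "BP3": 30.0, "BP4": 20.0},
    "IT Sys3": {"BP1": 15.0,  "BP2": 25.0, "BP3": 70.0, "BP4": 5.0},
}

# Row totals ($TNTV per system): NTV contributed by each IT system
# across all business processes.
system_totals = {sys: sum(bps.values()) for sys, bps in ntv.items()}

# Column totals ($TNTV per process): NTV each business process derives
# from all IT systems.
processes = sorted({bp for bps in ntv.values() for bp in bps})
process_totals = {bp: sum(bps.get(bp, 0.0) for bps in ntv.values())
                  for bp in processes}

# Grand total across the whole map.
grand_total = sum(system_totals.values())
```

A declining column total is an early warning that a business process is losing IT support value through the transition, which is exactly the signal question R9 in the tracking list asks for.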
==Tracking Success – The Scorecard==
* Market measures
* Integration measures
* Operational measures
* Process measures
* Cultural measures
* Financial measures
* Purpose measures
==Role of the Integration Manager==
(Ashkenas & Francis 2001)
* Inject Speed
** Ramp up planning
** Accelerate implementation
** Push for decisions & actions
** Monitor progress & report to CEO/Steering
* Engineer Success
** Identify critical business synergies
** Define and launch 100 day projects
** Orchestrate business process transformation to combine the entities’ best practices
* Make Social Connections
** Serve as a travelling ambassador between locations and businesses
** Serve as a lightning rod for hot issues (& venting)
** Interpret the customs language and culture of both companies
* Create Structure
** Provide flexible integration frameworks
** Mobilize joint teams
** Create key events and timelines
** Facilitate team and exec review
==Engaging The Right Skills==
* Project management
* Risk management
* Process reengineering
* IT interfacing / integrating
* Marketing & Brand management
* Intra-Corporate & Public Relations
* Corporate Governance
* Conglomerate Accounting & Finance
* Legal & HR
==Constraining Risk Events==
'''Setting Strategic Priorities'''
* Address:
** Corporate PR, marketing & sales quickly – these are the company to most external stakeholders
* Focus on retaining key staff
* Focus on customer retention
* Focus on IT change cost
* Do not disconnect business process from IT systems during transition (and understand the ISNTV)
* Forge a new corporate identity – or know why you aren’t
* Focus/ Build on similarities – not differences
* Align capabilities, services and products
* Promote successes and strengths in the acquired entity
* There is no business more important than the firm’s business.
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
Managing Risk in Mergers & Acquisitions - A Review of the Literature
2018-10-29T12:04:05Z
Bishopj
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over the 20-year period up to 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription services or book distributors as appropriate. Please advise us of any discrepancies identified.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not being achieved to the magnitude expected, poor strategic fit, or unexpected changes in market conditions. However, considering financial and strategic factors alone is insufficient to achieve a successful merger or acquisition. Two important human factors in merger and acquisition success, both of which impact on integration, are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Regardless of the method selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Meeks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988) determined the post-acquisition success rate to be around 50 per cent. More recent studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of the mergers and acquisitions studied were successful.
Estimates by Davy et al (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of acquired company
<li> departure of key people in acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact and only 17 per cent created shareholder value.
Many factors are attributable to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997) together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992) found the common factor restricting ability to achieve hoped-for synergies and financial gains to be (p. 1- 2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since the mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers have determined that the cultural incompatibility of the companies involved in a merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> organizational members experience difficulty adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ He also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions; over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, management consultant, believes a number of mergers fail not because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures, or how to overcome the differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through the due diligence process, allowing the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijlsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
In 2002 year to date, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
The prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism, loss of shareholder value, or even dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is also often due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase because of improper managing and strategy, culture differences, delays in communications, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (author unknown)
Successful mergers and acquisitions consist of (p.1):
<ul>
<li> The acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization being developed from the start
</ul>
In the pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
It was found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Success rates of mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have now generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative the mergers and acquisitions be implemented from a financially and legally sound standpoint, as well as a behavioural approach.
Leadership from top-level management is also important for merger success. Weber (1996) found the higher the commitment of the acquired firm’s top management, the higher the effectiveness and the financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover things such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
Lack of due diligence has caused many merger failures. Involves comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Occurs when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than exists when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, therefore constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
A risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis as well as financial due diligence is important. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this can allow the companies to plan for any conflicts that might occur during the integration phase in respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
Problem is often that stakeholders focus on the short-term benefits from mergers and acquisitions such as cost reduction, which results in decisions being made that can sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people possessing these assists in the selection of the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’: (p.145)
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from: (p.163)
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – Integration manager must be able to understand the emotional and cultural issues that are involved in a merger, and recognize that it isn’t just an ‘engineering exercise’, but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews’ (p.202 – 203)
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
6e31233413f04229fc45c9c48f3a9109be21ba02
Managing Risk in Mergers & Acquisitions
0
297
386
385
2018-10-29T12:04:05Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Topics==
* [[Managing Risk in Mergers & Acquisitions - Causes of Success & Failure]]
* [[Managing Risk in Mergers & Acquisitions - A Success Strategy]]
* [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]]
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
975422383bcff83e8288f0207aa4f21d1f209d44
Risk Management
0
298
390
389
2018-10-29T12:04:05Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Risk Management=
==The Risk Management View - How the Machine Looks From the Inside==
Risk Management is a philosophy of management science that sees an organisation's state in terms of the balance of its risk and opportunity portfolio. An organisation in a steady state will experience a rise in the value of opportunities commensurate with a rise in the volume or value of risk. A destructively unstable scenario would be rising risks with falling opportunities, while a rising value of opportunities with steady or falling risks might indicate either a desirable growth pattern or under-achievement of opportunities.
In its most common implementation today, risk management focuses on the risk side of the equation. With this constraint to its domain, risk management sees the universe as a variably dangerous place measured in terms of the likelihood of an event that might be a cause of some consequence that will have a measurable impact. A group of such events with shared impacts is a risk. A risk might have a severity (based on the likelihood of its various triggering events and the worst case scenario of the impacts of those causal triggers) and it might have a value based on the impacts. With or without the value one view of risk management might claim that risk management is about cost minimisation (in terms of anything measurable like money, brand value, social standing, votes won, etc). Minimising cost does not mean minimising risk itself necessarily as other factors may influence that decision such as the risk appetite (willingness to tolerate a level or type of risk), and confidence in the dependent opportunities (not measured in a risk-only model).
The causes and consequences of a risk might be seen, through their likelihood and impact respectively, to imply a particular inherent level of risk. Once we know the risks, we naturally do things to prevent the triggers from occurring, to know when they have occurred, and to respond with corrective action in the event that a risk manifests as an occurrence. We call these things controls or strategies, and would be right to think that they should moderate our value for a given risk in some way.
The risk manager might accommodate this control impact in multiple ways depending on the risk model in use:
#By rating the controls themselves and reducing the total risk rating by applying this value in some way to the inherent risk, yielding a rating of the risk remaining after controls are added - commonly known as the residual risk. The rating of controls and strategies is inexact in itself, and the addition of extra data for control ratings may be no more reliable than the instinctive feel for the control impact required in approach 2. Considerably more rigour may be needed in the understanding of controls than is common in management.
#By rating the likelihood and impact of a risk again AFTER the raters have considered the controls, thus producing two sets of likelihood and impact ratings: inherent and residual. Under this approach the control impact is assumed in the revised likelihood and impact ratings. Controls should not be rated as a risk group, but can be rated separately to inform the residual likelihood and impact ratings. This method provides no way to reliably analyse the cost-effectiveness of individual control strategies from the resulting ratings.
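The two approaches can be sketched in a few lines. This is an illustration only: the 1-5 likelihood/impact scales, the multiplicative rating, and the linear control-effectiveness discount are all assumptions for demonstration, not part of any particular risk model described here.

```python
# Illustrative sketch of the two ways a risk model can accommodate controls.
# Scales and formulas are assumptions for demonstration only.

def inherent_rating(likelihood: int, impact: int) -> int:
    """Inherent risk: likelihood x impact before controls (1-25 on 1-5 scales)."""
    return likelihood * impact

# Approach 1: rate the controls themselves, then discount the inherent rating.
def residual_via_control_rating(likelihood: int, impact: int,
                                control_effectiveness: float) -> float:
    """control_effectiveness in [0, 1]: 0 = no effect, 1 = fully effective."""
    return inherent_rating(likelihood, impact) * (1.0 - control_effectiveness)

# Approach 2: re-rate likelihood and impact AFTER considering the controls.
def residual_via_rerating(residual_likelihood: int, residual_impact: int) -> int:
    return residual_likelihood * residual_impact

inherent = inherent_rating(4, 5)                     # 20
residual_1 = residual_via_control_rating(4, 5, 0.6)  # 8.0
residual_2 = residual_via_rerating(2, 4)             # 8
```

Note how approach 1 keeps the control effectiveness as an explicit, analysable input, whereas in approach 2 the control impact is buried inside the revised likelihood and impact judgments - which is exactly why it cannot support cost-effectiveness analysis of individual controls.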
Together these components describe the essence of the model through which risk managers view the organisation and thence the universe through which the organisation moves. With a risk only view the risk manager sees a health index in terms of risk to the organisation.
==The Risk Management Function - Keeping the Machine Healthy==
The risk manager uses the risk model to view the health state of an organisation. The risk manager improves and protects that state by managing essentially the input variables of the model. This includes:
#facilitating the process of identifying risks and their properties and the process of rating the risks.
#ensuring that every risk has a clear management responsibility attached to it.
#ensuring strategies have been devised to prevent (to some degree) causes where possible, to detect causes when they trigger and to mitigate consequential impacts.
#ensuring executives and governors are properly informed of the risk profile and changes therein over time.
#ensuring the accuracy of the model through actions such as regular review and re-rating of risks, and monitoring of strategy progress.
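The responsibilities above map naturally onto the fields of a risk register entry. A minimal sketch follows; the field names, the prevent/detect/mitigate grouping of strategies, and the 90-day review interval are all illustrative assumptions rather than a prescribed register format.

```python
# Minimal risk-register sketch covering the functions listed above:
# clear ownership, prevent/detect/mitigate strategies, and regular review.
# Field names and the review interval are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Risk:
    name: str
    owner: str  # every risk has a clear management responsibility attached
    prevent: list = field(default_factory=list)   # strategies preventing causes
    detect: list = field(default_factory=list)    # strategies detecting triggers
    mitigate: list = field(default_factory=list)  # strategies softening impacts
    last_reviewed: date = date(2024, 1, 1)

    def due_for_review(self, today: date, interval_days: int = 90) -> bool:
        """Flag risks whose rating should be revisited (regular re-rating)."""
        return today - self.last_reviewed >= timedelta(days=interval_days)

r = Risk("Supplier failure", owner="Ops Manager",
         prevent=["dual sourcing"], detect=["delivery KPI alerts"],
         mitigate=["buffer stock"], last_reviewed=date(2024, 1, 1))
print(r.due_for_review(date(2024, 6, 1)))  # True: overdue for re-rating
```

A real register would add likelihood/impact ratings and reporting views for executives and governors; the sketch only shows how ownership, strategy coverage and review discipline can be made explicit data rather than tacit practice.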
==Articles in this topic:==
Topics covered by articles include:
* [[Risk Management - Introduction]]
* [[BPC RiskManager Software Suite]]
* [[Managing Risk in Mergers & Acquisitions]]
The full category is available from:
[[:Category:Risk Management|Risk Management Topics]]
<noinclude>
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
5b321f41e0e0f3fa2c6fbf0d749aee11df42db35
The Stakeholder Community Network Model
0
288
394
346
2018-10-29T12:04:06Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Re-engineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid-to-late 1990s and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the forerunners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole-of-government reform agenda, the tail end of the TQM experiment, the shift from paper to online workflow both intra- and inter-business, the rise of risk management, the progressive adoption of balanced scorecards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990s), and the rise of online portals, peer-managed corporate forums, application service providers, enterprise-scale ERP and CRM systems, web-based B2B systems and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory premises that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, some communities, because of their survival through multiple generations or over multiple business cycles, are effectively permanent. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last for little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
The more mature the community, the more clearly these roles are differentiated and actively operating. For a community to reach stability over an extended time, it is important that the duties implied in these roles be fulfilled.
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is to interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and may have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community will transition to an internal, contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly defined its identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably, at around the same time, and in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming, with self-nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied them), while others may be agreed through shared experience. Shared or common interests, and the need to improve the predictability of participants in games, will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows handshake agreements may need to be formally agreed and recorded, and individuals will be formally allocated roles and leadership agreed. Along the way as disagreements arise (like who should bat first) dispute resolution mechanisms will be required.
Thus a community has been formed and gradually self-organised. If the initial casual group fails to ever define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree on its meeting place and times of meetings, it will probably not achieve the second game. If it fails to identify its membership and establish an identity (and therefore a brand) and all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of them will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note that, when we look at the permanent communities within many organisations, several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown much beyond the oversight of their founders, splitting into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities
# a custodian of information and provider of infrastructure for communities
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding of infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc., is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed back and feed forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance on the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, the identification and labelling of existing implicit and explicit communities, and of the physical and virtual flows between them, against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems, highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different and potentially divergent simultaneous strategies while marking out a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities over which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but an organisation will also usually need some form of communities formed around skills and interests (at the very least, consuming or providing something) in order to achieve its purpose.
Within each community formed around shared interests or skills is a further set of shared interests, such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities, they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship and by excluding inter-member interaction it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community supply roles span such things as marketing assistance with reviews, discussions and forum participation to support assistance in peer help spaces, and even product improvement and testing such as in software Beta programmes. On the supplier and partner side, supplier side community roles include online supply of certifications, supplier self-registration of details, self selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation. (IE: the groups cannot operate with complete autonomy, as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise wide collection of staff injured in forklift truck accidents, or a worldwide extra net of ECL policy advisers, or suppliers and corporate buyers for raw materials,... or any one of a long list of possible organisation specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So in a sense resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
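The digraph just described can be sketched in a few lines. The community and dependency names below are illustrative assumptions, not part of the model:

```python
# A minimal sketch of a stakeholder community network as a directed graph.
# Community names and functional dependencies are hypothetical examples.

communities = {"Clients", "Customers", "Suppliers", "Workforce"}

# Each edge (provider, consumer, function) records one functional co-dependency,
# e.g. Suppliers provide "materials" on which Workforce depends.
edges = [
    ("Suppliers", "Workforce", "materials"),
    ("Workforce", "Clients", "service delivery"),
    ("Customers", "Workforce", "funding"),
]

# Every endpoint of a co-dependency must be a community in the network.
assert all(src in communities and dst in communities for src, dst, _ in edges)

def dependencies_of(community):
    """Communities the given community depends on, with the function involved."""
    return [(src, fn) for src, dst, fn in edges if dst == community]

print(dependencies_of("Workforce"))  # → [('Suppliers', 'materials'), ('Customers', 'funding')]
```

Because each edge carries its functional attribute, the edge list doubles as the first draft of the service level agreements mentioned above.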
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve in practice. We cannot know, and thus enumerate, every possible stakeholder community, because each resource, and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain, is potentially a community.
Another way of viewing completeness is to test that all members of the stakeholder domain are also members of one or more of the communities in the network. The network is then complete in terms of an organisation's resource coverage.
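This resource-coverage test can be expressed directly. The resource and community names below are hypothetical:

```python
# Illustrative coverage check for "resource completeness": every stakeholder
# resource must belong to at least one community in the network.
# All names here are invented examples, not prescribed by the model.

stakeholder_resources = {"alice", "acme_corp", "bob", "city_council"}

community_members = {
    "Clients": {"alice", "bob"},
    "Suppliers": {"acme_corp"},
    "Governance": {"city_council"},
}

def uncovered(resources, communities):
    """Resources not yet assigned to any community (empty set => complete)."""
    covered = set().union(*communities.values())
    return resources - covered

# This network is complete in resource-coverage terms.
assert uncovered(stakeholder_resources, community_members) == set()
```

Any resource reported as uncovered marks either a missing community or a candidate for the public community discussed later.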
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community, it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community; this is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome such as "quality improvement" or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Enablement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community work flows testing and identifying duplicated communities, duplicated flows, and under resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self-organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
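The duplicated-flow test in the list above can be sketched as follows; the flow tuples are illustrative assumptions:

```python
# Hypothetical sketch of the "duplicated flows" check: two inter-community
# flows with the same provider, consumer and function are flagged as duplicates.
from collections import Counter

flows = [
    ("Suppliers", "Workforce", "materials"),
    ("Suppliers", "Workforce", "materials"),   # duplicated flow
    ("Workforce", "Clients", "service delivery"),
]

duplicates = [flow for flow, count in Counter(flows).items() if count > 1]
print(duplicates)  # → [('Suppliers', 'Workforce', 'materials')]
```

The same counting approach extends naturally to spotting duplicated communities (two communities servicing an identical member set and function).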
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools, and the specialised applications or services needed.
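The elements involved in equipping a community could be recorded in a simple structure. This is a minimal sketch with illustrative field names and values, not a prescribed schema:

```python
# A minimal sketch of the record kept when equipping a new community,
# following the elements listed above. All field names and the example
# values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CommunityCharter:
    name: str
    purpose: str                  # the community's charter statement
    budget: float                 # funding allocated at formation
    performance_measures: list = field(default_factory=list)
    operating_rules: list = field(default_factory=list)   # the "constitution"
    core_members: list = field(default_factory=list)
    shared_tools: list = field(default_factory=list)

wf = CommunityCharter(
    name="Workforce",
    purpose="Engage and support staff and contractors",
    budget=50_000.0,
    core_members=["HR lead", "IT liaison"],
)
print(wf.name, wf.budget)
```

Keeping a uniform record like this across all communities is what makes the registrar role's containing, funding and budgeting work tractable.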
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities among its members, the fewer top-level communities there are, the better the efficiency gains in the entire model will be. Unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen, with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects are likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, we aim to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies, and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and, specifically, the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community into one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and the degree of influence the organisation (the meta-community) has in the specific community being examined. This influence will vary over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees, and then each of these in turn being comprised of even smaller communities - say divisions, units and ultimately individuals we see that the more we subdivide the group the closer we get to a community of one member - the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is him- or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfils all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action, with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors a member of one or more of the other communities. We are therefore encouraged to both define our service offering away from competition and structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lays in the public communities.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are specific differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups sufficient to warrant separate community identities.
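The recursive subdivision described above can be sketched as a community tree; the sub-community names below are illustrative:

```python
# Sketch of recursive community subdivision, using the Workforce example
# above. Division names are invented for illustration.

tree = {
    "Workforce": {
        "Contractors": {},
        "Staff": {"Division A": {}, "Division B": {}},
    },
}

def leaf_communities(node):
    """Flatten a community tree into its most specialised sub-communities."""
    leaves = []
    for name, subs in node.items():
        if subs:
            leaves.extend(leaf_communities(subs))
        else:
            leaves.append(name)
    return leaves

print(leaf_communities(tree))  # → ['Contractors', 'Division A', 'Division B']
```

In the limit this recursion reaches the micro-community of one member discussed in the risk section above.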
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets, might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things they were manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community where they are firmly focused to the customer (note - not client) desires, and materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
A couple of outcomes of the model are immediately apparent from this example. First, the model blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internally or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service-focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus are client portals. In a local government these might take the form of a resident portal, where a city rate payer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, online-store, peer forums, product reviews, newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Road construction authorities are partners who provide accident minimisation services, traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, executive team, performance review committee, inter-government reporting, risk, and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder network model is recursive. It applies organisation wide and through each sub-grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation rather than external to it.
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied results in improved and more cost efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
f54d609f8d240d7ba168fd4d101ce36b7edfe76b
Business Process Reengineering - Process Charting
0
289
396
348
2018-10-29T12:04:06Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its business processes, including mapping data flows, documenting process steps, designing automated and hybrid systems, defining intra- and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagrammatic representation of a set of processes that models the enveloping organisation as if it were a machine with a functional domain encompassing the diagrammed processes.
From a computational perspective, a business process chart is a diagrammatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
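As an illustration of this digraph view, a process chart can be reduced to nodes (objects and events) and directed edges (data-flow connectors). The following minimal Python sketch uses names invented for illustration; it is not part of the BPC method itself.

```python
# A process chart reduced to a directed graph: nodes are chart objects or
# events, edges are data-flow connectors. All names here are illustrative.
chart = {
    "nodes": {
        "Customer": "entity",        # an external entity object
        "Take Order": "process",     # a transforming process object
        "Orders": "data_store",      # a data store object
    },
    "edges": [
        ("Customer", "Take Order"),  # data flows from the customer to the process
        ("Take Order", "Orders"),    # transformed data flows into the store
    ],
}

def successors(chart, node):
    """Nodes reachable from `node` along one data-flow connector."""
    return [dst for src, dst in chart["edges"] if src == node]
```

Under this representation, `successors(chart, "Customer")` yields `["Take Order"]`: the standard digraph traversals (reachability, cycle detection, etc.) become available for analysing the charted process.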
===What are the Characteristics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seamlessly represent both automated and non-automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of a well designed computer programming language - albeit a visual one. These properties include grammatical (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, and the rules for "legal" combinations of these symbols and meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" should be able to be "proven" - at least theoretically. This implies that an algebraic representation (eg predicate calculus) of the charted process should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in process modelling is a complex topic, and one fraught with some potentially counter-productive implied solutions.
For example, a charting system with a unique symbol for every process might achieve completeness, but it would do so at the expense of very high grammatical complexity.
The strength of a process charting approach lies specifically in its ability to categorise, simplify, and standardise our view of a social system. If one measure of language complexity is the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rules that will be required to define their use.
Complexity, under such a measure, is minimised when the number of unique predefined "terms" is minimised. The more restricted the symbol set, however, the more symbols must be used to represent simple, everyday-repeating processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicity of its symbol set (and therefore grammar) can, however, lead to diagrammatic complexity.
Certain objects and their processes occur so frequently that diagrammatic complexity is reduced significantly by expanding the core set of symbols as shown in [[Business Process Reengineering - Chart Key]].
==Charting Example - Electronic Grants Management System==
The example included on the following pages demonstrates the business process charting method as designed by this author and improved with input from clients and staff of BPC over 24 years. The example charts apply the BPC Process Reengineering Model and the BPC Stakeholder Community model in a real world situation: a fully functional government grants management process for whole-of-government administration of government grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
73ac152a4b245146897670bf781740106b14b9ef
Business Process Reengineering - Chart Key
0
290
398
350
2018-10-29T12:04:06Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to reliably define a program simply by documenting a real world process according to the design rules below allows an automated modelling testbed to be constructed from the chart, and then stress tested with different data loads or error types, or checked for deadlocks and bottlenecks, or compared against alternative process designs, etc. Such testing and analysis can be done either manually or via automation.
There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements exist for diagrammatic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - and a few drawn symbols. The full model merely expands on these to provide a richer descriptive set and more analytic detail, with fewer individual diagrammatic elements being required to represent an idea than would otherwise be the case.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and originators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
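As a rough sketch only (the class names are this editor's assumptions, not part of the charting standard), the three symbol classes and one grammar rule - that connectors may only join objects and events, never other connectors - might be modelled as:

```python
from dataclasses import dataclass

class Symbol:
    """Base of all chart symbols."""

@dataclass
class ChartObject(Symbol):
    name: str   # objects originate, transform, store or consume data

@dataclass
class Event(Symbol):
    name: str   # events consume and originate event data

@dataclass
class Connector(Symbol):
    source: Symbol  # data flows from here ...
    target: Symbol  # ... to here

def is_legal(c: Connector) -> bool:
    """Grammar rule: a connector's two ends must be objects or events,
    never another connector."""
    return all(isinstance(end, (ChartObject, Event))
               for end in (c.source, c.target))
```

A consistent grammar of this kind is what allows a chart to be checked mechanically, in the same way a compiler rejects an ill-formed program.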
===The importance of Data===
The life blood of the process diagram (or description) is "data". It is data that flows through the connectors joining event or object to event or object. Data is created when an event fires, or when a data origination object manufactures or otherwise supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand, or an electronic blip with a voltage value that excites or inhibits the recipient proportionately.
Data is infinitely divisible, immutable and transformable.
Like energy, data can be neither created nor destroyed across the entire universe of processes, but within the context of any subset of processes less than the infinite set of all possible processes, data can be originated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document data store, it results in a blank sheet displaying written or image data. In a manufactured item "data store" it results in the transformation of petrochemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors (lines or arrows).
* All objects (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself; while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another, transforming and being transformed by the vessels in which it is stored. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item is also a data store, containing the end result of multiple processes, each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weights and fuzzifiers (mainly used for neural and Bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, but not mandated - connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of an object is globally unique, and therefore has the same definition everywhere it appears.
* All objects can be containers and as such may be "drilled through" to their content
* A process object may be a "map" (transformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock, or trigger and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything, that is not a connector or event, is an object of one type or another - including the organisation itself.
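The asynchronous firing rule above (a process executes when every required inflow has data present and no connected event blocks it) can be sketched as follows. The queue-based representation and all names are assumptions made for illustration, not part of the charting method:

```python
def can_fire(inflows, blocking_events):
    """True when every required inflow queue holds data and no connected
    event currently inhibits the process (the asynchronous firing rule)."""
    return all(inflows.values()) and not blocking_events

def fire(inflows, transform):
    """Consume one datum from each inflow and return the transformed output,
    or None if the process cannot yet execute."""
    if not can_fire(inflows, set()):
        return None
    consumed = {name: queue.pop(0) for name, queue in inflows.items()}
    return transform(consumed)

# A hypothetical 'approve invoice' process with two required inflows.
flows = {"invoice": ["INV-01"], "budget_ok": []}
print(can_fire(flows, set()))   # False - budget data not yet present
flows["budget_ok"].append(True)
result = fire(flows, lambda d: ("approved", d["invoice"]))
print(result)                   # ('approved', 'INV-01')
```

This data-driven, concurrent execution model is what lets a chart double as a testbed: feeding different data loads into the inflow queues exercises the network without any change to the chart itself.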
===Object Hierarchy===
There is an implied object-as-container hierarchy (although it is not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled as containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however, better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and are therefore potentially unreliable.
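The rough rule of thumb above can be encoded as a simple permission table. Note that this sketch deliberately encodes only the default rule, so an "intelligent" data-store would need its entry relaxed; the kind names are illustrative assumptions:

```python
# Default containment rule of thumb (illustrative names only). Entities
# and processes may contain any object kind; data-stores normally
# contain only other data-stores.
ALLOWED_CHILDREN = {
    "entity":     {"entity", "process", "data_store"},
    "process":    {"entity", "process", "data_store"},
    "data_store": {"data_store"},
}

def containment_ok(parent_kind, child_kind):
    """True if `child_kind` may be nested inside `parent_kind` by default."""
    return child_kind in ALLOWED_CHILDREN.get(parent_kind, set())
```

A chart validator could warn (rather than reject) when a nesting falls outside this table, reflecting the text's point that the hierarchy guides rather than mandates.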
===Entities and Entity Groups===
Notionally, every process, can have a controlling entity (particularly where a person is actually doing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them), so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connecting to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, call centre staff pool, a community, etc. Each member of the entity group is inter-changeable for each other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group such as C-level executives in a company entity, or administration staff in a stakeholder community.
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2a16cb2e0b8c5acd961534a7b1bbbfc19b9883c1
Managing Risk in Mergers & Acquisitions - Causes of Success & Failure
0
291
400
376
2018-10-29T12:12:38Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2010 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
For the purposes of this article we will use the terms merger, acquisition and M&A interchangeably for the general activity of conducting a merger or acquisition of one legal business entity by another. The discussion will focus on the M&A activities between distinct legal entities rather than business units within a legal entity, as the issues in the latter case are fundamentally different from those in the former.
Strictly speaking, a merger differs from an acquisition in that in an acquisition one entity assumes control and absorbs another entity, usually expunging the acquired entity's operational distinctiveness. In a merger two or more entities join their business and control structures in a manner that delivers some level of shared control and business profile. In reality, the actual outcomes are rarely purely those of an acquisition or a merger - regardless of the original intentions. The act of acquiring or merging almost always results in irrevocable cultural and operational change for all entities involved - not just the entity acquired.
For this reason, and for reasons that will become apparent later on, we shall treat both activities as essentially the same.
Irrespective of the rhetoric for the merger, in order to succeed it is critical for the parties to the merger (and particularly the dominant party) to understand clearly why they are really merging. Typical reasons for merging include (in no particular order):
* Economies of scale through larger productive capacity or ability to share services
* Vertical integration of productive capacity or the supply chain
* Market share / elimination of direct or indirect competition
* Securing supply
* Asset acquisition or stripping
* Strategic hedging through addition of counter cyclical products to the group mix
* Acquisition of access to Intellectual Property
* Geographic expansion or access to markets with entry barriers
* Accumulation of complementary product/service sets
* Suppression of emerging product line / Intellectual Property threats
* Acquisition of customers
Not all of these motivations will pass traditional measures of success such as "improved productivity" or "staff retention" - as clearly in a number of these cases the underlying purpose of the merger has nothing to do with establishing a bigger, better, more efficient business - just a safer business environment.
If your purpose is merely to eliminate a competitor, acquire their IP, strip their assets, etc., much of the discussion in this paper will be of limited applicability to your situation. Your objectives are met if the price you pay for the acquisition and business wind-up is less than what you gain in return. If your purpose is to gain productivity improvement, economies of scale or complementary product mix outcomes, and to retain as much of the acquired (or junior partner's) business / delivery capability as possible, then this paper is relevant to your circumstances.
=M&A - The State of the Industry=
==What Measure Success?==
The most obvious outcome of any M&A is, prima facie, the elimination of an actual or potential competitor from the competitive mix.
In 1999 KPMG published a study of merger outcomes over the preceding 10 years. The study identified that 75% to 83% of mergers fail, where failure was measured by lower productivity, labour unrest, higher absenteeism, loss of shareholder value or even dissolution of companies.
This and other studies highlight a central question in determining the strategy for a successful merger - what is the basis for measuring the success of an M&A project?
{| border="1"
! Success Measure !! Survey Outcome !! Year of Study
|-
| Achievement of anticipated purpose || 30-45% || 1997
|-
| Achievement of strategic or financial objective || <20% || 1983, 1991, 1994
|-
| Preserve or enhance book value || 25%-45% || 1988, 1999
|-
| Enhance shareholder value || 17% || 1995
|-
| Preserve or improve NPAT || <50% || 1996, 1999
|-
| Preserve or improve productivity || <25% || 1988, 1999
|-
| Preserve strike, absenteeism and accident levels || <50% || 1977, 1981, 1999
|-
| Financially advantageous in long term || 20-50% || 1978, 1988, 1999
|-
| Financially advantageous in short term || 50% || 1996
|}
A summary of the conclusions from a number of these studies can be found in [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]]
It is clear from the range of studies and the span of years they cover that successful mergers are distressingly and consistently unlikely - at least with respect to these measures of success. A merger, like life, is not a dress rehearsal. Unfortunately, as most executives go through a merger only rarely, mistakes are common, and the first time you do it, it will be for real. It is therefore important to learn, as far as possible, from the conclusions of others that have gone before - because the odds of success are not in your favour.
The Zweig (1995) and KPMG (1999) studies of merger outcomes both found that only 17% of mergers resulted in an enhancement of either shareholder value or key performance drivers. Perhaps of even greater concern, Zweig found that shareholder value was actually destroyed in 53% of cases, and KPMG determined that the performance drivers actually weakened in 78% of cases:
<table>
<tr>
<td>
[[image:Zweig95_M&A_ImpactOnShareValue.jpg]]
</td>
<td>
[[image:KPMG99_M&A_ImpactOnKPI.jpg]]
</td>
</tr>
</table>
=Why Merge=
Studies of merger outcomes in terms of only classical performance or direct shareholder value enhancement imply a need for successful integration of the pre-merger businesses. This assumption does not capture the total range of success measures that might properly apply to merger motivations (regardless of the public rhetoric of the entities involved). The need for successful integration of the pre-merger businesses depends on the true underlying motivation for the merger:
[[Image:MnA WhyMerge.jpg]]
The fundamental driver for measuring post-merger success is to first clearly define the reason(s) for the merger. As successful integration of the merged businesses is possibly among the hardest of the outcomes to achieve, it is essential to map the requirement for this strategy to the reason for the merger. Ordered from least to greatest need for integration, typical merger motivations might include:
# Eliminate a competitor
# Hedge market cycles
# Acquire brand
# Enter a geographic market
# Integrate vertically
# Opportunistic
# Grow market share
# Cut costs – economies of scale
# Grow size (defensive)
# Acquire technical or management expertise
=Reasons For Failure=
==A Summary of the Recent Studies==
Integration of the of the pre-merger businesses in the post-merger entity is a precursor to success in (possibly) the majority of merger strategies. From a comprehensive review of the literature we have identified the the most common reasons sited for integration failure, (with two added by the author from direct (anecdotal) experience).
{| border="1"
! !! Reason !! %
|-
|1 || Poorly planned and managed integration || 100
|-
|2 || Neglect of existing business due to the attention being paid to the acquired business || 68
|-
|3 || Underestimating the depth & pervasiveness of human issues triggered by the merger || 50
|-
|4 || Loss of key staff in acquired business || 50
|-
|5 || Demotivation of employees of acquired business || 50
|-
|6 || Underestimating problems of skill transfer || 34
|-
|7 || Selecting the wrong partner || 34
|-
|8 || Cultural incompatibility || 17
|-
|9 || Delayed decisions due to breakdown of responsibilities, delegations & authority || 17
|-
|10 || Too much focus on doing the deal - not enough on to integration planning & management || 17
|-
|11 || Insufficient research (due diligence) into the acquired business || 17
|-
|12 || Paying the wrong price or at the wrong time || 17
|-
|13 || Buying for the wrong reasons || 17
|-
|14 || Incompatible business and IT systems || JB
|-
|15 || Doomed by negotiation || JB
|-
|}
IT systems are likely to increase in importance because in the last 10-15 years they have become more entwined with business models & processes than was possibly the case when some of these studies on which this data is based were conducted, and in larger organisations can represent a key (and diferentiating) part of the businesses infrastructure investment. Incompatibility can be a critical financial and technical barrier to successful integration.
The last point emphasises that where one party in the pre-merger negotiation wins, the merged entities generally lose.
==Failure in a Nutshell==
Where business integration is a key ingredient of the post-merger mix, the studies allow us to identify the top 5 risks of that result in merger failure:
# Integration poorly planned and managed
# Underestimated cultural & human risks
# Loss of key success enablers (eg staff)
# Inaccurate financial due diligence
# Neglecting current business
As these studies examined mergers that actually completed (i.e. the tacke over survived the acquisition process), the studies ignored a common reason for merger failure: That of non-completion. Reasons for non completion might include:
# Legal (non participating competitor) or regulatory intervention
# Unacceptable risks, asset/liability valuations or cultural issues emerging during sue-dilligence
# Exogenous market shifts during the merger process (such as changes in market conditions of demand, financing, etc.)
# Death or departure of key personnel from the target entitites
# Excessive regulatory or judicial hurdles causing the process to extend unacceptably for the participants
# Failure, or inability to offer sufficient compensation to the vendors
# Gazumping by competitor acquirers
=Reasons for Success=
Conversely both formal studies and deductive reasoning allows us to identify the key reasons for successful mergers.
* No need to achieve an integrated business, and "right" price paid
* Nature of post merger structure (vertical, conglomerate or geographic, etc)
* Clearly enunciated & communicated direction
* Acquisition-specific & flexible integration strategy
* Clear decision structure and role definitions
* A sense of urgency and outcome ownership
* Compatible business systems
* Compatible business cultures
* Compatible accounting practices
* Integration ready culture
* Commonality of merger goals
* Active risk management strategy
* Actively managed, tracked & resourced integration project
* Minimised debt service load
* Pre-existing partnering or cohabitation
=Further Reading=
In our next article [[Managing Risk in Mergers & Acquisitions - A Success Strategy]], we examine how to apply this knowledge to create a successful merger strategy.
A cross linked review of the of the literature over a span of 20 years is available at [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
c05ec1f1dcaeb33c93138ca7f37f54140649ecab
472
400
2018-10-29T12:17:43Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2010 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
For the purposes of this article we will use the terms merger, acquisition and M&A interchangeably for the general activity of conducting a merger or acquisition of one legal business entity by another. The discussion will focus on M&A activities between distinct legal entities rather than business units within a single legal entity, as the issues in the latter case are fundamentally different from those in the former.
Strictly speaking, a merger differs from an acquisition in that in an acquisition one entity assumes control of and absorbs another entity, usually expunging the acquired entity's operational distinctiveness. In a merger, two or more entities join their business and control structures in a manner that delivers some level of shared control and business profile. In reality, the actual outcomes are rarely purely those of an acquisition or a merger - regardless of the original intentions. The act of acquiring or merging almost always results in irrevocable cultural and operational change for all entities involved - not just the entity acquired.
For this reason, and for reasons that will become apparent later on, we shall treat both activities as essentially the same.
Irrespective of the public rhetoric surrounding the merger, in order to succeed it is critical for the parties to the merger (and particularly the dominant party) to understand clearly why they are really merging. Typical reasons for merging include (in no particular order):
* Economies of scale through larger productive capacity or ability to share services
* Vertical integration of productive capacity or the supply chain
* Market share / elimination of direct or indirect competition
* Securing supply
* Asset acquisition or stripping
* Strategic hedging through addition of counter cyclical products to the group mix
* Acquisition of access to Intellectual Property
* Geographic expansion or access to markets with entry barriers
* Accumulation of complementary product/service sets
* Suppression of emerging product line / Intellectual Property threats
* Acquisition of customers
Not all of these motivations will pass traditional measures of success such as "improved productivity" or "staff retention" - as clearly in a number of these cases the underlying purpose of the merger has nothing to do with establishing a bigger, better, more efficient business - just a safer business environment.
If your purpose is merely to eliminate a competitor, acquire their IP, strip their assets, etc., much of the discussion in this paper will be of limited applicability to your situation. Your objectives are met if the price you pay for acquisition and business wind-up delivers these outcomes for less than you gain in return. If your purpose is to gain productivity improvements, economies of scale or complementary product mix outcomes, and to retain as much of the acquired (or junior partner's) business / delivery capability as possible, then this paper is relevant to your circumstances.
=M&A - The State of the Industry=
==What Measures Success?==
The most obvious outcome of any M&A is prima-facie the elimination of an actual or potential competitor from the competitive mix.
In 1999 KPMG published a study of merger outcomes over the preceding 10 years. The study identified that 75% to 83% of mergers fail, where failure was measured by lower productivity, labour unrest, higher absenteeism, loss of shareholder value or even dissolution of the companies.
This and other studies highlight a central question in determining the strategy for a successful merger - what is the basis for measuring the success of an M&A project?
{| border="1"
! Success Measure !! Survey Outcome !! Year of Study
|-
| Achievement of anticipated purpose || 30-45% || 1997
|-
| Achievement of strategic or financial objectives || <20% || 1983, 1991, 1994
|-
| Preserve or enhance book value || 25-45% || 1988, 1999
|-
| Enhance shareholder value || 17% || 1995
|-
| Preserve or improve NPAT || <50% || 1996, 1999
|-
| Preserve or improve productivity || <25% || 1988, 1999
|-
| Preserve strike, absenteeism and accident levels || <50% || 1977, 1981, 1999
|-
| Financially advantageous in the long term || 20-50% || 1978, 1988, 1999
|-
| Financially advantageous in the short term || 50% || 1996
|}
A summary of the conclusions from a number of these studies can be found in [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
It is clear from the range of studies, and the span of years they cover, that successful mergers are distressingly and consistently unlikely - at least with respect to these measures of success. A merger, like life, is not a dress rehearsal. Unfortunately, as most executives go through a merger only rarely, mistakes are common, and the first time you do it, it will be for real. It is therefore important to learn, as far as possible, from the conclusions of others that have gone before - because the odds of success are not in your favour.
The Zweig (1995) and KPMG (1999) studies of merger outcomes each found that only 17% of mergers resulted in an enhancement of shareholder value or key performance drivers respectively. Perhaps of even greater concern, Zweig found that shareholder value was actually destroyed in 53% of cases, and KPMG determined that the performance drivers actually weakened in 78% of cases:
<table>
<tr>
<td>
[[image:Zweig95_M&A_ImpactOnShareValue.jpg]]
</td>
<td>
[[image:KPMG99_M&A_ImpactOnKPI.jpg]]
</td>
</tr>
</table>
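Taken together, these figures can be turned into a crude expected-value calculation. The sketch below is illustrative only: the 17% / 53% / 30% outcome split follows the Zweig figures above, but the per-outcome value changes (+20%, -15%, 0%) are hypothetical placeholders, not figures from either study.

```python
# Illustrative expected-value calculation using the Zweig (1995) outcome split:
# 17% of mergers enhanced shareholder value, 53% destroyed it, and the
# remaining 30% are treated here as leaving it roughly unchanged.
# The magnitudes (+20%, -15%, 0%) are HYPOTHETICAL placeholders.
outcomes = [
    (0.17, +0.20),  # value enhanced
    (0.53, -0.15),  # value destroyed
    (0.30,  0.00),  # value unchanged
]

# Expected change = sum of probability * value delta over the outcomes.
expected_change = sum(p * delta for p, delta in outcomes)
print(f"Expected shareholder value change: {expected_change:+.1%}")
```

Even with a generous assumed upside, a 53% chance of value destruction drags the expected outcome negative under these placeholder numbers - which is the quantitative core of the warning above.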
=Why Merge=
Studies that measure merger outcomes only in terms of classical performance or direct shareholder value enhancement imply a need for successful integration of the pre-merger businesses. This assumption does not capture the full range of success measures that might properly apply to merger motivations (regardless of the public rhetoric of the entities involved). The need for successful integration of the pre-merger businesses depends on the true underlying motivation for the merger:
[[Image:MnA WhyMerge.jpg]]
The fundamental driver for measuring post-merger success is to first clearly define the reason(s) for the merger. As successful integration of the merged businesses is possibly among the hardest of the desired outcomes to achieve, it is essential to map the requirement for this strategy to the reason for the merger. Ordered from least to highest need for integration, typical merger motivations might include:
# Eliminate a competitor
# Hedge market cycles
# Acquire brand
# Enter a geographic market
# Integrate vertically
# Opportunistic
# Grow market share
# Cut costs – economies of scale
# Grow size (defensive)
# Acquire technical or management expertise
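The ordering above can be sketched as a simple lookup table. This is an assumption-laden illustration: the 1-10 scores merely encode the list's ordering, and taking the maximum score across a deal's stated motivations is our illustrative heuristic, not something drawn from the studies.

```python
# Each motivation is scored from 1 (least need for post-merger integration)
# to 10 (highest need). The scores simply encode the list's ordering above;
# they are not survey data.
INTEGRATION_NEED = {
    "eliminate a competitor": 1,
    "hedge market cycles": 2,
    "acquire brand": 3,
    "enter a geographic market": 4,
    "integrate vertically": 5,
    "opportunistic": 6,
    "grow market share": 7,
    "cut costs (economies of scale)": 8,
    "grow size (defensive)": 9,
    "acquire technical or management expertise": 10,
}

def integration_need(motivations):
    """Return the highest integration-need score among the stated motivations."""
    return max(INTEGRATION_NEED[m] for m in motivations)

# A deal pursued for brand plus cost synergies inherits the higher need.
print(integration_need(["acquire brand", "cut costs (economies of scale)"]))
```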
=Reasons For Failure=
==A Summary of the Recent Studies==
Integration of the pre-merger businesses in the post-merger entity is a precursor to success in (possibly) the majority of merger strategies. From a comprehensive review of the literature we have identified the most common reasons cited for integration failure (with two added by the author from direct, anecdotal, experience):
{| border="1"
! # !! Reason !! % of studies citing
|-
|1 || Poorly planned and managed integration || 100
|-
|2 || Neglect of existing business due to the attention being paid to the acquired business || 68
|-
|3 || Underestimating the depth & pervasiveness of human issues triggered by the merger || 50
|-
|4 || Loss of key staff in acquired business || 50
|-
|5 || Demotivation of employees of acquired business || 50
|-
|6 || Underestimating problems of skill transfer || 34
|-
|7 || Selecting the wrong partner || 34
|-
|8 || Cultural incompatibility || 17
|-
|9 || Delayed decisions due to breakdown of responsibilities, delegations & authority || 17
|-
|10 || Too much focus on doing the deal - not enough on integration planning & management || 17
|-
|11 || Insufficient research (due diligence) into the acquired business || 17
|-
|12 || Paying the wrong price or at the wrong time || 17
|-
|13 || Buying for the wrong reasons || 17
|-
|14 || Incompatible business and IT systems || JB
|-
|15 || Doomed by negotiation || JB
|-
|}
IT systems are likely to increase in importance: in the last 10-15 years they have become more entwined with business models & processes than was the case when some of the studies on which this data is based were conducted, and in larger organisations they can represent a key (and differentiating) part of the business's infrastructure investment. Incompatibility can be a critical financial and technical barrier to successful integration.
The last point emphasises that where one party in the pre-merger negotiation wins, the merged entities generally lose.
==Failure in a Nutshell==
Where business integration is a key ingredient of the post-merger mix, the studies allow us to identify the top five risks that result in merger failure:
# Integration poorly planned and managed
# Underestimated cultural & human risks
# Loss of key success enablers (eg staff)
# Inaccurate financial due diligence
# Neglecting current business
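The five risks above can be tracked in a conventional likelihood-times-impact risk register. The sketch below is a minimal illustration: the 1-5 scales and the example scores are assumed for demonstration, not drawn from the studies.

```python
# Minimal risk-register sketch for the five failure risks listed above.
# Likelihood/impact scales (1-5) and the example scores are illustrative.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def rating(self) -> int:
        # Classic qualitative rating: likelihood multiplied by impact.
        return self.likelihood * self.impact

register = [
    Risk("Integration poorly planned and managed", 4, 5),
    Risk("Underestimated cultural & human risks", 4, 4),
    Risk("Loss of key success enablers (e.g. staff)", 3, 5),
    Risk("Inaccurate financial due diligence", 2, 5),
    Risk("Neglecting current business", 3, 4),
]

# Rank the register so the worst-rated risks surface first.
for risk in sorted(register, key=lambda r: r.rating, reverse=True):
    print(f"{risk.rating:>2}  {risk.name}")
```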
As these studies examined mergers that actually completed (i.e. the takeover survived the acquisition process), they ignored a common reason for merger failure: non-completion. Reasons for non-completion might include:
# Legal (non participating competitor) or regulatory intervention
# Unacceptable risks, asset/liability valuations or cultural issues emerging during due diligence
# Exogenous market shifts during the merger process (such as changes in market conditions of demand, financing, etc.)
# Death or departure of key personnel from the target entities
# Excessive regulatory or judicial hurdles causing the process to extend unacceptably for the participants
# Failure, or inability to offer sufficient compensation to the vendors
# Gazumping by competitor acquirers
=Reasons for Success=
Conversely, both formal studies and deductive reasoning allow us to identify the key reasons for successful mergers:
* No need to achieve an integrated business, and "right" price paid
* Nature of post merger structure (vertical, conglomerate or geographic, etc)
* Clearly enunciated & communicated direction
* Acquisition-specific & flexible integration strategy
* Clear decision structure and role definitions
* A sense of urgency and outcome ownership
* Compatible business systems
* Compatible business cultures
* Compatible accounting practices
* Integration ready culture
* Commonality of merger goals
* Active risk management strategy
* Actively managed, tracked & resourced integration project
* Minimised debt service load
* Pre-existing partnering or cohabitation
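The success factors above lend themselves to a simple readiness checklist. The sketch below assumes equal weighting purely for illustration; a real assessment would weight factors to suit the specific deal. (The first two list items describe deal characteristics rather than checkable practices, so they are omitted from the checklist.)

```python
# Readiness-checklist sketch built from the success factors listed above.
# Equal weighting is an ASSUMPTION for illustration only.
SUCCESS_FACTORS = [
    "Clearly enunciated & communicated direction",
    "Acquisition-specific & flexible integration strategy",
    "Clear decision structure and role definitions",
    "A sense of urgency and outcome ownership",
    "Compatible business systems",
    "Compatible business cultures",
    "Compatible accounting practices",
    "Integration ready culture",
    "Commonality of merger goals",
    "Active risk management strategy",
    "Actively managed, tracked & resourced integration project",
    "Minimised debt service load",
    "Pre-existing partnering or cohabitation",
]

def readiness(factors_present):
    """Fraction (0.0-1.0) of the listed success factors judged present."""
    present = set(factors_present) & set(SUCCESS_FACTORS)
    return len(present) / len(SUCCESS_FACTORS)

score = readiness(["Compatible business cultures", "Active risk management strategy"])
print(f"Readiness: {score:.0%}")
```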
=Further Reading=
In our next article, [[Managing Risk in Mergers & Acquisitions - A Success Strategy]], we examine how to apply this knowledge to create a successful merger strategy.
A cross-linked review of the literature over a span of 20 years is available at [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]].
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
c05ec1f1dcaeb33c93138ca7f37f54140649ecab
Managing Risk in Mergers & Acquisitions - A Success Strategy
0
295
402
382
2018-10-29T12:12:38Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<noinclude>
=About The Author & The Article=
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] As head of a succession of consulting firms and as a board member, vice chairman and chairman of a variety of entities, the author has participated in a number of mergers and acquisitions both as the dominant and junior partner. Through study and application of the theory, and participation in/responsibility for both successful and unsuccessful mergers he has acquired a detailed practical knowledge of how to make mergers and acquisitions work successfully.
Copyright 1995-2007 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
=Introduction - Why Merge or Acquire?=
THIS ARTICLE IS NOT YET COMPLETE
=Pre-Merger Actions=
==Pre-merger Requisites==
* Beyond Financial Due-diligence (history & forecast)
** Financial,
** Legal,
** Cultural,
** Infrastructure, etc
* Include the cost of integration (including IT) in the forecasts
* Understand the financial structure, performance drivers and debt levels
* Understand the hidden control & decision relationships (why the acquired business really works)
* Understand all the stakeholders and implied or expressed service agreements
* Understand the meaning of merger success (in this context and for both parties)
* Agree the merger strategy (on both sides of the table)
* Don’t kill it during negotiation (greed is not good in this case)
==Bishop’s Stakeholder Communities Model==
===Analysing Strategy, Culture & Processes===
We see a business or business unit as having only activities designed to service these communities. Some processes exist purely to foster community interaction and membership; others deliver the services the community needs, such as payroll, leave applications, advertisements, policy creation, complaints handling, help, and performance information and dissemination. With a little thought and consistent application the model proves both universal and scalable. You may use this model freely as long as the original author is always credited.
A business consists only of stakeholder communities:
<table>
<tr>
<td>
# Workforce
## Employees
## Contractors
# Suppliers
# Partners
## Business network
## Cooperative
# Customers
## Pay for goods & service
# Clients
## Receive goods & service
# Governance
## Regulators
## Board
## Senior exec
# Government
# Wealth / Enterprise Custodians
## Asset managers
## Treasury, equipment, IP
# The Public
## The ultimate source & influence on all other stakeholders
</td>
<td>
[[Image:BishopsStakeholderCommunityModel.png]]
</td>
</tr>
</table>
=Post Merger Actions=
==Introduction==
* Understand the required degree of integration for the intended merger outcome
* Assess and monitor merger & integration risk
** Including: triggering events, consequences, remediation, responsibility, escalations
** Consider carefully the role of internal & external brands
* Empower the merger from the top
** Establish a merger or integration steering committee
*** Comprising board + stakeholder executive (include IT)
* Establish an integration manager / office
** Assemble the right-skilled integration team
** Focus Internal PR on bonding and service crossflow (not happy sheets)
** Establish a specific IT integration/interfacing advisory panel, including business leaders
** Establish an integration ‘help-desk’ & communicate its existence
* Re-Perform cultural due diligence (where high integration exists)
* Perform targeted redundancies early & together – then tell the team it is over
* Revise Management Performance Reporting
** Target at the required integration degree
* Implement an integration strategy
** Work in many short (100 day) projects
* Implement a merger tracking programme
** Defined performance measures with targets (automate)
** Risk & remediation managed (automate)
** Progress & outcome communications
* Monitor progress and revise strategy
==Empower from the Top==
Weber (1996) concluded that successful mergers were generally led by CEOs who:
* Dedicate executive time and focus
* Put together a leadership team to drive it
* Focus management attention on formal success factors
* Create a sense of human purpose and direction
* Model desired behaviour and ‘rules of the road’
==Distilling the Risks==
(Weber (1996) & Bishop)
# Is the combination achieving financial and operational goals? (R1)
# Are schedules on target and are changes being implemented effectively? (R2)
# Do employees understand and support the need for change? (R3)
# What is the effect on people’s well-being and esprit de corps? (R4)
# Are managers at all levels taking steps to minimise negative reactions and build positive feelings? (R5)
# Is productivity or work quality being affected? (R6)
# Do people understand their new roles and what is expected of them? (R7)
# Are client and staff complaint levels stable or dropping? (R8)
# Is the IT / Business Process value map stable or declining? (See the next section for an example.) (R9)
# Is the post-merger integration investment budget on track? (R10)
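The tracking questions above each carry a risk reference (R1–R10), and the article elsewhere recommends automating risk and remediation tracking. A minimal sketch of such a register in Python might look like the following; the class and method names are our own illustration, not part of any BPC product:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: models the article's R1-R10 monitoring
# questions as trackable risk entries with a status and a note trail.

@dataclass
class MergerRisk:
    ref: str                 # risk reference, e.g. "R1"
    question: str            # the monitoring question being tracked
    status: str = "open"     # "open", "mitigating", or "closed"
    notes: list = field(default_factory=list)

class RiskRegister:
    def __init__(self):
        self.risks = {}

    def add(self, ref, question):
        self.risks[ref] = MergerRisk(ref, question)

    def update(self, ref, status, note=""):
        risk = self.risks[ref]
        risk.status = status
        if note:
            risk.notes.append(note)

    def open_risks(self):
        # References of every risk not yet closed, in registration order.
        return [r.ref for r in self.risks.values() if r.status != "closed"]

register = RiskRegister()
register.add("R1", "Is the combination achieving financial and operational goals?")
register.add("R8", "Are client and staff complaint levels stable or dropping?")
register.update("R8", "closed", "Complaints trending down for two quarters")
```

A real implementation would add the triggering events, consequences, remediation owners and escalations the article lists under merger and integration risk.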
==The IT and Business Process Value Map==
$NTV – Net Time Value (of net contribution over the life of an IT system)
$TNTV – Total Net Time Value (row or column total)
This table can be run at the business process, business unit, etc. levels.
DO NOT UNDERESTIMATE THE IMPACT OF IT ISSUES
<table>
<tr><th></th><th>BP1</th><th>BP2</th><th>BP3</th><th>BP4</th><th>Total</th></tr>
<tr><th>IT Sys1</th><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$TNTV</td></tr>
<tr><th>IT Sys2</th><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$TNTV</td></tr>
<tr><th>IT Sys3</th><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$NTV</td><td>$TNTV</td></tr>
<tr><th>IT Sys4</th><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><th>IT Sys5</th><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><th>IT Sys6</th><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><th>IT Sys7</th><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><th>IT Sys8</th><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><th>IT Sys9</th><td></td><td></td><td></td><td></td><td>$TNTV</td></tr>
<tr><th>Total</th><td>$TNTV</td><td>$TNTV</td><td>$TNTV</td><td>$TNTV</td><td></td></tr>
</table>
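The value map above is essentially a system-by-process matrix whose row and column sums give the total net time value ($TNTV). A minimal sketch of that arithmetic, with invented figures purely for illustration:

```python
# Illustrative sketch of the IT / Business Process value map as data:
# rows are IT systems, columns are business processes, and each cell
# holds the net time value ($NTV) that system contributes to that
# process. The figures and the totalling rule ($TNTV as a row or
# column sum) are our assumptions, not values from the article.

ntv = {
    "IT Sys1": {"BP1": 120.0, "BP2": 80.0, "BP3": 40.0, "BP4": 10.0},
    "IT Sys2": {"BP1": 60.0,  "BP2": 90.0, "BP3": 30.0, "BP4": 20.0},
    "IT Sys3": {"BP1": 15.0,  "BP2": 25.0, "BP3": 70.0, "BP4": 90.0},
}

def system_tntv(system):
    """Total NTV one IT system contributes across all business processes (a row total)."""
    return sum(ntv[system].values())

def process_tntv(process):
    """Total NTV one business process receives from all IT systems (a column total)."""
    return sum(row[process] for row in ntv.values())

def grand_tntv():
    """Grand total across the whole map."""
    return sum(system_tntv(s) for s in ntv)
```

Tracking these totals through the transition is one way to watch for the declining value map flagged by risk question R9.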
==Tracking Success – The Scorecard==
* Market measures
* Integration measures
* Operational measures
* Process measures
* Cultural measures
* Financial measures
* Purpose measures
==Role of the Integration Manager==
(Ashkenas & Francis 2001)
* Inject Speed
** Ramp up planning
** Accelerate implementation
** Push for decisions & actions
** Monitor progress & report to CEO/Steering
* Engineer Success
** Identify critical business synergies
** Define and launch 100 day projects
** Orchestrate business process transformation to combined-entity best practice
* Make Social Connections
** Serve as a travelling ambassador between locations and businesses
** Serve as a lightning rod for hot issues (& venting)
** Interpret the customs, language and culture of both companies
* Create Structure
** Provide flexible integration frameworks
** Mobilize joint teams
** Create key events and timelines
** Facilitate team and exec review
==Engaging The Right Skills==
* Project management
* Risk management
* Process reengineering
* IT interfacing / integrating
* Marketing & Brand management
* Intra-Corporate & Public Relations
* Corporate Governance
* Conglomerate Accounting & Finance
* Legal & HR
==Constraining Risk Events==
===Setting Strategic Priorities===
* Address:
** Corporate PR, marketing & sales quickly – these are the company to most external stakeholders
* Focus on retaining key staff
* Focus on customer retention
* Focus on IT change cost
* Do not disconnect business process from IT systems during transition (and understand the ISNTV)
* Forge a new corporate identity – or know why you aren’t
* Focus/ Build on similarities – not differences
* Align capabilities, services and products
* Promote successes and strengths in the acquired entity
* There is no business more important than the firm’s business.
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
8a2e82879b309fbbf8459f35718ec9324f63473e
Managing Risk in Mergers & Acquisitions - A Review of the Literature
0
296
404
384
2018-10-29T12:12:38Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over a 20 year period up to 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription services or book distributors as appropriate. Please advise us of any discrepancies identified.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not achieved to the magnitude expected, poor strategic fit or unexpected changes in market conditions. However, considering financial and strategic factors alone is insufficient to achieve a successful merger or acquisition. Two human factors important to merger and acquisition success, both of which impact on integration, are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Whatever the method selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Meeks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988) determined the success rates post-acquisition to be around 50 per cent. More current studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of the mergers and acquisitions studied were successful.
Estimates by Davy et al. (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of acquired company
<li> departure of key people in acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact and only 17 per cent created shareholder value.
Many factors are attributable to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997), together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992), found the common factors restricting the ability to achieve hoped-for synergies and financial gains to be (p.1-2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since the mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers have determined that the cultural incompatibility of the companies involved in a merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> organizational members experience difficulty adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ He also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions, and over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, management consultant, believes a number of mergers fail not because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures, or how to overcome the differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through the due diligence process, allowing the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijlsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
Year-to-date in 2002, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
The prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism and loss of shareholder value, or even dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is also often due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase, because of improper management and strategy, cultural differences, delays in communications, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (author unknown)
Successful mergers and acquisitions depend on (p.1):
<ul>
<li> The acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization being developed from the start
</ul>
In the pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
Found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Success rates of mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have now generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative that mergers and acquisitions be implemented from a financially and legally sound standpoint, as well as with a behavioural approach.
Leadership from top-level management is also important for merger success. Weber (1996) found that the higher the commitment of the acquired firm’s top management, the higher the effectiveness and financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended that a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover questions such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?’
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
A lack of due diligence has caused many merger failures. Due diligence involves comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Complementary resources occur when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than exists when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, therefore constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
A risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis, as well as financial due diligence, is important. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this allows the companies to plan for any conflicts that might occur during the integration phase with respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
Problem is often that stakeholders focus on the short-term benefits from mergers and acquisitions such as cost reduction, which results in decisions being made that can sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people possessing these traits, assists in the selection of the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’: (p.145)
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from: (p.163)
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – The integration manager must be able to understand the emotional and cultural issues involved in a merger, and recognize that it isn’t just an ‘engineering exercise’ but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews (p.202 – 203)
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over a 20 year period up until 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription services or book distributors as appropriate. Please advise us of any discrepancies identified.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not being achieved to the magnitude expected, poor strategic fit, or unexpected changes in market conditions. However, considering only financial and strategic factors is insufficient to achieve a successful merger or acquisition. Two important human factors in merger and acquisition success, which impact on integration, are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Regardless of the method selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Meeks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988), determined post-acquisition success rates to be around 50 per cent. More recent studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of the mergers and acquisitions studied were successful.
An estimate by Davy et al (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of the acquired company
<li> departure of key people in the acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact and only 17 per cent created shareholder value.
Many factors are attributable to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997), together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992), found the common factors restricting the ability to achieve hoped-for synergies and financial gains to be (p. 1-2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since the mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers have determined that cultural incompatibility of the companies involved in a merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al. (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> difficulties experienced by organizational members in adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ He also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions, and over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, management consultant, believes a number of mergers fail not because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures, or how to overcome the differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through the due diligence process, allowing the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijlsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
As of year-to-date 2002, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
The prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism and loss of shareholder value, or even dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is also often due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase because of improper managing and strategy, culture differences, delays in communications, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (Author unknown)
Successful mergers and acquisitions depend on (p.1):
<ul>
<li> Acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization being developed from the start
</ul>
In the pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
The study found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Success rates of mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have now generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative that mergers and acquisitions be implemented from a financially and legally sound standpoint, as well as with a behavioural approach.
Leadership from top-level management is also important for merger success. Weber (1996) found the higher the commitment of the acquired firm’s top management, the higher the effectiveness and the financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover things such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?’
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
Lack of due diligence has caused many merger failures. Due diligence involves comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Occurs when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than exists when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, therefore constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
A risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis is as important as financial due diligence. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this allows the companies to plan for any conflicts that might occur during the integration phase with respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
Problem is often that stakeholders focus on the short-term benefits from mergers and acquisitions such as cost reduction, which results in decisions being made that can sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people possessing these traits, assists in the selection of the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’: (p.145)
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from: (p.163)
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – Integration manager must be able to understand the emotional and cultural issues that are involved in a merger, and recognize that it isn’t just an ‘engineering exercise’, but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews (p.202 – 203)
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
<noinclude>
==About The Author & This Article==
Rachel Curry, Research Consultant, Bishop Phillips Consulting
This article presents a summary of the literature examining the risks in corporate mergers and acquisitions over a 20 year period up until 2003. It was originally prepared by Rachel Curry of our research team as background detail for a briefing provided to the Members of the Bendigo Stock Exchange by [[Jonathan Bishop]]. The subheadings represent the names of the articles or papers summarised. Document links were added after the initial paper was prepared, and some references may be in error. The original summaries were compiled from printed editions of the papers or texts, and some page references may differ from the online references. Most of the links will navigate to subscription services or book distributors as appropriate. Please advise any identified discrepancies.
</noinclude>
==MERGER FAILURE RATES AND REASONS FOR FAILURE==
===Managing Mergers, Acquisitions & Strategic Alliances===
[http://books.google.com/books?id=w2YR9LwY7FQC&dq=MERGER+FAILURE+RATES+AND+REASONS+FOR+FAILURE&pg=PA5&ots=CSqEPdOcJl&sig=cZKsAhRXXl1LH_lmGHgwNjIOhxI&prev=http://www.google.com/search%3Fsourceid%3Dnavclient%26ie%3DUTF-8%26rls%3DGGLG,GGLG:2005-34,GGLG:en%26q%3DMERGER%2BFAILURE%2BRATES%2BAND%2BREASONS%2BFOR%2BFAILURE&sa=X&oi=print&ct=result&cd=3&cad=legacy]
Sue Cartwright, Cary L. Cooper
Diagnosis and analysis of merger failure has traditionally focused on financial and strategic factors, with mergers considered to fail for rational economic reasons such as economies of scale not achieved to the magnitude expected, poor strategic fit or unexpected changes in market conditions. However, considering financial and strategic factors only is insufficient to achieve a successful merger or acquisition. Two important human factors to merger and acquisition success which impact on integration are:
<ul>
<li> ‘The culture compatibility of the combining organizations, and the resultant cultural dynamics.’
<li> ‘The way in which the merger/acquisition integration process is managed.’
</ul>
A lack of cultural compatibility can inhibit the creation of a ‘cohesive and coherent organizational entity’. A survey conducted by the British Institute of Management (1986) determined that ‘managerial underestimation of the difficulties of merging two cultures was a major contributory factor to merger and acquisition failure.’
The factors often held responsible for merger and joint venture failure include the selection of inappropriate venture partners, cultural incompatibility, and general “parenting” problems. (p.18)
There has been much debate about the most appropriate and accurate way to assess the gains arising from mergers, including both managerial and mathematical methods. Regardless of the method selected, many studies indicate mergers have an unfavourable impact on profitability, with research conducted by Meeks (1977) and Sinetar (1981) concluding that mergers have been associated with lowered productivity, worse strike records, higher absenteeism, and poorer accident rates.
Further research conducted by Ellis and Pekar (1978) and Marks (1988) suggests that in the long term between 50 and 80 per cent of all mergers and takeovers are considered financially unsuccessful, while a study conducted by the Department of Trade and Industry, published by the British Institute of Management (1988), and another by Hunt (1988) determined the success rates post-acquisition to be around 50 per cent. More recent studies show similar trends continuing, with Cartwright and Cooper (1996) determining, on the basis of financial results in the first year of combined trading, that only half of mergers and acquisitions studied were successful.
Estimates by Davy et al (1988) held ‘employee problems’ to be responsible for between one-third and half of all merger failures, while a discussion paper by the British Institute of Management (1986) identified sixteen factors related to unsuccessful mergers and acquisitions, including (p.28):
<ul>
<li> underestimating the difficulties of merging two cultures
<li> underestimating the problems of skill transfer
<li> demotivation of employees of acquired company
<li> departure of key people in acquired company
<li> too much energy devoted to ‘doing the deal’, not enough to post-acquisition planning and integration
<li> decision making delayed by unclear responsibilities and post-acquisition conflicts
<li> neglecting existing business due to the amount of attention going into the acquired company
<li> insufficient research about the acquired company
</ul>
‘Ability to integrate the new company’ (p.28) was ranked as the most important factor for acquisition success according to a study by Booz, Allen and Hamilton (1985) while Kitching (1967) determined ‘the key to merger success was essentially the way in which the “transitional process” was managed and the quality of the working relationship between the partnering organizations.’
===Consulting in Mergers and Acquisitions===
[http://www.ingentaconnect.com/content/mcb/023/1997/00000010/00000003/art00006]
Marks M.L.
Three studies (Davidson, 1991; Elsass and Veiga, 1994; Lubatkin, 1983) have found that ‘fewer than 20 per cent of corporate combinations achieve their desired financial or strategic objectives.’
Zweig (1995) studied deals valued at $500 million or more, and found that half of these deals destroyed shareholder value, 30 per cent had a minimal impact and only 17 per cent created shareholder value.
Many factors are attributable to this low success rate, including (p.1):
<ul>
<li> paying the wrong price
<li> buying for the wrong reasons
<li> selecting the wrong partner
<li> buying at the wrong time
<li> managing the post-merger integration process inappropriately
</ul>
Marks (1997), together with previous studies (Marks and Mirvis, 1997; Mirvis and Marks, 1992), found the common factors restricting the ability to achieve hoped-for synergies and financial gains to be (p.1–2):
<ul>
<li> ‘underestimating the multitude of integration issues and problems that arise as organizations come together;
<li> underestimating the drain on resources and the distraction from performance required to manage the transition from pre- to post-merger status; and
<li> underestimating the pervasiveness and depth of the human issues triggered in a merger or acquisition.’
</ul>
Since the mid-1980s, many aspects of mergers and acquisitions have changed, including (p.3):
<ul>
<li> ‘deals are more strategically driven
<li> technological advances are driving deals
<li> globalization is driving more deals
<li> deals are involving larger organizations
<li> entire industries are put into play (deregulation, social policies and changing customer demands)
<li> managers are smarter about doing deals and managing integration
<li> human assets are even more crucial to merger and acquisition success than before.’
</ul>
“Consultations to facilitate mergers and acquisitions emanate from sound change management principles, yet must be sensitive to the special requirements of combining complex organizations.” (p.4)
===Enhancing the Success of Mergers and Acquisitions===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=A600DFCDB0CD4D4945CE767ABBAC9918?contentType=Article&hdAction=lnkhtml&contentId=865419]
Mike Schraeder, Dennis R. Self
Research conducted by Carleton (1997) indicates that between 55 and 70 per cent of mergers and acquisitions fail to meet their anticipated purpose.
A number of researchers have determined that cultural incompatibility of the companies involved in the merger/acquisition is partly responsible for anticipated financial benefits not being achieved (Fralicx and Bolster, 1997; Cartwright and Cooper, 1993). Chatterjee et al (1992) also agree that poor cultural fit has contributed to several merger and acquisition failures where the companies involved appeared to be suitable strategic partners.
Mirvis (1985) highlighted four factors that were believed to impact on the integration of organizations:
<ul>
<li> top management relations (including reporting relationships, decision making and flexibility)
<li> compatibility of business systems
<li> existence of a culture that will support the integration of business systems
<li> goals the respective parties intend to achieve
</ul>
Several other factors impacting on integration that have been identified through other research are:
<ul>
<li> compatibility of respective business systems (Mirvis, 1985)
<li> organizational members experience difficulty adjusting to new procedures and performance standards (Marks and Mirvis, 1992)
<li> differences in managerial styles and accounting practices (Cartwright and Cooper, 1993)
</ul>
Weber (1996) identifies that anticipated benefits from mergers and acquisitions are often unrealized because of productivity losses and the ‘traumatic effect of mergers and acquisitions on a firm’s human resources.’ He also finds that ‘the magnitude of cultural differences can effectively impede a successful integration during mergers and acquisitions, resulting in poor financial performance.’
Coopers and Lybrand (1992) studied failed mergers and acquisitions, and over 80 per cent of the executives involved identified different management practices and styles as the primary contributor to integration issues.
To achieve merger and acquisition success, several researchers have determined the following factors need to be considered:
<ul>
<li> develop a flexible and comprehensive integration plan
<li> share information and encourage communication
<li> encourage participation by involving others in the process
<li> enhance commitment by establishing relationships and building trust
</ul>
===Due Diligence: The Devil in the Details===
[http://www.workforce.com/archive/feature/22/22/68/index.php]
Greengard, Samuel
“HR has a critical role in due diligence – both from the benefits and compensation side and the cultural side” – Deborah Rochelle, senior merger and acquisition consultant, Watson Wyatt Worldwide. She believes that ‘due diligence must encompass people, programs, plans, policies and processes.’
Clemente (1999) states that ‘ultimately, many mergers fail because of human resource–related issues, such as culture clash.’
Studies have found that between 50 and 75 per cent of all merging companies fail to retain book value two years after merging, and ‘many others are torpedoed by ongoing culture clash and an erosion of top talent.’ (p. 2)
Mitchell Lee Marks, management consultant, believes that many mergers fail not because of inept management or inadequate due diligence, but because the two organizations haven’t determined whether they have compatible cultures, or how to overcome the differences if the cultures aren’t compatible.
Organizations should develop a detailed checklist to work through the due diligence process, allowing the organization to evaluate which factors are most important.
===On Managing Cultural Integration and Cultural Change Process in M & A===
Bijlsma-Frankema, K. (2001)
Journal of European Industrial Training, Vol.25
Magnet (1984) and Gilkey (1991) have found that between 60 per cent and two-thirds of mergers and acquisitions fail to meet expectations.
Gilkey argues that:
‘the high percentage of failure is mainly due to the fact that mergers and acquisitions are still designed with business and financial fit as primary conditions, leaving psychological and cultural issues as secondary concerns. A close examination of these issues could have brought about a learning process, directed at successfully managing such ventures.’ (Gilkey, 1991, p.331)
Eisele (1996) found three factors that generally influence the success of mergers and acquisitions (p.6):
<ul>
<li> cultural fit
<li> cultural potential
<li> competent managers to guide the process
</ul>
===The Effective Management of Mergers===
[http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=D784A9C7145AEEB97AB42AC75F0E6A95?contentType=Article&hdAction=lnkhtml&contentId=1410708]
Han Nguyen, Brian H. Kleiner
Year to date in 2002, there were over 4,363 mergers and acquisitions, worth over $291.7 billion.
The prime reason for most mergers and acquisitions is to maintain or increase market share, and to increase shareholder value by cutting costs and introducing new, expanded and improved services.
A study by KPMG (published in PR Newswire, 1999) found that between 75 and 83 per cent of mergers and acquisitions failed, where failure meant lowered productivity, labour unrest, higher absenteeism and loss of shareholder value, or even a dissolution of the companies involved.
Merger success is directly correlated with the level and quality of planning, with insufficient time often being spent analyzing current and future market trends and integration issues. Failure is often also due to insufficient due diligence (Oon, 1998).
Simpson (2000) found the opportunity for mergers to fail is greatest during the integration phase because of improper managing and strategy, culture differences, delays in communications, and lack of clear vision.
Bijlsma-Frankema (2001) found ‘increasing evidence that cultural incompatibility is the single largest cause of lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’
KPMG developed best practice guidelines, with the following main keys necessary for successful integration (p.4):
<ul>
<li> ‘Directors must get out of the boardroom
<li> Set direction for the new business
<li> Understand the emotional, political and rational issues
<li> Maximize involvement
<li> Focus on communication
<li> Provide clarity around roles and decision lines
<li> Continue to focus on customers
<li> Be flexible’
</ul>
Communication is listed as the key factor to make integration effective and successful.
===Managing Merger Madness===
[http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&hdAction=lnkhtml&contentId=869290]
Journal: Strategic Direction (author unknown)
Successful mergers and acquisitions consist of (p.1):
<ul>
<li> Acquisition target being carefully and dispassionately selected
<li> A post-acquisition strategy relevant to the newly merged organization needs to be developed from the start
</ul>
In pre-merger planning stage, the most common mistakes are (p.1):
<ul>
<li> Failure to conduct a detailed risk assessment and management profile of the acquisition target
<li> Allowing pressure to increase share value to take the place of a convincing strategy
<li> Assuming total synergy
</ul>
The most common mistakes in integration processes are (p.1):
<ul>
<li> Slow post-merger integration
<li> Cultural conflicts
<li> No risk management strategy
</ul>
===Merging for Success===
[http://www.ingentaconnect.com/content/mcb/056/2002/00000018/00000006/art00003]
Author: Unknown
Found that in the first few months following the announcement of an acquisition, productivity falls by up to 50 per cent. Most mergers and acquisitions fail for reasons other than money, such as leadership issues involving unclear objectives or cultural clashes.
===Anatomy of a Merger===
Unknown.
Reported success rates for mergers and acquisitions range from 20 to 60 per cent (British Institute of Management, 1986; Hunt, 1988; Marks, 1988; Weber, 1996). Poor results have now generally come to be attributed to poor human resource planning.
Research identifies communication to be the most important factor during the merger and acquisition process.
Both Balmer and Dinnie (1999) and De Voge and Spreier (1999) indicate that communication is the key to a successful integration of two clashing cultures.
Ernst and Young (1994) identified cultural incompatibility as the single largest cause of ‘lack of projected performance, departure of key executives, and time-consuming conflicts in the consolidation of businesses.’ (p. 3)
For sustained competitive advantage to be achieved, it is imperative the mergers and acquisitions be implemented from a financially and legally sound standpoint, as well as a behavioural approach.
Leadership from top-level management is also important for merger success. Weber (1996) found that the higher the commitment of the acquired firm’s top management, the higher the effectiveness and the financial performance of the merged entity. Successful mergers are led by CEOs who (p.6, Part II):
<ul>
<li> Dedicate executive time and focus
<li> Put together a leadership team
<li> Focus management attention on success factors
<li> Create a sense of human purpose and direction
<li> Model desired behaviour and ‘rules of the road’
</ul>
It is recommended a merger-tracking program be implemented to determine whether the organization is working towards its goals, and what the merger outcomes were. It should cover things such as (p.7 – 8, Part II):
<ul>
<li> ‘Is the combination achieving financial and operational goals?
<li> Are schedules on target, and are changes being implemented effectively?
<li> Do employees understand and support the need for change?
<li> What is the effect on people’s well-being and esprit de corps?
<li> Are managers at all levels taking steps to minimize negative reactions and build positive feelings?
<li> Are productivity or work quality being affected?
<li> Do people understand their new roles and what is expected of them?’
</ul>
==ATTRIBUTES LEADING TO SUCCESS OR FAILURE==
===Mergers and Acquisitions: A Guide to Creating Value for Stakeholders===
[http://www.questia.com/PM.qst?a=o&d=106499472#]
Michael A. Hitt, Jeffrey S. Harrison, R. Duane Ireland
Some important factors that can contribute to success or failure in mergers and acquisitions are:
'''Due Diligence'''
A lack of due diligence has caused many merger failures. Due diligence involves a comprehensive analysis of firm characteristics such as financial condition, management capabilities, physical assets and intangible assets.
'''Financing'''
Manageable debt levels should be ensured.
'''Complementary Resources'''
Occurs when the ‘primary resources of the acquiring and target firms are somewhat different, yet simultaneously supportive of one another.’ (p.179) This tends to create greater economic value than when the merging firms have identical or unrelated resources.
'''Friendly/Hostile Acquisitions'''
Friendly acquisitions tend to create greater economic value. A hostile acquisition can reduce the transfer of information during due diligence and merger integration, and increase turnover of key executives in the firm being acquired.
'''Synergy Creation'''
Four foundations to creation of synergy are strategic fit, organizational fit, managerial actions and value creation.
'''Organizational Learning'''
Many people should participate in the acquisition process to ensure knowledge about acquisitions is being spread throughout the firm, and isn’t lost if one of the key people typically involved leaves. The learning process should be managed, with steps taken to study and learn from acquisitions, with the information gained recorded.
'''Focus on Core Business'''
Cultural and management differences are more greatly magnified the less firms have in common, therefore constraining the sharing of resources and capabilities. ‘Result is that positive benefits from financial synergy are not enough to offset the negative effects of diversification.’ (p.181)
'''Emphasis on Innovation'''
Innovation is critical to organizational competitiveness. ‘Companies that innovate enjoy the first-mover advantages of acquiring a deep knowledge of new markets and developing strong relationships with key stakeholders in those markets’ (p. 181)
'''Ethical Concerns / Opportunism'''
The risk in mergers and acquisitions is that the information received may be incorrect, misleading or deceptive. Steps should be taken to ensure that the information is accurate and hasn’t been manipulated by management with the aim of making performance appear higher than it is.
===The Complete Guide to Mergers & Acquisitions: Process Tools to Support M&A: Integration at every level===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The likelihood of a successful merger is increased by considering the following ten key recommendations (p. 196 – 197):
<ul>
<li> ‘Conduct due-diligence analyses in the financial and human-capital-related areas.
<li> Determine the required or desired degree of integration.
<li> Speed up decisions instead of focusing on precision.
<li> Get support and commitment from senior managers.
<li> Clearly define an approach to integration.
<li> Select a highly respected and capable integration leader.
<li> Select dedicated, capable people for the integration core team and task forces.
<li> Use best practices.
<li> Set measurable goals and objectives.
<li> Provide continuous communication and feedback.’
</ul>
'''Due Diligence'''
Human resource due diligence analysis as well as financial due diligence is important. It provides details about where the companies converge or diverge in areas such as leadership, communication, training and performance management. Identifying this can allow the companies to plan for any conflicts that might occur during the integration phase in respect to these matters.
'''Speedy Decisions'''
Tends to allow faster integration, and enables people to refocus more quickly on work, customers and results.
'''Clearly Defined Approach'''
Allows faster decision making and organizes the entire integration process. ‘Without a defined approach that includes clear deliverables, due dates, milestones, information flows, and so on, each function of the enterprise will be working on a different schedule and producing deliverables that vary widely in terms of quality and content.’ (p.198)
'''Capable Leadership'''
‘The integration leader should be an excellent project manager with a broad view of the enterprise and good people skills.’ (p. 198)
'''Measurable Goals and Objectives'''
Measurable goals and objectives let people involved know what a successful integration consists of, and how long it should take.
==COMMON PROBLEMS AND CHALLENGES IN ACQUISITIONS==
===Managing Acquisitions: Creating Value Through Corporate Renewal===
[http://www.amazon.com/Managing-Acquisitions-Creating-Through-Corporate/dp/0029141656]
David B. Jemison, Philippe C. Haspeslagh
Four common challenges in managing acquisitions are (p. 8):
<ul>
<li> ‘Ensuring that acquisitions support the firm’s overall corporate renewal strategy
<li> Developing a pre-acquisition decision-making process that will allow consideration of the “right” acquisitions and that will develop for any particular acquisition a meaningful justification, given limited information and the need for speed and secrecy.
<li> Managing the post-acquisition integration process to create the value hoped for when the acquisition was conceived.
<li> Fostering both acquisition-specific and broader organizational learning from the exposure to the acquisition.’
</ul>
‘The key to integration is to obtain the participation of the people involved without compromising the strategic task.’ (p.11)
Acquisition integration has several challenges (p.11):
<ul>
<li> ‘Adapting pre-acquisition views to embrace reality,
<li> An ability to create the atmosphere necessary for capability transfer,
<li> The leadership to provide a common vision,
<li> And careful management of the interactions between the organizations.’
</ul>
'''Process Perspective'''
‘Adopting a process perspective shifts the focus from an acquisition’s results to the drivers that cause these results: the transfer of capabilities that will lead to competitive advantage. In the process perspective, acquisitions are not independent, one-off deals. Instead, they are a means to the end of corporate renewal. The transaction itself does not bring the expected benefits; instead, actions and activities of the managers after the agreement determine the results.’ (p.12)
(A summary of the entire chapter is provided on p. 15)
===Winning at Mergers and Acquisitions: The Guide to Market-Focused Planning and Integration===
[http://www.wiley.com/WileyCDA/WileyTitle/productCd-047119056X.html]
Mark N. Clemente, David S. Greenspan
Key to successful mergers and acquisitions is ‘being able to take the differences inherent in the two companies and meld them to create an enhanced capability.’ (p. 43)
Problem is often that stakeholders focus on the short-term benefits from mergers and acquisitions such as cost reduction, which results in decisions being made that can sacrifice long-term goals to achieve short-term savings.
‘When companies seek to merge or acquire, and can cite more than two strategic drivers as reasons to come together, then the chances of success are higher.’ (p.44)
Twelve common challenges present in the majority of mergers and acquisitions are (p.163):
<ul>
<li> ‘Embracing the concept of change
<li> Setting priorities
<li> Sharing information and effecting corporate understanding
<li> Melding cultures
<li> Forging a new corporate identity
<li> Determining managerial roles and responsibilities
<li> Effecting teamwork and cooperation
<li> Combining corporate functions and internal processes
<li> Aligning capabilities, services, and products
<li> Measuring results
<li> Acknowledging the two levels of integration
<li> Maintaining flexibility’
</ul>
The long-term success or failure of mergers and acquisitions can be determined by the steps put in place to meet these challenges – each challenge should be ‘met with a clear focus and forward-thinking tactics.’ (p.163)
'''Setting Priorities'''
Integration planning is the number-one priority once a deal has been closed. The critical steps in the integration process itself are:
<ul>
<li> Address corporate information, marketing, and sales departments quickly, as these represent the company to stakeholders
<li> Corporate image and branding aspects are important to begin promoting the new image. This allows the company to display ‘the best face on the merger to external audiences while you grapple with many of the longer-term internal and operational issues.’ (p.165)
<li> Focus on retaining key employees
<li> Focus on customer retention – this is critical to maintain the value of the acquired company.
</ul>
'''Sharing Information and Effecting Corporate Understanding'''
The two companies need to share information, and understand the nature of the new corporate relationship. This should address issues such as ‘What is the company’s corporate philosophy? What are the strategic intentions of senior management? Why has the company come to develop, commercialize, and invest in the products and services it does? How are the sales and production people compensated and why?’ (p. 166)
'''Melding Cultures'''
‘Cultural compatibility is one of the most significant determinants of a successful M&A transaction.’ (p.167)
‘Acknowledging whether cultural compatibility can exist should be a factor in determining whether to pursue a given deal. Integration can never be attained – and growth strategies never realized – if two companies are worlds apart culturally.’ (p.167)
This alignment of cultures can be achieved through information sharing, emphasizing similarities and ‘mitigating dissimilarities’ (p.167) through effective communication.
'''Determining Managerial Roles and Responsibilities'''
‘Allowing the acquired company’s managers to maintain responsibility for activities central to its core operations will help to accelerate integration by minimizing gaps in performance or production. Ideally, the acquiring management should audit and counsel the existing management, augmenting it where it is weak but leaving the previous management team intact until key processes have been successfully incorporated into the merged firm’s operational infrastructure.’ (p. 169)
Defining the character traits required in the new organization, and then identifying people possessing these assists in the selection of the management team that will best achieve strategic objectives.
Staffing decisions must be made early in the integration process to avoid employee uncertainty, which can impact on productivity.
'''Measuring Results'''
The integration program must have measurable criteria to assess the progress of the merger. ‘Must strive to set forth measurement criteria wherever it is possible to do so, whether it is by setting time parameters by which certain integration tasks must be completed, by gauging attitude changes via employee research, or by tracking the number of people who stay with the merged company against expected levels of attrition.’ (p. 175)
'''Acknowledging the Two Levels of Integration'''
‘The key to a prompt and effective integration launch is focusing on the similarities inherent in each organization and building on them.’ (p.175)
‘The key to successful integration is identifying the similarities inherent in each organization and building on them while maintaining a disciplined yet flexible approach…’ (p.177)
‘Isolating common factors and focusing on similarities provides the essence of the growth planning approach to devising and implementing a successful integration strategy.’ (p. 177)
==MEASURING MERGER SUCCESS==
===Keeping Track of Success: Merger Measurement Systems===
[http://www.amazon.com/gp/reader/0787947865/ref=sib_dp_pt/002-0140027-7346405#reader-link]
Timothy J. Galpin, Mark Herndon
The benefits that arise from a formal tracking process are (p.145):
<ul>
<li> ‘Determining whether the transition is proceeding according to plan
<li> Identifying “hot spots” before they flare out of control
<li> Ensuring a good flow of communication
<li> Highlighting the need for midcourse corrections
<li> Demonstrating interest in the human side of change
<li> Involving more people in the combination process
<li> Sending a message about the new company’s culture.’
</ul>
‘Four areas for which separate but interrelated measurement processes must be continually managed during merger integration’: (p.145)
<ul>
<li> Integration measures: assess the integration events and determine whether ‘overall integration approach is accomplishing its mission of leading the organization through change.’ (p.145)
<li> Operational measures: track ‘any potential merger-related impact on the organization’s ability to conduct its continuing, day-to-day business.’ (p.145)
<li> Process and cultural measures: determine the ‘status of merger-driven efforts to redesign business processes or elements of the organizational culture.’ (p.145)
<li> Financial measures: track and report whether the company is achieving its expected synergies.
</ul>
(Examples of measures used for the above are included on p.145)
'''Integration Measures'''
‘Merger measurement systems need to evolve as the integration evolves into each successive phase.’ (p.146)
‘Near the end of the project, it is essential to capture feedback, learning, and process upgrades that can be used to build an ongoing institutional knowledge base regarding the integration process itself.’ (p.150)
Refer to p.150 for Automated Feedback Channels – several interesting points regarding use of IT in integration.
'''Operational Measures'''
The company should establish and communicate critical success factors. These critical success factors ‘summarize the essential strategic business outcomes that must be achieved.’ (p.152)
(Diagram on p.153 provides a summary of the process involved in defining operational measures)
'''Process and Cultural Measures'''
A ‘formal process for measuring the effectiveness of major merger-related redesign and cultural integration efforts’ (p.154) should be created by the company to track progress.
One method for this is the ‘Merger Integration Scorecard’ which provides a status update showing the progress of the most important critical success factors in key measurement categories. An example of this is provided on p.159-161.
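The scorecard idea above can be sketched as a small data structure. This is a hypothetical illustration only: the category names, status labels, and example factors below are invented for the sketch, not Galpin and Herndon's actual template.

```python
# Hypothetical sketch of a "Merger Integration Scorecard": each critical
# success factor sits in one of the four measurement categories with a
# simple status flag, and the scorecard rolls statuses up per category.
from collections import Counter
from dataclasses import dataclass


@dataclass
class SuccessFactor:
    name: str
    category: str  # e.g. "integration", "operational", "process/cultural", "financial"
    status: str    # e.g. "on track", "at risk", "off track" (assumed labels)


def scorecard_summary(factors):
    """Tally factor statuses per category for a status-update report."""
    summary = {}
    for f in factors:
        summary.setdefault(f.category, Counter())[f.status] += 1
    return summary


# Invented example factors for illustration:
factors = [
    SuccessFactor("Key-staff retention", "operational", "on track"),
    SuccessFactor("Systems consolidation", "integration", "at risk"),
    SuccessFactor("Cost-synergy capture", "financial", "on track"),
]
print(scorecard_summary(factors))
```

Such a roll-up gives the kind of per-category status update the scorecard is described as providing, without prescribing any particular reporting tool.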
'''Financial Measures'''
Four components are recommended to ensure a company identifies and achieves its essential objectives (p.162):
<ul>
<li> ‘An education process
<li> A verification process
<li> Document templates for submitting, tracking, and summarizing the achievement of synergies
<li> A process for reporting and communicating the achievement of synergies.’
</ul>
It is also important to identify the sources of synergies. Synergies typically come from: (p.163)
<ul>
<li> Income generation – ‘produce efficiencies whereby increased production is achieved via changes to processes, new or different equipment, new products, new channels for sales or distribution, enhanced quality, new management techniques, or best practices.’ (p.163)
<li> Expense reductions unrelated to reductions in staffing expenses – result from the avoidance and reduction of costs that were made possible due to the integration.
<li> Avoidance of capital outlay – ‘involve any reduction in planned use of capital, or in the scope of capital projects, that is made possible by improvements in plant use or by the sharing of resources.’ (p.163)
<li> Expense reductions related to reductions in staffing expenses – ‘involves the elimination of redundant roles, positions, or units when these reductions are attributable to the integration.’ (p.163)
</ul>
==BENEFITS FROM INTEGRATION MANAGEMENT==
===Integration Managers: Special Leaders for Special Times===
[http://www1.ximb.ac.in/users/fac/dpdash/dpdash.nsf/23e5e39594c064ee852564ae004fa010/7216b2f7b30b5247e52568b2001830f5/$FILE/ATT8WDSA/Integration_Managers.pdf]
Ronald N. Ashkenas, Suzanne C. Francis
(Article basically covers the role of integration managers, and looks at case studies involving integration managers)
‘Integration managers help the process in four principal ways: they speed it up, create a structure for it, forge social connections between the two organizations, and help engineer short-term successes that produce business results.’ (p.183-184)
‘The integration manager can clear paths between the two cultures by facilitating the social connections among people on both sides.’ (p.191) This can help to overcome the problem of culture clash.
Five personality factors that are likely to increase the success of individuals in the role of integration manager are (p.196 – 201):
<ul>
<li> Deep knowledge of the acquiring company
<li> No need for credit – ‘The integration manager cannot be concerned with getting credit – or even recognition – for an effective integration.’ (p.198)
<li> Comfort with chaos – The integration manager needs to have strong project management and organizational skills. ‘The best integration managers keep the process moving by constantly recalibrating their plans.’ (p.199)
<li> A responsible independence – Needs to be able to take initiative and make independent judgments, as there is no one providing instructions for what they need to do. It is also ‘vitally important that the integration manager have – or win – the trust of the most senior executives in his or her company.’ (p.200)
<li> Emotional and cultural intelligence – Integration manager must be able to understand the emotional and cultural issues that are involved in a merger, and recognize that it isn’t just an ‘engineering exercise’, but involves people.
</ul>
Summary, p. 202 – 203 ‘What Integration Managers Do’
'''Inject Speed'''
<ul>
<li> Ramp up planning efforts
<li> Accelerate implementation
<li> Push for decisions and actions
<li> Monitor progress against goals, and pace the integration efforts to meet deadlines
</ul>
'''Engineer Success'''
<ul>
<li> Help identify critical business synergies
<li> Launch 100-day projects to achieve short-term bottom-line results
<li> Orchestrate transfers of best practices between companies
</ul>
'''Make Social Connections'''
<ul>
<li> Act as traveling ambassador between locations and businesses
<li> Serve as a lightning rod for hot issues; allow employees to vent
<li> Interpret the customs, language, and cultures of both companies
</ul>
'''Create Structure'''
<ul>
<li> Provide flexible integration frameworks
<li> Mobilize joint teams
<li> Create key events and timelines
<li> Facilitate team and executive reviews’ (p.202 – 203)
</ul>
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
6e31233413f04229fc45c9c48f3a9109be21ba02
Managing Risk in Mergers & Acquisitions
0
297
406
386
2018-10-29T12:12:38Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Topics==
* [[Managing Risk in Mergers & Acquisitions - Causes of Success & Failure]]
* [[Managing Risk in Mergers & Acquisitions - A Success Strategy]]
* [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]]
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
975422383bcff83e8288f0207aa4f21d1f209d44
How do I get a copy of BPC RiskManager V6.2.5?
0
299
408
407
2018-10-29T12:15:44Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
The BPC RiskManager V6.2.5 (Enrima Edition) Enterprise and Single User software is available in downloadable form from the Bishop Phillips Consulting web site. The software comes with a 60-day evaluation license (which means you can use it as if you own it for 60 days) prior to purchase. Online and phone support is provided to evaluation clients as if they were paying clients.
You must register a software enquiry with BPC using this form:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php http://www.bishopphillips.com/australia/BPCServiceEnquiry.php]
Note: The enquiry form can get a bit emotional when the moon is out, so if it tells you there are errors when there aren't any (i.e. you have completed the "required" fields), just put something in the general comments box and resubmit. That seems to make it happy! The form was written for PHP 5, but the server it is on currently hosts PHP 4 (pending an upgrade), so although it was scaled back for the lower grade environment it still wants to be a PHP 5 program, and intermittently rebels.
Within 24 hours of receipt you will be contacted by email by Bishop Phillips Consulting. If you would like an evaluation copy, they will arrange it for you, and provide you with pricing and the contact details for the Bishop Phillips Consulting office closest to your location.
[[Category:RiskManager FAQ]]
<noinclude>{{BackLinks}}
</noinclude>
0a7dc86a8d2c10b03e1e92f3dc71c11803fc2dc5
When are multiple BPC RiskManager server licenses required?
0
300
410
409
2018-10-29T12:15:44Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Background=
We will be acquiring an Enterprise license. We are looking to have RM implemented across a group of companies. They will all be using the same instance with same fields and definitions as the subject matter is the same, but they will each be on different servers, with different IT teams managing them. Can we use a single server license or will we require multiple server licenses?
=Answer=
Yes and no. Firstly, the Enterprise license is not the best license for this scenario; a Group license is better suited (assuming all the companies are related entities and you expect them all to adopt the BPC RiskManager system). Enterprise License counting is on production servers and legal entities (so you can have as many test and training servers as you like). Each system can have as many databases as you like (we don't license by the database).
The principal difference between the Group and Enterprise licensing is that the total license fee is capped, on the condition that the entities are all related parties (i.e. subsidiaries, or a shared-service client group).
At the Enterprise and Group license level, we do not license by client - so you can have as many clients (users) as you like, unless you have negotiated a special restricted enterprise license (which sometimes happens with Government clients). This discussion therefore focuses on production servers. Let's consider a couple of scenarios assuming an Enterprise licensing model:
* One application server, one legal entity, one database = 1 license.
* One application server, one legal entity, many databases = 1 license.
* One application server, multiple legal entities, one database = 1 license.
* One application server, multiple legal entities, multiple databases, but purpose based rather than entity based = 1 license.
* One application server, multiple legal entities, multiple databases, but entity based rather than purpose based = 1 license primary + multiple add-on licenses
* Multiple application servers, one legal entity, one database = 1 license primary + multiple add-on licenses
In all scenarios:
* Multiple web servers (eg a web farm) hosting the BPC SurveyManager component = 1 license (restricted to internal corporate use)
* Servicing the web generally with surveys unrelated to my BPC RiskManager installation = Contact for agreed licensing arrangement.
Essentially, the Enterprise License is not a single server/single database license, but a server and company based license with additional servers/companies after the first one being heavily discounted via the addon licenses.
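As a rough illustration only (this is our own reading of the scenarios listed above, not official licensing logic; the function name and parameters are invented), the Enterprise counting rule can be sketched as:

```python
def enterprise_licenses(app_servers, legal_entities, databases, entity_based=False):
    """Illustrative reading of the Enterprise scenarios above.

    Add-on licenses appear to be needed when there are multiple production
    application servers, or when multiple databases are organised per legal
    entity rather than per purpose. (Per the third scenario, multiple legal
    entities on one server/database alone do not change the count.)
    """
    needs_addons = app_servers > 1 or (databases > 1 and entity_based)
    return "1 primary + add-on licenses" if needs_addons else "1 license"

# The six scenarios listed above, in order:
assert enterprise_licenses(1, 1, 1) == "1 license"
assert enterprise_licenses(1, 1, 5) == "1 license"
assert enterprise_licenses(1, 3, 1) == "1 license"
assert enterprise_licenses(1, 3, 5, entity_based=False) == "1 license"
assert enterprise_licenses(1, 3, 5, entity_based=True) == "1 primary + add-on licenses"
assert enterprise_licenses(3, 1, 1) == "1 primary + add-on licenses"
```

The exact number of add-on licenses in the "multiple" cases would be agreed with BPC, so the sketch only reports whether add-ons are needed at all.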
The Group License is a multi-server / multi-company license offered at a fixed fee for the group. Group licenses are offered on a per parent organisation basis once we understand your expected use scenarios and expected organisation structure. They are always cheaper than the equivalent Enterprise licensing scenario. In either case, there is a per installation charge for maintenance which includes bug fixing, help desk and access to regular upgrades, levied annually in advance. Optionally included in this is a reference database vault - under which we retain copies of each of your multiple databases, fully configured (but possibly without data), so that location specific upgrade scripts can be generated and data recovery is easier.
The Group License fee allows any company within a group to access and use the software in whatever configuration the group sees fit (and there is a large range of possible structures, as you will see in the installation guide). With any large group, one database configuration does not fit all, and while there are some common threads, there will always be configuration differences between databases (company structures, people, categories, reference links, etc.).
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
1bd2b683a960e75c3b67224807d79aaff695b024
Does your license include the cost of MS SQL Server ?
0
301
412
411
2018-10-29T12:15:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
No. You will need a valid MS SQL Server license (or use SQL Express 2005 / 2008 - although this is not recommended for larger multi-user installs). We place so little demand on the database server that it is common for the physical database server to be shared, although often dedicated instances are created for RiskManager.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
47c534bfcee75c0c99ed6ccce8c4efb580405d7d
What will need to be arranged prior to installing BPC RiskManager?
0
302
414
413
2018-10-29T12:15:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
Things we will need you to have arranged prior to the install:
# Fully patched Windows 2000, 2003 or 2008 server with IIS 6+ for the application server hosting (or if undertaking a desktop install - Windows 2000, Windows XP, Windows Vista SP2, Windows 7 PC).
# If using SSL/HTTPS as the communication protocol, an SSL Certificate for the Windows IIS 6/7 server for the domain you will be using for the RiskManager application server. We do not perform certificate validation – so an internal certificate should be fine, but note that we use real certificates issued by Verisign and Thawte on our sites, so we have only tested in an environment where the certificates are “real”. We cannot think of any reason why an internal certificate should be a problem, however. Certificates are ONLY required for HTTPS, not for HTTP or raw TCP/IP communication protocols (which is the normal way the application is used).
# Fully patched SQL Server 2000, 2005 or 2008 database engine (installed in MIXED MODE – not just Windows Authentication Mode) with Enterprise Manager (SQL 2000) or Database Studio (2005/2008) available (unless you want to fluff around in SQL command line calls).
# Administration access to the Windows servers and the SQL Server (you need to be able to create and restore a database, create an SQL user account, and assign roles to that account), or if installing a desktop version, administration access to the desktop PC.
# A test client computer with admin rights to that computer – preferably Win XP (latest patches – of course) – that has network connectivity to the Windows application server.
# Simple TCP/IP network connectivity between all these components – let's get it working in the simple scenario before we complicate it all with proxy servers. If you plan on using the HTTP or HTTPS communication protocols between the client and the application server, the Windows IIS server rights settings are a little fiddly and the error messages are less than helpful when they are wrongly set, so it would be better to know we have these right before we introduce another layer of network communication problems like proxy servers.
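For item 4, the SQL-side preparation might look roughly like the following T-SQL sketch. This is illustrative only - the database name, login and password are placeholders, and the actual names and roles come from the install guide and your DBA:

```sql
-- Illustrative only: placeholder names; run as an administrator (e.g. sa).
-- Create (or RESTORE) the application database.
CREATE DATABASE RiskManagerDB;
GO
-- Create a SQL-authenticated login (this is why MIXED MODE is required).
-- (On SQL Server 2000, use sp_addlogin / sp_grantdbaccess instead.)
CREATE LOGIN riskmanager_app WITH PASSWORD = 'ChangeMe!123';
GO
-- Map the login into the database and grant it the usual data roles.
USE RiskManagerDB;
CREATE USER riskmanager_app FOR LOGIN riskmanager_app;
EXEC sp_addrolemember 'db_datareader', 'riskmanager_app';
EXEC sp_addrolemember 'db_datawriter', 'riskmanager_app';
GO
```

The same steps can be performed through Enterprise Manager (SQL 2000) or Management Studio (2005/2008) if you prefer not to work at the command line.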
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
bc43f33f67b4c5bd8a9bb32e914aa42135c4cdfd
Does the RiskManager client application work with FireFox browsers?
0
303
416
415
2018-10-29T12:15:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Background=
Getting an error message when attempting to log in on a laptop (with Mozilla Firefox).
"Access Violation @ address 09BE4F70 in Module "Riskma~3.ocx". Read of address 000000
=Answer=
The BPC RiskManager Desktop client will coexist with all versions of FireFox browsers, however the embedded webpages held on some panels will not display unless IE is also present on the desktop. Clickable web page links (which are also available on every panel with an embedded web page display window) will correctly launch whichever browser is your default browser.
With respect to the BPC RiskManager ActiveX Plugin client, Firefox versions after 2.5 do not work well with RiskManager. The message displayed in the Background section relates to this issue. With the release of Firefox 3 the responsible committee deleted support for ActiveX plug-ins, so RiskManager will not load in any version of Firefox above 2.5. Not only did they remove the libraries used, but they deliberately restructured the interfacing architecture so that it was virtually impossible to write a support library through which an ActiveX plug-in could be supported.
The plugin support model adopted in place of the ActiveX model is, quite simply, a bug-ridden mess (at least with respect to FireFox 3), and given the tiny percentage of Firefox browsers among our corporate client base, it is impossible for us to justify a separate client code base for 1% (based on our web site hits) of the potential client base. When the Firefox team wish to make the product relevant to business users again, we will gladly support it once more. (Ok - You get it: I am annoyed with the FireFox team about this! I feel better now.)
Clients who cannot use Internet Explorer or one of the other ActiveX compatible browsers must use the Windows executable client instead to access the application server - and you will still be able to use your favourite non-IE browser to see referenced web content from the client. This is explained in the installation and help documentation.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
83e3347656c97206132eb04099499480661a466e
In what programming language is BPC RiskManager written?
0
304
418
417
2018-10-29T12:15:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=In What Language Is BPC RiskManager Programmed=
BPC RiskManager consists of more than 100,000 lines of code written in Delphi 7 (Object Pascal) from CodeGear (formerly Borland, now Embarcadero) compiled into W32 binary executables and TSQL/SQLPlus. Some smaller components are written or supported by libraries in JavaScript, PaxScript, and ReportBuilder script.
The Delphi environment was originally developed by Borland, starting with Turbo Pascal in the 1980s. It has been one of the leading development environments and languages for almost 20 years and has one of the largest and most skilled development communities in the world. Delphi 7 was released in 2002 and has proven to be perhaps the most resilient and bulletproof development environment of the last decade.
=Why Object Pascal?=
From our perspective the most apparent reason is that by default Pascal imposes rigid data typing and size checking. In Pascal you have to turn these off if you want to misbehave, while in C the reverse is the case. Buffer overflow errors, such as those that have plagued Microsoft operating systems (written in C and Basic) and been the cause of many security holes, are not possible in Pascal - because while it still operates at the hardware level of the computer, it dynamically checks pointer references and array boundaries (raising exceptions when indexes flow past them) and maintains reference counts of allocated objects so that they can be released when no other objects point at them.
This safety net means that it is slightly slower in array and memory release operations than C, but identical in procedure call, pointer, floating point and stack operation speeds. So it delivers a higher level of reliability than C while compromising only slightly on speed, and remains much faster than .Net languages or Java - which are "interpreted" (although both claim to be compiled, the reality is that they are compiled as a set of runtime library calls) and operate inside a virtual machine that provides a managed pseudo machine in which the applications work.
A further advantage is that, because no run time engine is needed, any Pascal library will work on all the target machines, with any other machine library, regardless of the compiler version. You do not have to worry about framework or pseudo machine engine versions - the idea is simply irrelevant.
=Does It Matter?=
Not really.
You won't notice the language in which we develop, any more than it is apparent what language Microsoft Word is written in. Think of RiskManager as just another MS Office application and you will be about right - it looks, feels and behaves the same way.
=I am used to applications in the .Net and Java Languages. How is this different?=
It's an awful lot simpler. No run time environment, no library version problems, no interactions with other applications sharing the run time environment. Just take our application out of the box and put it on your computer. The Windows client does not even need to be installed! You can literally copy it onto a desktop computer and just run it.
Actually, you are more used to applications written in Win32 languages, like Delphi, C, Visual Basic and C++. Think MS Office, Outlook, any Windows operating system (XP, Vista, Windows 2000, Windows 2003, etc.)! All of these are Win32 applications, written in native compiled languages - not run-time languages.
Although many people refer to .Net as a language, strictly speaking, .Net is not a language as much as a runtime environment.
The .Net languages include C#, VisualBasic for .Net, Eiffel (for .Net) and Delphi. That is, Delphi is also available in a .Net form that is not essentially different from its Win32 cousin, except that some things we can do in Delphi 7 (for Win32) we cannot yet do in any .Net language.
Your .Net and Java runtime environments, in turn, run on Win32 platforms using Win32 libraries to talk to the hardware. Delphi cuts out the unnecessary and resource-hogging middle man.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
4adc0c28463916b421e2034ffb9c9a88460d3b72
What is the best way to get support?
0
305
420
419
2018-10-29T12:15:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
For IT technical issues and software problems, support is provided 24 hours a day, 7 days a week.
Email is the preferred method of communication, as that ensures the correct person addresses your issue in the first instance and you won't have to wait to speak to anyone. We will generally call you in response to an email (if requested, or when deemed appropriate), often within minutes of receipt of the email. There are local Canada, US and Australian numbers, plus international Skype numbers.
International IT and technical support is handled from Australia, directly by the software programming team. The US and Australian support numbers route through to Australia, while the Canada support number routes through to Canada.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
c646bf3902aa0b0a3d457f50e5ad3b667de28bfd
How do I arrange installation support and what is the timeline?
0
306
422
421
2018-10-29T12:15:45Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
Immediate. If you have a Windows 2003 (or higher) server set up with IIS 6 (or higher) and MS SQL Server 2000/2005/2008, and the administration passwords necessary for install on those environments (local machine administrator password and the SQL Server SA password), you can download the software from our website, install, and be live inside two hours (we can do it in 15-30 minutes).
If you are installing on a single user, or network server with either a local database server or a remote database server, the installation and upgrade is fully automated and will take about 15 minutes. Separate client components have their own managed installers and can be run separately on the target machines (even from a central network share). Client installation to application launch takes around 3 minutes.
Nevertheless, we like to talk you through what is happening and any decision points where you could enable non-standard set-ups that might be more suited to your needs, and introduce you to the significant number of hidden tools and features that are provided against the time they are needed. One of the most important of these is the security settings, which are defaulted off in a fresh install so that the initial user can be created automatically during the first client connection.
To get this support, just send us an email the day before to confirm an install time and we will call you and talk you through the installation over the phone.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
561bf7fa1cf7b73f29165e302d54d73728c49d09
What support packages are available and at what cost?
0
307
424
423
2018-10-29T12:15:46Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
The BPC RiskManager Enterprise License fee includes 24 hours telephone technical support and unlimited general email support during the first 6 months after first installation. After the first 6 months technical support is available under the maintenance subscription agreement. The BPC RiskManager Single User License fee includes unlimited email support and 2 hours telephone technical support during the first 3 months.
The BPC RiskManager maintenance subscription covers software upgrades and technical assistance. The majority of the subscription is dedicated to developing the software upgrades in the Beta and Production release cycles. Clients with current subscriptions receive technical support of up to 36 hours of contact per year on production versions, and an additional 2 hours per beta release installed (and unlimited additional support during a feature development phase if you are part of the Beta testing stream for a specific feature). Depending on the issues encountered and the context of the direct support, above that level we may approach you for some additional fees.
Training, configuration, report writing, survey writing, customisation, database conversion (where required), risk advisory, and similar activities are separately negotiated and quoted outside of the maintenance fee. Your quote should cover rates for the additional items (Canada Office). The maintenance fee is for the purposes of installing upgrades and funding the continued development of the software.
All support packages include priority scheduling of requested enhancements. Only current subscribers may download, install or use RiskManager software upgrades. Where you have specific customisations (not configurations), you can either register the request with us for inclusion in a future release - under which arrangement its inclusion and timing of release is at our discretion, though significant priority is given to requests from current subscribers - or you can specifically contract for the modification, in which case you have certainty over timing and inclusion. In either case the universal condition is that ALL customisations are (at our sole discretion) included in the main code base that all clients receive as part of the upgrade cycle. This is to ensure that we only have one code base to manage and that nobody gets 'orphaned' because of their modifications.
"On demand" development of requested end-user reports is NOT part of the maintenance subscription, unless released as part of the Beta or Production software release cycles. Development of custom reports as a separate individual client release is a separately contracted service. In addition to those reports shipped as part of the main product, we also support the user-group open-source efforts in report development and system customisation scripting, and we occasionally release additional report templates for public use through the forum.
Charges for maintenance support subscriptions vary depending on the installation, license and components used but are available at their current settings from the product order page.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
0ea7b8cb780cd3c229be984203c4d6098dbd1bb9
Is there a User Group Forum?
0
308
426
425
2018-10-29T12:15:46Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
Yes.
Forum: http://bpc.bishopphillips.com/forum/
Most users find other forms of communication, such as emailing or phoning us, so easy that they tend to do that rather than remember the forum details; and it is not software that tends to have a lot of bugs. Where bugs occur we fix them pretty quickly - often within 24 hours - and then re-release. Initial installation is always handled by us with remote support as you work through the install - so there are no install questions to resolve on a forum - and upgrades are usually just a matter of running the auto-installer or (if you prefer manual methods) copying a couple of files and running a script - or, with many clients, simply backing up their database and giving it to us to convert.
Also:
# Wiki: Some months ago we launched a public wiki (riskwiki.bishopphillips.com) to which we are progressively transferring our large internal library of management consulting and governance "technologies".
# TeamServices: Issues, enhancements and bugs are tracked through the team.bishopphillips.com site. Registered clients have access to this site, but to date most users simply send us an email and we record the issue on the team site.
# Blog: http://bpc.bishopphillips.com/riskthink/
# We are open to suggestions. BPC also runs a world-wide user-group coordinated by our Canadian office.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
055eec03f666403ae89c13bad6c1bcdcf63bcab0
How does one decide the optimum BPC RiskManager configuration?
0
309
428
427
2018-10-29T12:15:46Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
BPC RiskManager is shipped with a pre-configured database, set up with the most common options so you can use it right "out of the box". However, this application can really "sing" and you will probably want to do a lot more than just the standard configuration.
Generally, (although not mandatory) we will conduct a short consultancy to ascertain the most suitable initial configuration, and build a pre-configured database for you to use.
BPC RiskManager is designed to cover a very wide range of risk models, so the configuration settings are not always obvious from the start. Almost everything can be changed from the client, and a few settings are set on the application server management interface - so the initial decisions can be changed later. The install set includes a partially configured database with default settings, so it can be used "out of the box" - but we would generally advise you to have a small amount of configuration and training support in addition to the license.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
514e487f1542ad1ee5f36f92460278bd51f6b74b
Is BPC RiskManager a Client-Server application?
0
310
430
429
2018-10-29T12:15:46Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
No. BPC RiskManager is an n-tier application server system. Even in the single user configuration it is an application server solution (just with all layers on the one computer).
Refer to here for more information. [[BPC RiskManager V6.2 Network Architecture]]
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
e50081c44a8e651a24e643fe9950bf7c9e7af52f
Security: What is the most secure architecture for BPC RiskManager?
0
311
432
431
2018-10-29T12:15:46Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Access Rights=
With respect to access security we support trusted login, AD, LDAP, NT Groups, and internally managed methods for user access rights at the application server layer.
=Database Access=
Only the application server accesses the database. There is no direct user to database connection ever established (even in report generation), consequently only one access login account is required between the database layer and the application server and there is no need to establish (or desirability in establishing) access rights for a user at the database level.
=Browser Plugin & Network Communications=
==RiskManager Browser and Non-Browser Clients==
The browser-based and non-browser versions of the RiskManager client face the same issues and use the same models for network communications. The browser merely hosts a plug-in component (think Flash player, or Adobe PDF reader) and is essentially used for distribution of that component. Once the plug-in starts, it establishes a direct connection to the application server on a different port from that used for normal web communications (which may or may not also be a web server – i.e. the web server that delivers the base page can use any security model you prefer – and the application server(s) can be on any physical server you desire – not necessarily the same machine as the web server).
The data stream is not a linear ASCII data stream like a web page system, but a stream of binary delta (change) packets which are essentially unusable out of the context of their stream and of the non-delta packets, which are not re-transmitted in any case. On a private network this would generally be sufficient in all but the most extreme scenarios.
The stream itself can also be separately encrypted. Our preferred model, where additional security is required, is to encrypt the entire channel through a Virtual Private Network (VPN) tunnel (which can be defined to operate on a single port if desired), because these are generally more secure and faster than data-level encryption, as they can be imposed at the hardware level. The RiskManager access model is STATEFUL, so security models that allow for preservation of state across access are appropriate (hence VPN tunnels are a really good idea).
With respect to VPN solutions, either a fully fledged VPN (ideally hardware implemented for speed) should be used and the entire traffic between client and server tunnelled through it, or HTTPS (SSL) can be used directly from the client to a dedicated (supplied) listener on the server. In this latter case you will have to install an SSL certificate on the IIS server running on the application server and use the HTTPSrvr dll instead of (or in addition to) the SocketServer.
Built into the RiskManager client / application server architecture are three models for communications:
* Proprietary port using raw TCP/IP (This is the default method)
* HTTP
* HTTPS (SSL)
==SurveyManager==
Where the survey manager module is used as part of the risk management process, this system uses conventional pure html web pages and will happily utilize secure socket layer or VPN tunnels as desired and appropriate to the location of survey page recipients.
The module is hosted on an IIS web server (any version) and any security model appropriate to web site technology is appropriate for use in this context. The server side of the SurveyManager system is 100% STATELESS: each page transaction is independent of any preceding or following transaction, so the security model adopted does not even have to allow for preservation of session context across succeeding page submissions. That said, common sense dictates that you would at least want the browser to be able to negotiate a login session with the web server across succeeding pages, if only to avoid the inconvenience of the user logging in with each page submitted.
The survey manager is used to deliver and collect compliance information and a variety of other data (such as risk or cause property information). Certificates, secure socket layer, Windows authentication, LDAP, and similar access security models, as well as no security with anonymous http access, are all appropriate and acceptable. In terms of access (rather than data confidentiality), in addition to any access model adopted to log in to the web server, the survey engine uses a random key associated with the user ID to confirm the right of access to the specific web page being served. The user does not need to know this key and it is delivered with the page invitation, so merely knowing the user ID does not grant access to a survey. Login based security for respondents (to the extent this is desired) is expected to be handled by the web server / operating system, however there are also dedicated question types that will deliver a login page as part of a survey if that is preferred. In addition there are a variety of special field encryption mechanisms that can be turned on at the user, page, survey, survey instance, organisation, or database level.
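The invitation-key idea can be sketched in a few lines. This is a hypothetical illustration (the function and variable names are invented, and SurveyManager's actual key scheme is not documented here): a random key is issued per user and delivered with the page invitation, so presenting a user ID without the matching key grants nothing.

```python
# Hypothetical sketch (NOT the SurveyManager implementation): a random
# key is tied to each respondent's user ID and sent with the page
# invitation, so knowing a user ID alone does not open the survey page.
import secrets

invitations = {}  # user_id -> random access key

def invite(user_id):
    """Issue an invitation key for a user (delivered inside the page link)."""
    key = secrets.token_urlsafe(16)
    invitations[user_id] = key
    return key

def may_access(user_id, key):
    """Grant access only when the presented key matches the stored one."""
    stored = invitations.get(user_id)
    return stored is not None and secrets.compare_digest(stored, key)

key = invite("alice")
print(may_access("alice", key))        # True  - ID plus invitation key
print(may_access("alice", "guessed"))  # False - ID alone is not enough
```

Using a constant-time comparison (`compare_digest`) rather than `==` is the usual precaution against timing attacks when checking secrets like this.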
Survey manager web pages are never stored as pages, but are dynamically generated on the fly based on the user, the organisation, the survey, a variety of context and user specific filters and keys, the responses to previous questions or other surveys, internally stored rules and a variety of other factors. All of this is stored in the SM/RM database and only graphical and page layout elements are actually stored on the web server itself (and even these can be stored in the database), so a SurveyManager website can consist of just the SurveyManager dll and a single javascript library if necessary. The database accessed by the SurveyManager library is determined by the library name (used as a key in the server registry), so again the user never establishes a direct page level connection to the underlying database.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
63aa5a1275f25d3d19275dbcfca6848d6ef4912b
BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7
0
312
434
433
2018-10-29T12:15:47Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Browser Setup For ActiveX Plugins using IE 7=
<ol>
<li> From a client computer (or from the application server computer if no client computer is easily available) open Internet Explorer.
<li> Choose “Tools” from the menu bar and “Internet Options” from the menu that appears.
<li> Select the “Security” tab.
<br>
<br>
[[Image:RMC_IESetup2.png]]
<br>
<br>
<li> Select the zone in which your RiskManager application server resides relative to your client computer on the “Select a zone to view or change settings” tool bar. The diagram shows "Intranet Zone", which is the normal situation, but depending on your intended server destination you might need to choose a different zone, such as "Internet Zone".
<li> Select “Custom Level”
<li> On the “Security Settings” window scroll through the settings list until you find the “Download signed ActiveX Controls” setting. Enable the “Prompt” option (which is Microsoft’s recommended setting). Our ActiveX controls are signed with current Verisign certificates. Administrators can achieve a higher level of security by also flagging controls from Bishop Phillips Consulting, or from the RiskManager application server web site, as trusted – but the recommended setting should be enough.
<br>
<br>
[[Image:RMC_IESetup1.png]]
<br>
<br>
<li> We also set the automatic prompting for ActiveX controls to “Enable”, but this may not be required in all scenarios.
<li> Scroll a little further down the list and enable the running of ActiveX plugins as follows:
<br>
<br>
[[Image:RMC_IESetup3.png]]
<br>
<br>
<li> Now select OK and close the security settings window, and select OK again and close the Internet Options window. You should now be back at your browser window.
</ol>
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
63750efadfd453fe53eee189342a5327b47f20cc
Steps For Migrating RiskManager V6.x from Test To Production
0
271
436
286
2018-10-29T12:15:47Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Introduction==
There is a very detailed installation process described at [[RM625ENT Installation Instructions]].
However, this assumes an essentially manual installation process, starting from a raw iron server, and includes installation of the OS components required. If you use the automatic installer (recommended) the process is much simpler. Production generally differs from Test or Dev environments, however, as the components may be more widely distributed and you are generally starting with an at least partially configured server (unless you are dedicating a production application server instance to RiskManager).
Different sites do different things for production: some reinstall completely, others duplicate test into production; some do everything manually for production while using the automated system for Dev, etc.
We recommend a reinstallation - partly because it is the least error prone, and possibly faster.
==If You Have An Existing BPC RiskManager Production Installation==
If you have an existing RM installation in production, you can actually just copy the changed files onto the server (replacing the existing files of the same name), start the RiskManagerData server once, then close it down, and you are done – so the auto-installer is not actually necessary in this case. Alternatively, you can run the uninstaller in production to remove the previous installation, and then use the new installer to reinstall. You will NOT lose any of your configuration settings, so it is completely safe to do this. That will essentially make your existing system a raw machine EXCEPT that the connection settings will be in place already.
If this is your situation, the steps below are still correct BUT you should NOT let the installer create the database(s) for you - as you already have the connections present. Just say no to this question when it comes up during installation.
==Performing the Migration To Production==
Read the preceding section if your production server has a pre-existing RiskManager V6 installation. If you are migrating from BPC RiskManager Express or RiskMan, you DO NOT NEED TO UNINSTALL: BPC RiskManager V6.x will ignore the Express settings and installation.
Assuming we are starting with a W2003+ server that does not have a pre-existing RM installation, and that your SQL Server is on a separate computer:
===MAKE DECISIONS BEFORE INSTALLING:===
<ol>
<li> If using BPC support during installation, email us to arrange a time for our call to assist you with the install.
<li> Decide whether you are going to enable SurveyManager as part of the installation, or later. (Ask the business)
<li> Decide how many databases will be set up in production (Can be increased later if desired, but easiest if known prior to installation as the installer does all the work for you).
<li> If you want to make an existing database available in production that has been set up in dev/test, and you will NOT be using the same physical database as that set up in dev/test, decide whether you will be using the RM installer to restore a backup of the established database into production, or whether you will restore the backup separately (after installation completes). You should consider:
* If you have already restored the database into production you probably do not want the installer to attempt to create it
* If the target database is the "DEFAULT" connection (so named) of the application server and the database does not already exist in production and the database server is essentially a single server solution with data and log files on the same server then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database is a uniquely named database connection of the application server and the database does not already exist in production and the database server is essentially a single server solution with data and log files on the same server then either decision is appropriate, and it is probably simplest to let the installer create a database of the target name and then use the installer's restore option to restore the backup over the top of the newly created database.
* If the target database server is a complex configuration with log files and data files separated across multiple NAS/Servers etc, the installer will probably not be able to determine the configuration correctly, as the information is not always available to it in the remote registry (although it will attempt to do it correctly). So restoring from backup on a remote machine may not succeed. You are probably best to do this manually prior to installing the server, or, if the database does not yet exist on the target server, the simplest approach is to let the installer create an empty database of the same name and then restore your backup over the top of the newly created database after installation completes. If you choose the former approach, you will need to do some extra steps (instructions provided during installation below) so the client test will validate your install. If you do not create any databases during installation (or have a pre-existing database to which to connect) you will not be able to validate connection during installation. We strongly recommend that you at least let the installer create its default database and test connection to that. You can always discard it later.
<li> Decide whether you will be using network compression comms or the default raw comms (see the instructions below for the implications of this decision – raw is simplest) and if both, which will be the default. (Can be enabled later if desired)
<li> Decide whether you will be enabling HTTP/HTTPS comms access as well. (Can be enabled later if desired)
<li> Decide whether you will be using the desktop client and/or the browser plugin. We recommend the desktop client: both have the same functionality, but the plugin behaviour varies a little across different Win OS’s and IE versions due to MS security changes, so if you have mixed desktop OS’s not every desktop will behave exactly the same. If you want to know the implications or need this explained further, ask us or look on the riskwiki.
<li> Verify the installation site (eg the remote desktop on which the installer will be working) has phone access (preferably hands free), and that you know the telephone number for the phone, and, ideally, outbound internet (IE/Firefox/etc) access so you can look at the riskwiki if needed.
<li> You should do steps 1 – 9 below prior to the BPC support call.
</ol>
===PREPARE THE SITE BEFORE INSTALLING:===
<ol>
<li> Verify server has the following infrastructure on it:
* Functioning network connection to the rest of the network with port 211 (and ideally port 212 as well) and SQL Server TCP ports available – eg 1433.
* Functioning installation of IIS 6+
<li> Verify the server either has on it or available to it:
* Functioning SQL Server (any version) configured in Mixed mode authentication or SQL Authentication mode
* Functioning SMTP server that will accept relays from this machine (this can always be configured later)
<li> Verify that the person installing knows:
* Server local system administrator user ID / PWD
* SQL Server SA user ID / PWD (if SA is not available you will need to contact BPC again)
* The name of the SQL Server and the instance (if not using the default instance)
* The Administrator account user ID (usually Administrator) and PWD for the RiskManagement system. This is database specific, and more important when restoring than installing. Not knowing does not stop you installing, but may prevent you from connecting via a client when the test is run at the end of the installation. Otherwise, any RM Administrator account is fine to use. It is auto-created on first connection, so it can often be the user name of the person who does the installation. Ideally you settle on a common user name, and always use that across all databases and remember the password. Access by the root administrator account can be blocked by the RM system administrator after installation of a fresh database, so for restored databases, it may be that this account’s access is blocked anyway.
* The http addressable name of the application server as it would be typed into browser address bar by a remote LAN client (eg: a human operating from her office)
* The fully qualified domain name of the application server as it would be entered in the windows network browser of a remote user if they were able to browse to a folder on the application server (eg. the human again)
(NOTE: Part of the installation process is to create special purpose limited rights SQL accounts; the installer either creates these for you, or expects you to know the passwords. We assume they do not exist yet on the target SQL server. You will need to provide a password during the installation for the “riskmanuser” SQL server account. The installer will create this account if it is not present already, so you need to have decided what the password will be. We recommend using the same password as that used for dev. This is a limited rights ID. The other accounts will be set to use the same password. They can be changed manually later if desired.)
<li> If transferring the dev database into production:
* Prepare a backup of the dev database.
* Ensure the version of SQL Server in production is the same as, or higher than, that in dev from where the backup comes (eg. you can NOT restore an SQL 2008 backup into an SQL 2005 server, but you can do the reverse)
<li> Confirm with the RM administrator how many databases they want in production. We recommend a minimum of two databases: the default auto-named database, and another spare / empty database for future use. The auto-named database will be called RiskManDB625 and will have the connection name “DEFAULT”; the other database can have whatever connection name you choose. The connection name (and in fact the database name) can be changed later. The connection name is the name the user sees as the database name. The connection DEFAULT does not need to be entered at all by the user, so this is ideally the main database in use.
<li> Copy the RM Installer to a directory of the application server that will be accessible to the person performing the installation.
<li> Copy the backup file to a directory on the SQL server that the SQL server will be able to access (read from) during a restore. We recommend using the default backup directory for the targeted instance of the SQL server, as that is where it will read from naturally (and if you use the installer to do the restore, the SQL server must be able to read the file, so it needs to be readable by the SQL server under the SA account).
<li> Verify that the place from which you will be connecting to the application server (ie the remote client) has a telephone, preferably able to work in hands-free mode (so we can talk you through the process by phone).
<li> Locate your BPC RiskManager registration code so you can enter it when asked. You will not need this until the client connects at the end of the installation process. If this is a new server and new database you will have up to 60 days to enter it.
<li> If you opted during the decision stage above to backup an existing RM database from Test and restore it into Production, you should do that now. (Or schedule it now to be done immediately before the installation commences). Make sure you know the database name on the server.
<li> Send BPC an email or phone BPC to arrange a time for support to contact you – preferably as long BEFORE you commence installing as possible. We will confirm the booking and contact you at that time. If you just wish us to be available should you need it during the installation, we will make sure we are able to take your call at that time, and email you a direct number to use should you need it.
</ol>
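The network prerequisites above (ports 211 and 212 for RM comms, and the SQL Server TCP port, eg 1433) can be sanity-checked before the installer is run. A minimal sketch in Python follows; the helper name is invented and this is not a BPC tool, just a generic TCP reachability probe. The demo spins up a throwaway local listener so the example is self-contained; in practice you would probe the real hosts, e.g. <code>port_open("appserver", 211)</code>.

```python
# Hypothetical pre-installation check (not a BPC utility): verify that
# a TCP port is reachable before starting the RiskManager installer.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo against a throwaway local listener.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
reachable = port_open("127.0.0.1", port)
print(reachable)                    # True
listener.close()
```

A failed probe before installation usually points at a firewall rule or a SQL Server instance not listening on TCP, both cheaper to fix before the installer's connection test than after.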
===INSTALLING:===
<ol>
<li> If using a remote client to connect to the application server and run the installation process (eg mstsc), verify that the remote client is set to operate at 96 DPI, not 120 DPI (there is a bug in the installer display routine that hides some buttons at the 120 DPI resolution). If connecting via mstsc, enter mstsc /console as the connection command in start/run from the remote computer so that you are operating in console mode. This is important so that you can see the system tray icons.
<li> (If using BPC support, await the call first). Run the installer in “Complete Mode”, read the onscreen instructions and answer all the questions.
* Always create default database, during initial installation
* If restoring a backed up dev database, the installer can do this AFTER the installer creates the databases, or you can do this manually after the entire process. For some complex SQL setups the manual approach may be required, as while the installer attempts to locate the correct places for database restoration from the SQL Server registry, this is not 100% reliable due to the various ways this information is stored in the registry across different versions and instances of SQL Server. Let the installer create the blank database for you, so that all the connections are made, and then you can simply restore over the default database with your backed up database after the installation. If the SQL server is on the application server itself, there is a much higher probability of complete success in installer based restoration.
<li> The installer will auto-register the components and start the BPC RiskManager DataServer console. If you are NOT connecting to an existing database (ie you let the installer create new databases), you can go on to the next step - just select "End Process" on the console window... otherwise check the dot points below:
* If you want to connect to an existing database that was NOT created or restored during installation (ie. a database that exists but that is not yet known to the application server on THIS computer) AND you already have the database(s) set up on the production database server, you will need to configure the connections when the application server console window appears (ie. NOW): [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<li> Next, the installer will start the client locally for a test connection at the end to verify access to the default (or other) database. If you can connect and see the main screen after login you have successfully installed.
NOTE: The installer will set the server up in single-user edition and auto-administration access mode. This does not prevent remote access but will (usually) need to be changed to your correct access settings for production enterprise deployment. See the section "After Installation" below.
<li> Switch the server into web edition - click [[BPC RiskManager - General Configuration|on this link for instructions]].
<li> Set up client access. Either (or all):
* Copy the desktop client installer (there are two to choose from depending on whether you prefer single exe or MSI installers) from the /program files/bishopphillips/RiskManagerVxxx to a network share that will be accessible to users
* Copy the already installed client from the /program files/bishopphillips/RiskManagerVxxx/win32client directory to a separate computer/folder and make the folder sharable, if you want people to simply run the client across the network from a remote folder. The client does not actually need to be installed on a desktop to work, but installing it provides shortcuts / menus and enables the use of the network compression/encryption library in V6.2.5.x.
* Install the client into a citrix (or other remote desktop) image.
* Distribute the browser plugin ActiveX client to the Risk Manager web site.
<li> Go to a typical remote LAN computer and attempt to install/use the client set up in the previous step to access the server using the same account used previously, and verify remote connectivity to the application server.
<li> If intending to use streaming network compression/encryption, follow the instructions in the riskwiki for enabling this. Remember you will need to advise all users that the access settings are other than the defaults in the client (a box has to be ticked and possibly a port changed in the login window). If using streaming network comms, we recommend 2 ports be enabled – one for raw comms and one for compressed comms (hence the suggestion at the start that you clear 211 and 212 for RM comms). In reality RM does not care what port is used. By default it is set to expect communications on port 211, but you can set it to use any combination of ports you like. We advise sticking with the recommended ports (obviously). If using streaming compression, you should probably, for simplicity, enable that on port 211 – so clients only need to tick a box to enable it – and set the raw channel to be 212, as the raw channel is only for troubleshooting and backup connection.
Note: enabling compression/encryption will EXCLUDE the option of copying clients as a means of installation, as the compression library is currently a separate lib in V625.x - that will change in a future release.
</ol>
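The port layout recommended in the final step can be summarised as two connection profiles. The sketch below is hypothetical (the profile names and settings structure are invented for illustration, not a real BPC config file): compressed comms on the default port 211 so clients only need to tick one box, with raw comms kept on 212 for troubleshooting.

```python
# Hypothetical summary of the suggested RM port layout (assumed names,
# not a BPC configuration format).
PROFILES = {
    "default":         {"port": 211, "compressed": True},
    "troubleshooting": {"port": 212, "compressed": False},
}

def login_settings(profile="default"):
    """The settings a client would enter in the login window."""
    p = PROFILES[profile]
    return {"port": p["port"], "tick_compression_box": p["compressed"]}

print(login_settings())
print(login_settings("troubleshooting"))
```

Keeping the troubleshooting channel on a separate, rarely advertised port means the compressed default can be changed later without stranding the fallback path.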
==After Installation==
Most of these actions require you to use the RiskManager application server configuration console. So firstly, on the application server computer locate the "BPC RiskManager DataServer" in the start menu and start it. When started, the application server appears as an icon in the Windows system tray, typically located in the lower right hand corner of your screen. Double click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program. The configuration console will open. Then:
<ol>
<li> Now proceed to the instructions for completing the security/access set up:
<br>
<br>
* [[Security Configuration - Update Installation and Reset]]
<br>
<br>
<li> If you have additional databases to connect to RiskManager that you did not connect during installation, you should do that now: [[BPC RiskManager - Database Configuration|How to Connect the BPC RiskManager Application Server to an existing database]]
<br>
<br>
<li> Depending on which other components you are using (network streaming compression/encryption, email messaging, surveymanager, browser plugin client, etc.) there may be a few manual steps to complete the installation using the RM Configuration wizard after the installation finishes and tests have been completed. You should generally, in any case, access the IIS server after installation and enable “Unknown ISAPI extensions” for SurveyManager operation, even if the SurveyManager is not being used yet, as it will save you time later when someone decides to create a survey. The explanation of how to do this is in the riskwiki instructions below. Now do each of these steps in order (note all are optional - the system will work without any of these configurations, but some things, like email, will not be available without them):
<br>
<br>
# [[BPC RiskManager - Send Mail Options Configuration]]
# [[BPC RiskManager - Mail Server Connection Properties]]
# [[BPC RiskManager - Logging Configuration (OPTIONAL)]]
# [[BPC RiskManager - Create the Root Administrator]]
# [[BPC RiskManager - Distribution of Client Components]] (Browser plugin ActiveX)
# [[BPC RiskManager - Configure Risk Mail Manager]]
<br>
<br>
<li> If you are using the survey engine, the installer will have set that up on the application server, but there are a couple of things you will need to do. In particular you will have to manually tell IIS to allow unknown "ISAPI extensions", and if you have connected to a pre-existing database (rather than one created during the installation process) you will need to configure it. Also, if your SurveyManager web server will be different from your application server computer (eg a web farm), you will need to do the config step for each database in the RiskManager environment. (There is a special tab to help with the multi-database situation efficiently.)
<br>
<br>
* [[BPC RiskManager - Install The SurveyManager]]
</ol>
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
ba321d524a897f9f6bc8c831a9035e1da24cabf6
BPC RiskManager V6 on 64 bit Windows
0
272
438
288
2018-10-29T12:15:47Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction=
BPC RiskManager is a 32 bit application, but it will work just fine on 64 bit Windows. In most scenarios (particularly W2008 and above and Windows 7), the supplied BPC RiskManager auto installer will correctly install the RiskManager system on a 64 bit computer with no manual intervention. The optional SurveyManager library will require some manual steps in IIS and you should consider the notes lower down this page concerning that. If you are installing on W2003 64 bit you may have to do some manual steps.
If you wish to pursue this solution on Windows 2003 for 64 bit or Windows 2008 for 64 bit you will need to do the following things:
*Install on the application server machine the 32 bit ADO drivers for the target database (eg the MDAC 2.8 driver set). The RiskManager Installer will automatically check for these and install them for you, so you can just run the installer for this step if you wish. For standard MS databases these should already be present, but you may need to download the appropriate 32 bit MDAC driver set from Microsoft. (A 64 bit DB server will still require a 32 bit driver for BPC RM to connect to it, but these should already be present.)
*Install BPC RiskManager as you would on a 32 bit operating system, accepting the defaults. The installer will automatically put the 32 bit components in the x86 directory as required.
*Run the 32bit SocketServer, BPC RiskManager, BPC RiskManager DataServer and BPC RiskMailManager in 32 bit compatible mode i.e. using WOW (Windows-32 bit on Windows-64 bit) on your server. The auto installer will automatically do this for you, so you should not need to do anything unless you are doing a manual install (ie. copying and pasting the components).
*Move the 32 bit Midas.dll into the 32 bit system directory and register it manually. Again the installer will do this automatically and you should not have to do anything unless you are doing a manual install.
*Enable IIS to run 32 bit ISAPI dll's (if using the web components like surveymanager). This, you will have to do even if using the installer.
*Move the 32 bit ISAPI libraries into the 32 bit ISAPI directory. This you may have to do even if using the installer.
If you are installing on Windows 2008 or above, Windows 7 or above the 32 bit and 64 bit MDAC drivers should already be present, or if you are using the installer they should be installed automatically by the installer.
So, the simple solution to setting up RiskManager on 64Bit windows? - Just run the RiskManager Installer and let it do all the work.
=Setting Up the Database drivers on WOW64=
If you are using the installer to install RiskManager, the installer will check for the MDAC (ADO) drivers and install the correct ones if missing.
There are multiple scenarios that you could be facing - all have essentially the same solution:
# Locally installed 64 bit database server: you will need the appropriate 32 bit drivers. These have probably been installed with your database installation, but you may have to download the appropriate 32 bit MDAC from Microsoft and install it. MDAC 2.8+ will be ok.
# Externally installed database server on a 64 bit OS: you will need the appropriate 32 bit drivers. You may have to download the appropriate 32 bit MDAC from Microsoft and install it. MDAC 2.8+ will be ok.
# Externally installed database server on a 32 bit OS: you will need the appropriate 32 bit drivers. You may have to download the appropriate 32 bit MDAC from Microsoft and install it. MDAC 2.8+ will be ok.
In other words, the key "gotcha" in setting up the 64 bit OS version is making sure you have the 32 bit drivers loaded and registered appropriately. Most of the time you will already have the ADO drivers available, or the RiskManager installer will have installed them for you, and you need do nothing in this step. If, however, you install and cannot connect from the app server to the database, or if the installer fails to make databases when instructed, you probably have something wrong with your ADO drivers. In the early releases of 64 bit OS's the existence of the 32 bit MDAC drivers was a particular issue. From Windows 2008 this does not seem to have been a problem any longer.
The second most common event we have noted: if you are using SQL Express then, depending on the options you chose when you installed SQL Server, your SQL instance may be the default instance (ie. no instance name) OR SQLEXPRESS. If you can't connect, check this first, then look to see if the 32 bit drivers are present.
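The default-versus-named-instance distinction shows up in the server address the connection uses. The sketch below is a hypothetical helper (the function name is invented) illustrating the standard SQL Server convention: a named instance such as SQLEXPRESS is addressed as <code>SERVER\INSTANCE</code>, while the default instance is just the server name.

```python
# Hypothetical helper (not BPC code) showing the standard SQL Server
# addressing convention behind the SQLEXPRESS "gotcha" described above.
def data_source(server, instance=None):
    """Build the server address for a SQL Server connection.

    Default instance -> "SERVER"; named instance -> "SERVER\\INSTANCE".
    """
    return server if instance is None else f"{server}\\{instance}"

print(data_source("DBSERVER"))                # DBSERVER  (default instance)
print(data_source("DBSERVER", "SQLEXPRESS"))  # DBSERVER\SQLEXPRESS
```

If a connection that works with one form fails with the other, the wrong instance assumption is the likely culprit rather than the drivers.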
=Enable the application components to use WOW64=
Windows-32 on Windows-64 (WoW64) is already part of your Windows 64 bit OS. All you have to do to use it is to enable the 32 bit applications to run in that mode. If you are running the RiskManager installer, it will do all these steps automatically for you.
*Install on the application server machine the 32 bit ADO drivers for the target database (eg the MDAC 2.8 driver set). For standard MS databases these should already be present, but you may need to download the appropriate 32 bit MDAC driver set from Microsoft. (A 64 bit DB server will still require a 32 bit driver for BPC RM to connect to it.)
*Install the RiskManager application normally ([[RM625ENT Installation Instructions|see the instructions for installing BPC RiskManager]])
*Run the application server components and socketserver component in W2003/W2008 32 bit compatible mode:
**Right click on the icons after installation and select properties.
**From the properties screen set the executable compatibility mode to be “Windows 2003 sp1”.
**Open a command prompt, navigate to the "Program Files\common files\borland\socketserver" directory and type "socketserver.exe -install" to install the socket server as a service after enabling it to run in 32 bit compatible mode.
=Register the 32 bit Midas.dll on the application server=
If you are running the RiskManager installer you will not have to do anything here.
If you are installing manually (i.e. copying the files yourself), you must register the Midas.dll manually by performing the following steps to enable the 32 bit MIDAS.DLL to run on 64 bit Windows:
1. Copy the midas.dll from the system32 directory (if present) or the system files directory of the BPC RiskManager install directory to:
%systemdrive%\windows\SysWOW64\
2. Open a command prompt and navigate to the %systemdrive%\windows\SysWOW64 directory.
3. Type the following command:
Regsvr32 midas.dll
4. Press ENTER.
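Steps 1 to 4 above collapse to the following sketch. The source path is an assumption - substitute the directory where your RiskManager install set placed midas.dll.

```shell
REM Copy the 32 bit midas.dll into the WOW64 system directory and register it
REM there (run from an elevated command prompt; source path is a placeholder).
copy "C:\Program Files (x86)\BPC RiskManager\midas.dll" "%systemdrive%\windows\SysWOW64\"
cd /d "%systemdrive%\windows\SysWOW64"
regsvr32 midas.dll
```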
=Enable the IIS server to run 32 bit ISAPI dlls=
Depending on your version of IIS you will need to do different things. The primary issue is to make sure that IIS sees the components as 32bit apps.
Enable the IIS server to run 32 bit ISAPI dlls by performing the following steps:
*To enable IIS 6.0+ to run 32-bit applications on 64-bit Windows
1. Open a command prompt and navigate to the %systemdrive%\Inetpub\AdminScripts directory.
2. Type the following command:
cscript.exe adsutil.vbs set W3SVC/AppPools/Enable32BitAppOnWin64 "true"
3. Press ENTER.
*Copy the SurveyManager dll's generated during configuration to the special 32 bit ISAPI directory on the IIS server:
%windir%\system32\inetsrv.
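On IIS 7.0 and later the AdminScripts directory (and adsutil.vbs) may not be present. The equivalent setting can be applied per application pool with appcmd - a sketch only, where "DefaultAppPool" is a placeholder for the pool hosting the RiskManager ISAPI dlls:

```shell
REM IIS 7.0+ equivalent of the adsutil.vbs step: enable 32 bit worker
REM processes on the application pool (pool name is a placeholder).
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true
```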
[[Category:RiskManager FAQ]]
[[Category:BPC RiskManager V6 Installation]]
[[Category:BPC RiskManager V6 System Administration]]
<noinclude>
{{BackLinks}}
</noinclude>
23221c9ff91592b379804045b1dfd398f2399395
Would it be possible to get a copy of the BPC RiskManager V6 installation guide?
0
313
440
439
2018-10-29T12:15:47Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
Yes. Obviously, you get a copy with the install set for the BPC RiskManager, but you can also get a copy before installing.
The best approach to installation is to let the auto installer do it for you, and then a manual is not really required.
There is an installation instructions manual in pdf and another structured version on this riskwiki. The manual covers installation of all components, and includes discussion of the architectural considerations. The documents are extremely detailed and assume very little knowledge of the Windows environment, so they even cover installing some 'not always installed' Windows components such as the MS SQL Server (2000, 2005, 2008 - with notes for Express), MS IIS server, and the MS SMTP server - which are Microsoft components, rather than BPC components. So essentially you can install from a raw MS operating system installation and just follow the installation guide. It covers installation on W2000, W2003, W2008, W2008-64, XP, Vista-SP1, and Windows 7.
The best manual to use for installation is the riskwiki - as we update that first.
[[RM625ENT Installation Instructions]]
[[Category:RiskManager FAQ]]
<noinclude>{{BackLinks}}
</noinclude>
925da5dac920f31e89b1210c010dc79414be8e5c
Is there a feature listing for the BPC RiskManager windows client and the browser client?
0
314
442
441
2018-10-29T12:15:47Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Background=
You are looking at the possibility of using a mixed client environment based on user specific needs and where they are.
=Answer=
The BPC RiskManager browser client and the non-browser client are BOTH thick clients, while the dynamically generated BPC SurveyManager pages are pure HTML "clients". The browser based BPC RiskManager client is an MS Internet Explorer browser plug-in - like a Flash (tm) media player.
With respect to the two BPC RiskManager clients, both are EXACTLY the same application, just with a different wrapper. One is like a Flash plug-in for a browser, the other is a standard MS windows style executable - but below the wrapper they are the same program, they look the same and they behave the same.
To get different behaviours for different staff, you configure the rights of the staff, or the database to which the application talks. Data entry or enquiry-only staff simply do not have access to all capabilities (can't see them) or, on certain screens are in 'read-only' mode.
Many of your corporate staff known by the system are not going to be users of the BPC RiskManager primary client at all. These, typically, will be completing survey screens, compliance checklists, responding to or actioning emails sent by the system, etc. In these cases the BPC SurveyManager screens will be their primary interface - and those are pure web based HTML and JavaScript. These screens are generated dynamically through decisions you make in the RiskManager client concerning what a survey (e.g. a compliance checklist) contains, and who gets what survey, with what contents and when. There is no standard layout to these, as everything is dynamically constructed by the SurveyManager on a just-in-time basis - right before a page is displayed. Various wizards allow you to cause the survey framework to be generated from within the RiskManager client and determine the look of the web pages yourself.
The full feature list is huge, but a short list of the features available in the browser and non browser clients is available [[BPC RiskManager V6 Enterprise (Enrima Edition)|here]]
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
01f4714de60f1c98e710b76dedb4f539c92d600a
Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?
0
315
444
443
2018-10-29T12:15:47Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Licensing Philosophy=
There is an element of 'fair use' in the licensing models, and a little variation across countries to satisfy the particular market expectations of each country. Fee structures are based on your location.
Essentially we have to take into account the purpose of the installation - as this relates to maintenance and support. Your local office will discuss and agree the terms of the license arrangement for these more unusual configurations. We do not generally count training or testing installs in the license. We rely on your honesty and integrity and sense of fair play - recognising that we all have to be able to stay in business. In the Enterprise and Group licenses, we also allow additional desktop (single user) copies to be installed as long as the use is for the purposes of the licensing client's business.
=Licensing models=
Subject to local variations, the basic models are:
# Single User. License by user - but we usually allow a few people to connect to the desktop without breaching the license - except that in this case you then have to set the web edition flag and some of the simplicity of pure single user mode access is then lost.
# Small work group - By seat - usually restricted to 10 to 15 users.
# Enterprise - Unlimited users. - Licensed by production server and legal entities (includes test and training server licenses without extra charge), with fair use qualifications. Due to the large number of ways the system can be set up, there has to be an element of fair use here. For example, we allow for the survey engine to be on a separate server / server farm without charging any extra licenses, but if the application server is on multiple servers that would require additional server licenses (at a heavy discount). (Also: Read answer to this question: [[When are multiple BPC RIskManager server licenses required?]])
# Group - Unlimited users. - Licensed by group of entities. Unlimited production servers. Fee set per client. (Also: Read answer to this question: [[When are multiple BPC RIskManager server licenses required?]])
Some example scenarios might help clarify the licensing expectations for BPC RiskManager V6 in some more complicated hosting scenarios:
# There is one physical application server (i.e. essentially one motherboard - any number of CPUs) but many databases: this is a single server license.
# Multiple application servers (i.e. multiple blades or distinct servers) (and one or more shared databases). One server license per computer - but discounted depending on the nature of the use:
## If the application server is, in fact, on multiple servers and the entity is a group with distinct but connected companies, and each application server is set up as if it were a separate installation dedicated to separate entities in separately configured databases, then fair use (in our view) would say that these are separate licenses - although we would again significantly discount such an arrangement.
## If the hosting centre was hosting for multiple disconnected businesses (e.g. Government Departments, unrelated corporations, etc) - again fair use would dictate that these were separate installs with separate license, irrespective of the number of application servers involved - and a separate license would be required, but again discounted IRRESPECTIVE of whether there was only one massive physical server with many separate databases or many physical servers with only one or a few databases associated with each server.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
edafc045d7141761132875d0778fe0ae9aabe25b
I just purchased BPC RiskManager. Will you be sending the install disks, and when?
0
316
446
445
2018-10-29T12:15:47Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
We will be sending you a download link and then we will connect by phone to talk you through the install. It isn’t complicated.
Depending on your location and preference we can also send out a consultant to do the installation with you or for you, but there is normally an additional charge for this.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
5181561374ffc226d992ebe89874cfdfcb4f0c1c
Does the RiskManager plug-in itself have a certificate like a java applet does?
0
317
448
447
2018-10-29T12:15:48Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
Yes. The plug-in is signed with a VeriSign code certificate - so you only have to allow installation of signed ActiveX controls.
Remember, the browser plugin is only one of a number of clients available for RiskManager. You do not have to use the browser plugin if you prefer another connection method.
Instructions for configuring IE for browser plugins are available here: [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7]]
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
f08336967dfd90cb7adfcc9e47a7717900ed0b40
For support, what type of support is available (i.e.: email, phone, onsite, etc...)?
0
318
450
449
2018-10-29T12:15:48Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
All of the above. Generally email to us is the fastest - because it will be addressed somewhere in the world very quickly, and usually the issues involve some kind of exchange of information, and where appropriate (or you request) we will call you. Sometimes things just have to be done face-to-face so that is done in those cases. Most things can be done remotely, however.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
275db26f386a66ce995268b8ae8b4b60355fbcc1
How do I get custom features added, or request new features for BPC RiskManager?
0
319
452
451
2018-10-29T12:15:48Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Can I Request New Features & What Does It Cost?=
Yes - you can and are encouraged to request enhancements. We don't guarantee to embrace every suggestion, but we will certainly consider it. If you are happy to leave the decision to us to schedule them and slot them into the existing enhancements schedule, they will be included as part of your annual maintenance subscription. I.e. - no additional charge.
=What if I need the feature quickly, or I don't agree with your decision?=
If you need enhancements faster, or want to be sure they are included you can contract the development directly.
The only condition attached to contracted enhancements is that we reserve the exclusive right to decide to include the enhancement in the general code base that all clients enjoy. To date, 100% of contracted code enhancements have been included in the common code base. This is to your advantage as it ensures your application is not orphaned from the development stream.
See also: [[What support packages are available and at what cost?]]
=How do I request Features?=
The best method is to add the request directly to our team web site. There is a list of all known issues and enhancements planned that is maintained on our [http://team.bishopphillips.com/ http://team.bishopphillips.com/] website to which we and clients add items and track progress on development and release.
You can also just email the request to any BPC staff member, but preferably your allocated Bishop Phillips account manager contact. Phoning the request in is the least acceptable strategy, as there is every chance it will disappear into the ether.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
daf429f8614b4dcc43ce5361f50a760ed733b98e
What type of documentation, technical and user is available for BPC RiskManager?
0
320
454
453
2018-10-29T12:15:48Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
<ol>
<li> A very detailed installation and configuration manual (it assumes you know nothing about Windows, SQL Server or RiskManager) covering XP, Vista, W2000, W2003 and SQL Server 2000, SQL Server 2005 and SQL Express 2005 setups (approx. 80 pages).
<li> Structured installation manual on the riskwiki.
<li> A growing Bishop Phillips Consulting and client/user maintained riskwiki.
<li> Extensive programmer level documentation for:
<ul>
<li> Report Builder - the end user report building tool (approx. 135 pages) + Reference manual.
<li> PAXScript and ScripterStudio - the internal scripting languages.
<li> WorkFlow Studio - the internal workflow tool (this is being rewritten and updated). Note the WorkFlow Studio is a beta release at the moment, so the documentation does not yet have a fixed target to document. It should be production grade by January. The beta label refers to the fact that we have not yet sewn it through all the internal screens (because we are still deciding how best to use it - beyond merely documenting process flows), although all the database hooks are in place and the actual designer, engine and task manager are fully functional and in production ready state.
</ul>
<li> User help library (being upgraded for the new release).
<li> Example databases with extensive internal documentation (e.g. Standard & Poors - with the risk categories explained). All documentation is shipped with the system and will progressively appear on the riskwiki over the next few months. We currently deliver it as a mixture of Windows help, HTML and pdf documents.
</ol>
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
59ffc6a4f5c588e0375a732eb2b4b2943b592672
What is the difference between the browser plugin and the windows executable RiskManager client?
0
321
456
455
2018-10-29T12:15:48Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
In a nutshell, very little.
Bishop Phillips Consulting supplies a browser based and non-browser based client for BPC RiskManager. Both solutions are application server solutions (also called 3-tier application server) – not client-server.
There is no difference in functionality between the browser and non-browser version. The solutions differ in how the client component is served to the client computer desktop.
The browser based client is delivered as an IE 5/6/7/8 browser plug-in (like adobe reader or flash player) while the windows (non-browser) client resides on the user’s desktop (like word or excel). The main argument for using one over the other is that the browser based client is distributed simply by publishing it to a web server web page, while the windows client is distributed by copying it to the client computer. The interface is otherwise the same in both solutions. While the browser client is slightly simpler to distribute and update (just point your browser at the web site versus copy a single executable application to your computer.), it disconnects from the server when you close the web page on which it is hosted (just like adobe pdf reader), while the non-browser solution stays connected until you close it yourself (or the server side socket server times out the associated com object through inactivity).
The user interface is identical across both the browser and non-browser versions. We generally release updates to the non-browser client first, and in the coming releases the non-browser client will probably behave more like Outlook (existing as an icon in the system tray) when not actively in use. We tend to use the non-browser version ourselves as opposed to the browser version, but that really doesn't prove anything either way.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
a3821cee6e9be7faf2b87fae8c7429ef58896ffc
Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?
0
322
458
457
2018-10-29T12:15:48Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
BPC RiskManager V6.x is currently available only as an MS SQL 2000/2005/2008 server and MSDE 2000 / MS 2005 Express / MS 2008 Express solution.
BPC RiskManager Express is available as both an Oracle and SQL Server solution covering Oracle 8 through 10g and MS SQL 2000/2005/2008 server and MSDE 2000 / MS 2005 Express / MS 2008 Express. RiskManager Express has less functionality than RiskManager V6.x. The application in either case is developed on a database independent platform using an SQL Server test environment and then ported to Oracle where Oracle versions are available. With respect to RiskManager Express there is no difference in stability of the application attributable to the database engine.
You are encouraged to adopt the MS SQL Server database for RiskManager V6 (the version that otherwise suits your requirements). In the event that an Oracle RiskManager V6 release is essential for you, the database independence layer utilized in BPC RiskManager Express was carried through into V6.
In fact, internally, V6 still goes through the database check steps on start-up that are used in BPC RiskManager Express to determine the database on which it is running and apply the changes to the SQL queries that would otherwise be required to run on Oracle. Therefore, we could produce an Oracle 10g+ release with approximately 1 month’s notice. The original intention when V6 was built was to release both Oracle and SQL Server versions – which is why the database independence layer was preserved in V6, but every V6 client to-date has chosen to adopt the MS SQL server version so we have not been able to justify the development effort required for Oracle solution. RiskManager Express predates RiskManager V6 and does have a predominantly Oracle and Interbase user base.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
9b768242e396b46d1cb7fcbad8eea12934dea426
Database support: Which database choice will give us the best level of support?
0
323
460
459
2018-10-29T12:15:48Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
===Does the Choice of Database impact the level of support?===
No. All BPC RiskManager V6 systems use SQL Server 2000, 2005 or 2008. It is reasonable to expect that during the product version release life of V6 all future releases of MS SQL Server will also be supported. Most remaining RiskManager Express clients (those that have not upgraded to V6) use Oracle, but it also supports all current versions of MS SQL Server. BPC RiskManager V6 has considerably more data-elements than Express and is not backwards compatible with BPC RiskManager Express (RiskMan) databases.
All V6 customers should choose the latest possible version of the SQL Server database engine that has been released for at least 6 months. V5 customers may also choose Oracle 10 and 11 series databases but are strongly encouraged to choose MS SQL Server equivalents if available in your organisation as this is our primary development database platform.
Selection between the SQL Standard/Enterprise and SQL Express alternatives is entirely at your discretion and will be determined by your data volume and user connection needs. The selection of RiskManager V6 or RiskManager Express V5 does not dictate the version of SQL Server you install.
We maintain concurrent development tracks for both V6 and Express V5 systems.
===Does concurrent development of V6 and V5 (with its Oracle user base) impact support with respect to database version?===
This is a good question. At this point no – because RM 6 clients are all SQL Server and Express Clients are virtually all Oracle (and include some of our oldest, most loyal clients).
Updates for BPC RiskManager Express V5 are released on Oracle and SQL Server concurrently.
Going forward (assuming you request, or we decide, to release an Oracle version for V6)…the honest answer is yes and no. I expect we will always develop on SQL Server for V6 and future versions (although this depends on which system has the larger client base), and release beta versions on SQL Server. We will then release the Oracle port of the same solution (this may be only a week apart – but the order will most likely be SQL Server first).
Once a BPC RiskManager V6 Oracle version is in production there will be no difference in support, appearance or capabilities of RiskManager V6 on Oracle versus SQL Server. The current release of the application server can talk simultaneously with databases from multiple database servers all running different models and versions of database engines as long as it has an appropriate available ADO Driver library.
===Does (or would) the choice of database impact the system capabilities?===
No - aside from the obvious fact that Oracle is not available as a current choice for V6 (but is for V5 Express). In the event that additional brands of database engines were adopted for V6, the user and administration experience would be identical across all databases.
The client and business logic are separated from the database layer using a three stage database virtualization layer in both V6 and Express:
# The lowest is MS ADO, which provides a common database interface layer in terms of database connectivity.
# Classic areas of incompatibility across databases lie in the use of identity (auto-incrementing) data fields, which are supported in SQL Server but not in Oracle. We do not use them; instead we have reproduced that functionality using triggers, which are database independent, and maintain our own auto-increment field table. Another classic area is the syntax of table joins, which we handle through a preprocessing layer that automatically adjusts the join syntax depending on the database. Even though V6 currently only deals with SQL Server, it still applies this step to join syntax.
# Lastly the data manipulation and multi-user data integrity reconciliation is handled in the application layer and records are reconciled at the field level (rather than row level at the database level) so if two users update the same record but different fields the reconciliation layer is smart enough to generally work out the correct combined update.
These methods were all developed for RM version 2 which had a mixed Oracle and SQL Server client base. Hence the brand of database has very little impact on the operation of the system, nor the skills required of the support team. Database specific issues are almost never the cause of support related issues.
===If cross database support is so easy now why do you anticipate 2 months to release an Oracle version?===
Essentially for the following reasons:
* The use of blobs is considerably greater in RM6 than in Express including a few places where multiple blob fields are present in the one table. Blobs are traditionally handled differently across the different database brands and require specific attention to ensure correct operation.
* The use of dynamically created SQL statements is greater in RM6, and the probability that some of those SQL statements are not passed through the syntax standardisation layer is higher than otherwise. As this layer adjusts between join types, statements that should have been passed through but were not will simply fail at the syntax level rather than work incorrectly. In any case, all database interaction is held in only a few code modules, so it is a reasonably mechanical process to check and fix.
* There are potentially some SQL constructs used dynamically that must be presented differently in Oracle
* There are many more stored procedures, and some complex structures like NSTree generators and recursive association tree walkers that may have to be reconceived, or for which there are built-in capabilities in current Oracle systems that should be used instead.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
01314612ba40e01ec405714e0807c89ffbe611e0
What is the best client version - the browser or non browser Risk Manager client?
0
324
462
461
2018-10-29T12:15:49Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
This is a tricky issue. Generally we would recommend the windows non-browser version, unless you have a large number of users and want to be able to distribute the client component in an ‘on-demand’ manner.
Why? - Simply because there is one less 'moving part' and resizing of the application can be applied to the base (main) window. An alternate argument is that where there are multiple databases, the browser client offers an advantage, because you can easily list the various connections on the hosting web page, and specifically secure access to the web page as part of your access model. Further, the web page offers a simple "single point of publication" distribution system which instantly delivers the latest version to all users.
Like we said - there are really good arguments for both versions. Most larger clients use both versions, with the majority of users using the web browser version. Smaller clients tend to use the windows version.
On the face of it, the browser plug-in would be best in a situation involving a large, diverse or geographically spread user base, as it will look after distribution, and updating itself on client computers. Note, however, the comment following:
The plug-in component is a self-registering VeriSign-signed ActiveX which does not write back to the web page hosting it, nor does it respond to scripted instructions sent externally to the plug-in (i.e. it is a self-contained black box). The default version contains a separate MIDAS library which it installs and writes to the client computer as part of the registration process, but we can replace this with an internal version of the library if needed. The non-browser version uses an internal version of the library, and is therefore installed merely by copying it to any place on the client computer to which a user has write access. Both the browser and non-browser clients write user preference information to the registry and work under Vista with UAC enabled. Some lockdown configurations of client computers can (obviously) prevent the plug-in from registering, and in those cases the non-browser client is the better choice as it does not need to register itself.
ActiveX plug-ins are not supported in the latest release of FireFox (although earlier releases are fine). Clients using the latest release of FireFox are advised to use the non-browser client.
In version RiskManager 2.5 and above, both the browser and non-browser clients use an internal com wrapper for IE (any version 5+ available on the client computer) to allow embedded document, procedure manuals and other links such as team web sites to be displayed integrated with the risk information on certain tabs. The capability is not critical to the operation of the application, but significant for some forms of user experience. In the event that IE has been stripped from the client computer, this capability will not work. If this proves to be a critical issue, we can produce versions that either use FireFox in that role or use our own HTML display engine, but in the latter case JavaScript support for the displayed pages will be lost.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
b437550a2aeb1215294c4e4a670833dc4889c19c
BPC RiskManager Server - After installing in production or adding an application server
0
325
464
463
2018-10-29T12:15:50Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Background=
You have an existing installation and configured database and you have just either added an extra application server to an existing database or ported from test/dev into production. You ran the auto-installer successfully and tried to connect to the new application server with a client, and Risk Manager has rejected your login user ID - but you know your user name and password are correct.
You have seen a message that looks like this:
[[Image:RMLoginFail.jpg]]
=Answer=
That is essentially what it looks like. The application is ok, you are talking successfully to the server and the server has successfully tested the login user ID and password and rejected it as being wrong. So there is nothing wrong with the install, per se.
Now the question is: how did you move it to production? What is your chosen authentication method? Have you set the authentication method on the server?
Assuming you are using the auto-installer and that you are using the most common security option - where the application manages the security itself....
On running the auto-installer on a new server, the installer will install the server in single user mode (General tab on the application server), with the trusted signon shared login role = Administrator (RM Security tab on the application server). This is to facilitate creation of a new account on first-time installation. In this case, you already have the accounts in the database, so you need to switch the system into managing its own security.
So you need to:
<ol>
<li> Login as administrator to the application server computer
<li> Start the application server from the start menu on the server (“BPC RiskManager DataServer V6”)
<li> Double click on the green disk in the system tray. The Riskmanager DataServer management console will open.
<li> Click on the General tab.
<li> Change the “Risk Manager Edition” to “Web Edition”.
<br>[[Image:RMDS GP2.png]]
<li> Save settings.
<li> Click on the “RM Security” tab
<br>[[Image:RMDS GP10.png]]
<li> Switch the login role to “Assign access in application (Login Not trusted)”
<li> Switch the “Option to Assign Secure Identification” to “Use client user name only”
<li> Save settings.
<li> Click on “End Process” (bottom of the window)
<li> Attempt to login again.
</ol>
Obviously, in steps 7 - 10 you set the security model to whatever model you are actually using. For instructions on the various models and settings go to:
* [[Security Configuration - Update Installation and Reset]]
Lastly, did you follow the steps recommended for migrating from test to production?
* [[Steps For Migrating RiskManager V6.x from Test To Production]]
If you still have problems send us an email to the support email address, providing a phone number we can call you on - and when you would like to be called. (oh..and make sure your maintenance/subscription fee is current :) )
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
88696fb078d5bdca852feb61666fd22e8bfdf6b0
BPC RiskManager Frequently Asked Questions
0
5
466
302
2018-10-29T12:15:50Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
{|width="100%"
|width="60%" VALIGN="Top" |
# [[How do I get a copy of BPC RiskManager V6.2.5?]]
# [[Would it be possible to get a copy of the BPC RiskManager V6 installation guide?]]
# [[Is there a feature listing for the BPC RiskManager windows client and the browser client?]] We are looking at the possibility of using a mixed client environment based on user specific needs and where they are.
# [[When are multiple BPC RIskManager server licenses required?|When are multiple BPC RiskManager server licenses required?]] We are looking to have RM implemented across a group of companies. They will all be using the same instance with the same fields and definitions, as the subject matter is the same. Can we use a single server license or will we require multiple server licenses?
# [[Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?]]
# [[Does your license include the cost of MS SQL Server ?]]
# [[I just purchased BPC RiskManager. Will you be sending the install disks, and when?]]
# [[What will need to be arranged prior to the installing BPC RiskManager?|What will need to be arranged prior to installing BPC RiskManager?]]
# [[Does the RiskManager client application work with FireFox browsers?]]
# [[In what programming language is BPC RiskManager written?]]
# [[Does the RiskManager plug-in itself have a certificate like a java applet does?]]
# [[For support, what type of support is available (i.e.: email, phone, onsite, etc...)?]]
# [[What is the best way to get support?]]
# [[How do I arrange installation support and what is the timeline?]]
# [[What support packages are available and at what cost?]]
# [[Is there a cost associated with telephone support (i.e.: cost per call or issue)?]]
# [[How do I get custom features added, or request new features for BPC RiskManager?]]
# [[Is there a User Group Forum?]]
# [[What type of documentation, technical and user is available for BPC RiskManager?]]
# [[How does one decide the optimum BPC RiskManager configuration?]]
# [[Is BPC RiskManager a Client-Server application?]]
# [[What is the difference between the browser plugin and the windows executable RiskManager client?]]
# [[Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?]]
# [[Database support: Which database choice will give us the best level of support?]]
# [[Security: What is the most secure architecture for BPC RiskManager?]]
# [[What is the best client version - the browser or non browser Risk Manager client?]]
# [[What admin account rights are required to setup a browser plug-in?]]
# [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7|How do I configure IE for the RiskManager browser plugin?]]
# [[BPC RiskManager Server - After installing in production or adding an application server|We just ported our enterprise system to a new server and I can't login. What do I do now?]]
# [[Steps For Migrating RiskManager V6.x from Test To Production|How do I port BPC RiskManager from test (or dev) to production?]]
# [[BPC RiskManager V6 on 64 bit Windows|How do I install BPC RiskManager onto a computer running a 64bit Windows OS?]]
| VALIGN="Top"|
<noinclude>
{|align="right" width="100%" cellpadding="10px"
|- style="background-color:#FFEBCD; " width="100%"
|'''A Frequently Asked Question is...'''
|-
|<div class="didyouknow2" STYLE="height: 600px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=3000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;"></div>
|}
</noinclude>
|}
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:Bishop Phillips Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinksCategoryHead|CT=RiskManager FAQ|CN=The frequently asked Questions Category}}
</noinclude>
25cfdeccbd4a292afa2715e0cff010008b205d54
Is there a cost associated with telephone support (i.e.: cost per call or issue)?
0
326
468
467
2018-10-29T12:15:50Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Answer==
No - if your annual maintenance subscription is paid and active, phone support for IT technical issues and software operational problems is covered. With a new license, or while evaluating the software, initial installation help is provided free. Your maintenance subscription also covers phone support for re-installs, software usage strategy and a reasonable volume of 'how-to' questions. If a significant volume of assistance is required, or for general risk management and consulting support, there may be separate charges, so talk to us and, if necessary, we will propose a modest quote.
[[Category:RiskManager FAQ]]
<noinclude>
{{BackLinks}}
</noinclude>
48137d0fda8fb2ee3d54def68dd80520bde62af1
BPC RiskManager Software Suite
0
3
470
340
2018-10-29T12:17:43Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=BPC RiskManager Software Suite - Risk, Compliance and Certification=
The BPC RiskManager Software suite is an enterprise-grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release, with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) was first released in 2008. The latest version was released in 2011. It is updated continuously throughout the year and, as a client, you are encouraged to participate actively in setting the development direction.
The Enrima edition of BPC RiskManager is a single-user and multi-user risk management, compliance management, financial statements certification, insurance, survey, and incidents & hazards system, all in one application. You can manage multiple organisations and simultaneously view governance issues as risks, compliance obligations (legislation, processes and procedures) and compliance topics. It manages email-based reminders for a large variety of user expectations internally.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
{|width=100%
|-
|
* BPC RiskManager V5 (Express)
|[[image:BPCRiskManagerExpressV5.jpg]]
|-
|
* BPC RiskManager V6 (Enrima Edition)
|[[image:BPC_RiskManager_V6261_Main_Screen.jpg|600px]]
|}
=Client Base=
BPC RiskManager clients are headquartered in Australia, Canada, the United Kingdom and the United States of America. Global clients, of course, have offices in many other countries. [http://www.bishopphillips.com Bishop Phillips Consulting] has local offices in both Australia and North America.
The system is used extensively in the education sector, with a very substantial presence in universities in both Australia and Canada and in commercial education providers and colleges in the USA. Other significant client groups include insurance providers (both primary insurers and reinsurers), central government agencies (such as federal & state/province departments and local government), and utilities such as postal, electrical and water utilities.
BPC RiskManager implements and substantially extends the Risk Management Standards "AS/NZS 4360:2004 Risk Management" and "ISO 31000", and complies with "ISO/IEC Guide 73 - Risk Management - Vocabulary".
The RiskManager is not restricted to merely following one interpretation of the risk standards. As a consequence of its long market history, BPC RiskManager implements a large number of divergent risk management methodologies or methods. Any combination of one to three assessment groups, each containing ratings for likelihood, consequence and control, is possible. For example, some clients use a risk management methodology that utilises risk budgets with three rating groups, "Inherent, Residual and Target", where inherent ratings shift with external factors and target shifts with the corporate risk appetite (ie a risk budget), while the residual floats according to assessment ratings.
Any number of self assessments in each group can be maintained together with a separate family of assessments and remediations created by audit/expert that coexist with management's risk assessments.
Whether your preferred risk methodology uses quantification (quantitative risk analysis) or qualification (qualitative risk analysis), BPC RiskManager directly supports the approach on a per assessment basis. Terminology (including field names, purposes and screen captions) is fully customisable, so the system can directly implement the corporate risk methodology / risk method.
=Get a Fully Functional Evaluation Copy of BPC RiskManager for FREE=
You can get a free no-obligation fully functional copy of BPC RiskManager (Enrima Edition) simply by completing the request form here:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php I want to evaluate BPC RiskManager without obligation for free, please.]
It will work for 60 days, and if you need more time you can contact us and request a longer evaluation. There are no limitations in the evaluation version, and we will even give you support for free while you get it running. It is fully self-installing and will open your first risk database when the installer finishes.
If it isn't right for you, you can just uninstall after the 60 days with no further obligation to us.
=Knowledge Base=
*[[BPC RiskManager V6 Enterprise (Enrima Edition)]]
** [[BPC RiskManager V6 Enterprise (Enrima Edition)| BPC RiskManager Features]]
** [[BPC RiskManager V6.2 Network Architecture]]
** [[RM625ENT Installation Instructions|BPC RiskManager V6.2.5 Installation Instructions]]
** [[BPC RiskManager Frequently Asked Questions|BPC RiskManager - Frequently Asked Questions]]
** [[BPC RiskManager Quick Help With Common Tasks]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
** [[BPC RiskManager V6 on 64 bit Windows]]
*[[BPC SurveyManager - Overview]]
** [[BPC Surveymanager - Key Features]]
** [[BPC SurveyManager - Introduction]]
** [[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
** [[BPC SurveyManager - Creating Surveys - The Page Script]]
** [[BPC SurveyManager - Questions and Input Controls]]
** [[BPC SurveyManager - Creating Surveys - Properties]]
** [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
** [[BPC SurveyManager - The Built In Reports]]
** [[BPC SurveyManager - Advanced Database Configuration Settings]]
** [[BPC SurveyManager - Client Overview]]
** [[BPC SurveyManager - Tutorials - Survey Layouts]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
<noinclude>
[[Category:Featured Article]]
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
dc7ccfc5f7d790cb2dd0c17b50cdde25c14ee35b
Risk Management - Introduction
0
293
480
388
2018-10-29T12:17:44Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==What Is Risk Management?==
===Risks, Causes & Consequences===
Risks to your operations and assets are a permanent and inescapable aspect of existence. Put simply, if you have an objective, the possibility exists that your objective may not be achieved. That possibility is risk.
Inputs required for your objective may not be available when required, their cost may make the objective unviable, the social or technical assumptions may be invalidated, etc. These are threats, or causes of objective failure, and therefore causes of risk. Threats exist - some latent and some active - but all are potential causes (with varying likelihoods) of failure to achieve your objective.
Further, failure to achieve the objective or preserve the asset may have impacts far beyond the loss of the expected benefit or the value of the asset lost. Those impacts are the consequences. For example, at the individual business level, failure to achieve a strategic objective may result in the failure of the business, while on the international stage, failure to achieve a diplomatic objective may detrimentally impact society for generations to come, and failure to protect a critical military or hazardous-materials technology may result in extensive loss of life.
Lastly, a risk may not be a bad thing - it might be a good thing, more commonly known as "an opportunity". Likewise, an impact may not range merely from "nothing" to "really bad", but from "really good" through "nothing" to "really bad". In its fullest extent risk management covers both opportunities and exposures. Most of the following discussion will consider risk management in its more common guise of managing exposures, but when we consider "Competitive Risk Management" we will once again expand the definition.
<br>
===Risk Appetite===
The degree to which these undesired outcomes are more or less certain will affect your degree of concern about them. At the extreme ends, everybody may have pretty much the same response: an undesired outcome that is virtually certain to occur will probably be judged unacceptable, while an undesired outcome that is virtually certain not to occur will probably be judged acceptable. Between these extremes, each individual, organisation and society will have differing determinations of acceptability. This determination is also likely to vary with the nature of the undesired outcome (for example, a 50% chance of the loss of thousands of lives is generally considered less acceptable than a 50% chance of the loss of ten dollars). This variance in judgement is the risk appetite - literally, your or your organisation's willingness to passively accept the possibility of a particular type of undesired outcome.
===Risk Response, Mitigation and Control===
The reactive leader, when faced with changed circumstances, will rapidly form a response. These responses are designed to minimise the consequences of the threat event and are risk mitigation actions, or risk treatments. Of course, some responses (like avoidance or insurance) are by this time out of the question, as the threat has materialised. Faced with too many changes, or too big a change in circumstances, even the most responsive leader can be overwhelmed, and the process fails with the objective not achieved.
A wise leader then (at least) learns from experience, and establishes processes to minimise the likelihood of similar threat events occurring (prevention), to detect them when they occur (detection), and to respond immediately and mitigate the consequences when they occur regardless (correction). These preplanned and pre-established processes of prevention, detection and correction are controls.
===Rating a Risk===
All controls have a cost - whether measured in money, time, tactical advantage, etc. Too much control may make the achievement of the objective unviable. The leader may judge that some threats experienced are unlikely to occur again (for example, Year 2000 date risk was a one-off, as the year 2000 is unlikely to occur again in this timeline!). Other threats will be considered almost certain - such as a sunny day melting an unrefrigerated cargo of ice cream. The probability that a threat will eventuate is its likelihood. Where the likelihood is very low, the leader may judge it is not worth the cost of controlling.
Likewise, some consequences of threat events are so minor that they can be ignored, while others are catastrophic to the objective. This judgement is the impact rating of the consequence.
The likelihood of a threat event, combined with its level of impact on the objective's achievement, constitutes the inherent risk to the achievement of the objective.
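The combination of likelihood and impact into an inherent risk rating is often implemented as a simple rating matrix. The sketch below is a minimal Python illustration of the idea; the five-point scale labels, the multiplicative score and the band thresholds are illustrative assumptions, not BPC RiskManager's own configuration.

```python
# Illustrative qualitative risk matrix: likelihood x impact -> risk band.
# Scale labels and band thresholds are assumptions for this sketch.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
IMPACT = ["negligible", "minor", "moderate", "major", "catastrophic"]

def inherent_risk(likelihood: str, impact: str) -> str:
    """Combine a likelihood rating and an impact rating into a risk band."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (IMPACT.index(impact) + 1)
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"
```

Under these assumed thresholds, a "likely" threat with a "major" impact scores 16 and lands in the "extreme" band.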
Although not yet part of the standard, over recent years an additional rating parameter has been argued for consideration: "Velocity". The velocity of a risk is the speed with which a causal event translates into an outcome. Velocity is rated inversely against time, so the shorter the time it takes for a causal event to result in a specific impact, the higher the velocity.
Conversely, if we are going to consider a time-based measure for the onset of a risk event, we should allow for a velocity measure on the mitigation side of the equation. Here we have two types to consider: pre-event controls (such as training and documented manuals) have a velocity measure that acts during a different phase from that during which the impact velocity is measured. The control velocities of specific interest in mitigating impact velocity are those of the reactive controls - Event (or Error) Detection and Event (or Error) Correction controls.
<blockquote>
'''NOTE:''' Controls fall into one of three groups - Prevention, Detection and Correction. The first group identifies proactive controls (although some control steps in a given strategy of controls may be reactive even here), while the latter two describe purely reactive controls. Note that under this view the process of setting up a reactive control system and training the participants and systems in the operation of that control is itself a proactive step, and hence a Preventive control, while the operation of the actual control itself is, relative to the triggering causal event, reactive.
</blockquote>
A similar case may, on the face of it, be advanced for direct estimation of Risk Frequency. Specifically, such a measure is one of the frequency of a causal event, with an assessed likelihood of triggering at each cycle. The amount of time required for a single cycle from Causal Event A<sub>0</sub> to the next potential occurrence of Causal Event A at time 1, i.e. A<sub>1</sub>, is the velocity of the likelihood of a causal event being once again tested. On this basis we could again track the velocity of the likelihood.
A reasonably strong case might also be advanced that likelihood measures carry an implied frequency measurement, as people tend to rate things as more certain to occur if they are almost always occurring than when rarely experienced, even if the causal event actually occurs on those rare occasions. In this case it is argued that rating likelihood velocity in fact double-weights the likelihood rating.
This author leans to the former view. If we are separating some velocities from their coupled ratings, we should consistently apply the logic of separation to them all. On that basis the probability or reliability estimates are consistently cleansed of time subjectivity, and thence become an instantaneous rating rather than a multi-period rating of the probability, impact or dampening (control mitigation rating). In database design terms, the rating measures are normalised with respect to time. The obvious benefit is that the greater the consistency among the properties (functional and data), if not the content of those properties, the greater the reliability with which the items can be combined to give a result that varies consistently with its inputs (in this case a risk rating). If some of the inputs are themselves functions of other inputs (such as time), the result of combining the various components of the risk formula will not appear to move consistently with the inputs.
A further benefit of separating velocity information is the colour it might bring to the risk analysis. One can picture a risk model where assessing an otherwise well-rated risk on the basis of likelihood velocity (think: frequency), impact velocity (think: "How quickly will this hit us?"), preventive control velocity (think: "How long will it take for the training to be completed?"), detection control velocity (think: "How quickly will we know that the wheels have fallen off?") and correction control velocity (think: "How quickly will we have cleaned up the mess?") might reveal some fascinating structural problems in a control system. Consider a 12-month wait for detection controls to be put in place against a high-to-medium impact event happening every week, where those detection controls then tell us only at the end of a quarter that a problem occurred which will take 6 months to fix. We would want to know all this, even though individually all these controls got the highest ratings in terms of effectiveness. Of course, if our risk formula dealt with these items properly as part of its model, we would not have a well-rated risk with such problems!
Expressed as a formula where f() means a function of the items in parentheses, the risk equation with all these potential inputs is then:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(C<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;C
:Means Mitigating Strategies and Controls effectiveness rating mitigating causal events and consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for mitigating each impact and possibly some to all causal events
This formula says nothing more than that the risk rating is a function of eight variables: whole-of-risk likelihood, likelihood velocity, impact and impact velocity, mitigated by whole-of-risk control effectiveness-reliability working over three velocities - prevention control velocity, detection control velocity and correction control velocity. In turn, the value supplied for each of these ratings is itself a function mapping the assessed value of the rating to a normalised value (such as the range of reals from -1 to 1, or a shared 5 point scale, etc.).
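As a concrete reading of the eight-variable formula above, the sketch below combines normalised ratings in Python. The combination rule (a time-weighted likelihood-impact exposure, dampened by control effectiveness scaled by the average control velocity) is one illustrative choice of f, not the formula BPC RiskManager itself applies.

```python
# Sketch of R_i = f(L, LV, I, IV, C, CPV, CDV, CCV) for one risk i.
# All inputs are assumed pre-normalised to the range [0, 1].

def risk_rating(L, LV, I, IV, C, CPV, CDV, CCV):
    exposure = (L * LV) * (I * IV)             # time-weighted likelihood and impact
    control_velocity = (CPV + CDV + CCV) / 3   # average responsiveness over the three phases
    mitigation = C * control_velocity          # effective control, dampened if controls act slowly
    return exposure * (1.0 - mitigation)       # residual rating after mitigation
```

With no effective controls (C = 0) the rating equals the raw exposure; with fully effective, fully responsive controls it falls to zero.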
The weakness in this formula lies in the consolidation of the three control groups into a single control rating for the purposes of the risk function itself (thus hiding the relationship between the control group velocities and the control group ratings). Splitting the control rating into its three groups gives:
R<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for mitigating each impact and possibly some to all causal events
<br>
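The point of the split formula is that each control group's rating can be paired with its own velocity. A minimal sketch of that pairing, again using illustrative inputs normalised to [0, 1] and an assumed combination rule:

```python
# Sketch of the split form: CP, CD, CC each dampened by its own velocity,
# so a strong but slow control group contributes less mitigation.
# Inputs are assumed pre-normalised to [0, 1]; the rule is illustrative.

def risk_rating_split(L, LV, I, IV, CP, CD, CC, CPV, CDV, CCV):
    exposure = (L * LV) * (I * IV)
    mitigation = (CP * CPV + CD * CDV + CC * CCV) / 3
    return exposure * (1.0 - mitigation)
```

Here a perfectly rated detective control (CD = 1) with zero velocity (CDV = 0) adds nothing to mitigation, which is exactly the structural problem the velocity ratings are meant to expose.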
===From Risk Response To Risk Management===
Faced with a similar objective at another time, the prudent leader moves from re-action to pre-action. He applies his own and others' past experience, "common sense", and deductive reasoning when identifying the nature and causes of potential threat events and their consequences. He makes judgements as to the likelihood of these identified threats, and judgements as to the degree of impact arising from the consequences. This process is risk identification and assessment.
Comparing this assessment to the organisation's risk appetite, he determines a range of risk responses, treatments or controls. With the shift to pre-action (more commonly described as proactive management), the leader's options are widened when compared to the earlier reactive state. By preplanning the risk profile he is able to consider avoidance (just don't do it!), risk sharing (insurance) and threat prevention (training) as options in the risk mitigation armoury. Further, the costs of each mitigation strategy can be considered against the benefits expected from the achievement of the objective, and the most effective, efficient and economic ones chosen.
In all cases a threat has a "tell-tale" which can be used to detect that the threat has eventuated, or that the likelihood of a threat has changed. The controls required in this case are detection controls. As with the other pre-action choices, detective controls are most advantageous before the event occurs - once it has occurred they will generally tell you what you already know. This shifting assessment of risk, based on changes in likelihood over time, is the current risk.
Implementing detection controls allows the leader to defer the implementation (if not the planning, design and establishment) of other reactive controls, thus delivering a degree of certainty over the costs of mitigation at each point in a project, under a variety of circumstances and levels of current risk.
Once the controls (or risk mitigation plan) are applied to the assessed inherent risk of the objective, the result is the residual risk - that portion of the inherent or current risk that remains after the controls have been applied.
Risk Management is about applying a structured thought process to identifying and managing such risks.
In one form or another, every leader undertakes risk management from the minute a political ideology, manifesto, business vision, organisational mission, or business or political objective is established. Without a plan - however loosely defined - the objective is unlikely to be achieved. That plan is a map for managing the risk of non-achievement of the objective, starting with the most obvious risk: "inaction".
While Compliance Management is a governance process for managing adherence to internally and externally defined standards, policies, procedures and controls, Risk Management is an approach to governance that aims to identify what plans, standards, policies, procedures and controls are required, how important each part is to the purpose, and when you will know which additional actions are required. Risk Management is a systematic process of making a realistic evaluation of the true level of risk to your purpose, and of mitigating those risks that exceed your risk appetite in the most efficient, effective and economic manner possible.
==What Is Enterprise Risk Management?==
Enterprise Risk Management takes the concepts outlined at the project or single-objective level described above and applies them across the enterprise, government, or society (as appropriate). It distinguishes itself from project risk management by its aims:
* Firstly, it aims to reduce duplication of risk management planning and risk mitigation strategies by facilitating cross-organisational sharing of control frameworks, management expertise, and resources.
* Secondly, it aims to minimise contradictory, counter-productive and mutually exclusive risk management strategies by facilitating enterprise-wide knowledge of the risk profile of the organisation.
* Thirdly, it aims to inform the governance team of their true organisation-wide position on a continuous and instantaneous basis.
* Fourthly, it aims to forecast the risk profile of the organisation within, at least, the decision cycle of the governance team.
==What is Competitive Risk Management?==
So far, we have considered risk management as a stability governance tool assisting the achievement of identified objectives. In essence, under this view it is a defensive strategy. The scope of governance arguably extends beyond maintenance of environmental stability and achievement of defined near-term deadlines and objectives, to the identification of the correct objectives (those that succeed on some measure) and of longer term aspirational objectives such as "more profit" or, in social measures, "higher average literacy".
This shift implies that two additional dimensions should be considered:
#A risk may also be an opportunity, and an impact may be both positive and negative. Where the impact is positive for the organisation, the correct corrective control response is in fact to augment the effect (for example, by adjusting the causal states of other risks (opportunities)). The overall implication is that, to accommodate opportunity, the risk rating scheme needs to be balanced around 0 (meaning minimum risk and minimum opportunity). Whether this is best done with a positive scale and a negative scale, or with a linear scale with a floating normal line, is, I think, an implementation question at this stage.
#A risk/opportunity may have a group of controls (strategies) intended both to mitigate (Prevent, Detect, Correct) and to augment (Focus, Sense, Enable) it in some way. Note that we are expanding our control groups from three to six. This is necessary where two impact rating scales are used (an opportunity scale and an impact scale). If only a single monotonic impact scale were used, e.g. "really-good to negligible to really-bad", we could possibly escape with four groups: Focus, Prevent, Detect, Correct. Focus is the opportunity's version of Prevent. The difference is that, in the case of a risk, an effective preventive control reduces the residual likelihood (if not the inherent likelihood) of a causal event, while in an opportunity we want precisely the opposite outcome. Thus we need to track these separately. In the two-scale system we need the "opportunity" equivalents of the detection and correction control functions separated as well.
In competitive risk management we utilise the techniques of "defensive" risk management to inform competitive strategy. The same methods that are applied to determine and manage or avoid your risks can be applied to:
#determine, induce and exploit your opportunities, and select the opportunities most likely to be successfully exploited; and
#determine and trigger your competitor's risks, and identify where they are either most exposed, or where their responsive mitigation costs will be greatest. In this use there is an implied additional measure-countermeasure relationship between controls, where an augmentation strategy is defined that is designed to detect or counter another mitigation strategy.
In competitive risk management we therefore look to identify and exploit our opportunities and the weaknesses of others through the application of risk management techniques. Such an application of the method is likely to be most effective where knowledge of the competitor or competing industry approaches perfection, and the accuracy of the model used approaches perfect accuracy. There are interesting implications for game theory where all participants in a market use equivalently competitive risk management methods and have equivalently perfect knowledge.
Competitive risk management is therefore a strategy-setting process. In both cases the analysis expands the colour of the control analysis part of the formula described in the previous section. Specifically, the changes required are to accommodate additional ratings and velocities so that risk and opportunity can be treated in a single function (e.g. one possibly describing a parabolic or logarithmic curve as its output).
Our revised formula for competitive risk then becomes:
RO<sub>i</sub> = f( f(L<sub>i</sub>), f(LV<sub>i</sub>), f(I<sub>i</sub>), f(IV<sub>i</sub>), f(CF<sub>i</sub>), f(CS<sub>i</sub>), f(CE<sub>i</sub>), f(CP<sub>i</sub>), f(CD<sub>i</sub>), f(CC<sub>i</sub>), f(CFV<sub>i</sub>), f(CSV<sub>i</sub>), f(CEV<sub>i</sub>), f(CPV<sub>i</sub>), f(CDV<sub>i</sub>), f(CCV<sub>i</sub>) )
where:
;RO
:is expressed on a single scale such as "really-good to negligible to really-bad", or as complex numbers with two scales: a rating (high to negligible) and a binary (two-position) scale - "Opportunity or Risk"
;i
:Represents an individual risk
;L
:Means Likelihood Rating for each cause
;I
:Means Impact Rating for each impact
;CP
:Means Mitigating Strategies and Controls effectiveness rating at preventing causal events.
;CD
:Means Mitigating Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CC
:Means Mitigating Strategies and Controls effectiveness rating for reducing the likelihood of further causal events and mitigating consequential impacts.
;CF
:Means Enabling Strategies and Controls effectiveness rating at focussing causal events.
;CS
:Means Enabling Strategies and Controls effectiveness rating at detecting causal events and consequential impacts.
;CE
:Means Enabling Strategies and Controls effectiveness rating for increasing the likelihood of further causal events and enabling consequential impacts.
;LV
:Means Likelihood Velocity Rating for each causal event
;IV
:Means Impact Velocity Rating for each impact
;CFV
:Means Focus Control Velocity Rating for each causal event
;CSV
:Means Sensing Control Velocity Rating for each causal event and possibly some to all impacts
;CEV
:Means Enabling Control Velocity Rating for each enabling control enabling impacts and possibly some to all causal events
;CPV
:Means Preventive Control Velocity Rating for each causal event
;CDV
:Means Detective Control Velocity Rating for each causal event and possibly some to all impacts
;CCV
:Means Corrective Control Velocity Rating for each mitigating control for all impacts and possibly mitigating some to all causal events
<br>
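As an illustration only, the revised formula might be sketched in Python. The document leaves each f() undefined, so the linear weighting below and the 0..1 scales are assumptions of this sketch, and the velocity terms (LV, IV, and the C*V ratings) are omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class RiskOpportunity:
    """Component ratings for a single risk/opportunity i (all on assumed 0..1 scales)."""
    likelihood: float   # L  - aggregated likelihood rating of the causal events
    impact: float       # I  - impact rating; positive = opportunity, negative = risk
    prevent: float      # CP - preventive control effectiveness
    detect: float       # CD - detective control effectiveness
    correct: float      # CC - corrective control effectiveness
    focus: float        # CF - focusing (opportunity) control effectiveness
    sense: float        # CS - sensing control effectiveness
    enable: float       # CE - enabling control effectiveness

def rate(ro: RiskOpportunity) -> float:
    """Illustrative f(): mitigating controls shrink a negative exposure, enabling
    controls amplify a positive one; 0 means minimum risk AND minimum opportunity,
    matching the scale balanced around zero described above."""
    exposure = ro.likelihood * ro.impact
    if exposure < 0:  # risk side: Prevent/Detect/Correct reduce the magnitude
        mitigation = 1 - (ro.prevent + ro.detect + ro.correct) / 3
        return exposure * mitigation
    # opportunity side: Focus/Sense/Enable increase the magnitude
    augmentation = 1 + (ro.focus + ro.sense + ro.enable) / 3
    return exposure * augmentation

# A well-controlled risk and a well-enabled opportunity:
risk = RiskOpportunity(0.8, -0.5, 0.9, 0.6, 0.6, 0, 0, 0)
opp  = RiskOpportunity(0.8,  0.5, 0, 0, 0, 0.9, 0.6, 0.6)
print(rate(risk))  # residual rating, closer to 0 than the inherent exposure
print(rate(opp))   # augmented rating, larger than the inherent exposure
```

The single signed output corresponds to the "really-good to negligible to really-bad" scale; a two-scale implementation would instead return a (rating, Opportunity-or-Risk) pair.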
==The Evolution of the Risk Management Standard==
In Australia, a team of experienced risk management practitioners was assembled over two decades to codify a standard for risk management as it had been (and was being) developed and deployed in Australia and New Zealand. That codification was initially released by Standards Australia as AS/NZS 4360:1995, revised as AS/NZS 4360:1999 and revised again in its current version as AS/NZS 4360:2004. You can access the standard via the [http://infostore.saiglobal.com/store/Details.aspx?DocN=AS0733759041AT SAI Risk Management Portal]. While still very much in its infancy as a governance tool, and immature as a management science, risk management has rapidly been adopted across the world and is now codified into an international standard: ISO 31000:2009 (October 2009), supported by ISO Guide 73:2009 - both largely based on the AS/NZS standard.
==The Classical Approach==
In classical risk management - with respect to a given focus: a business, a business objective, an asset, etc. - we are told to identify the risks first, so that they can be properly managed. In its classical form, risk management asks, and attempts to answer, three questions:
*What can go wrong?
*What can I do to prevent it?
*What do I do if it happens?
You are advised to develop a risk register to document each potential problem, its level of seriousness, what is required to fix it, who will fix the problem, and monitor progress.
There are essentially four things you can do with risk. We will call them the four T's:
* Tolerate it (by accepting or ignoring a risk - this is where the profit lies)
* Treat it (by actively re-mediating or controlling it)
* Transfer it (by insuring it, perhaps better described as "sharing it")
* Terminate it (by exiting the business that incurs it)
It is critical that leaders understand that risk management is NOT about avoiding risk, but about managing it.
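The four T's can be expressed as a toy decision rule. The thresholds, flags and function name below are illustrative assumptions for this sketch, not part of any standard:

```python
from enum import Enum

class Response(Enum):
    TOLERATE = "accept or ignore the risk"
    TREAT = "actively remediate or control it"
    TRANSFER = "insure it / share it"
    TERMINATE = "exit the business that incurs it"

def choose_response(rating: float, appetite: float,
                    insurable: bool, core_business: bool) -> Response:
    """Toy decision rule over the four T's; the ordering is illustrative."""
    if rating <= appetite:
        return Response.TOLERATE      # within appetite - this is where the profit lies
    if insurable:
        return Response.TRANSFER      # share the excess risk
    if core_business:
        return Response.TREAT         # cannot exit, so control or remediate it
    return Response.TERMINATE         # neither tolerable, insurable, nor core

print(choose_response(0.2, 0.5, False, True))  # Response.TOLERATE
```

The point of the rule is the last line of the section: a risk within appetite is tolerated, not eliminated.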
==The Evolution of a Risk Management Thought==
The concepts of risk and reward management are not new to mankind. The walls of cities and castles were early forms of risk management, and Hadrian's Wall, Agricola's Wall, the Antonine Wall, and the Great Wall of China are dramatic statements of risk containment on a social scale.
History is littered with authors and thinkers exploring the relationship between risk awareness, risk exploitation, active management and outcomes. Military and political strategists have employed the concepts underpinning modern risk management for centuries. The writings of military and political strategists such as Sun Tzu ("The Art of War"), Carl von Clausewitz ("On War"), Niccolò Machiavelli ("The Prince", "The Art of War"), and Miyamoto Musashi ("The Five Rings") are all examples of the practical application of risk awareness in strategy formation. To varying extents these works all encourage an awareness of one's own and one's opponent's weaknesses, and the mitigation and exploitation of the same.
Perhaps what is new is the codification of the process of identifying, measuring, assessing, and responding to risk laid down in the more recent writings. It would be naive, however, to consider that risk management, per se, is new. The difference between a successful manager and an unsuccessful one has always been the ability to see the potential reward in an opportunity and strike the correct balance between ignoring, avoiding, transferring and mitigating risks. Too much risk avoidance means opportunities are not exploited; too much control or insurance means there is no profit left from the risky activity; and too much ignorance means that eventually the strategy's angel will become history's fool.
In the absence of a formalised approach to risk management, the successful business leader is known as lucky. In truth, the success is probably more due to that leader's accident of DNA and life experience, which leads to instinctively correct risk judgements. It is possibly this instinct, more than anything else, that justifies executive salary differentials.
There is an important observation to be made from the historic context of risk management theory. Currently, risk management professionals tend to view the discipline as an extension of strategy achievement, yet historically risk management has been as much about strategy identification and formation as about implementation.
Good risk management looks both inward and outward. By this I mean that risk management can be applied both to minimising your own chance of failure and to maximising your competitor's chance of failure. The essence of the military strategist's thinking is to identify the weaknesses of the opponent and exploit them to your own advantage. Application of the principles of risk management can enable you not only to identify the opponent's weaknesses, but to identify the probable strategies they will employ to manage the risks arising from those weaknesses, and hence better inform your planners about potential strategies to employ.
Over the last 50 years a number of frameworks addressing risk management with respect to governance have emerged out of the experience of the different professional groups involved in strategic management, asset protection, public accountability, finance and risk. These groups include:
* Internal Audit - focused on control system reliability
* External Audit - focused on true and fair representation of financial position on a going concern basis
* Actuarial Science - focused on the pricing of risk for insurance
* Investment banking - focused on the pricing of risk for portfolio management, hedging, capital fees and adequacy
* Risk Management - focused on management of risk to strategic and tactical outcomes on an enterprise and societal basis
Setting aside the military and political authors, among the business community, some of the earliest work in risk management arose from the financial advisory community looking for models to minimise the downside risks to financial products investment.
==A Mathematical Basis To Risk Measurement==
As early as 1952, Harry M. Markowitz published his paper "Portfolio Selection" in the Journal of Finance, exploring the advantages of risk diversification through balanced portfolio selection. The essence of portfolio theory is that risk essentially expresses the potential for a negative return (financial loss), and that an investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly positively correlated (correlation coefficient r < 1).
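The diversification effect can be made concrete with the standard two-asset portfolio variance formula; the sketch below (not from Markowitz's paper) shows how portfolio risk falls as the correlation coefficient falls:

```python
import math

def portfolio_sd(w1: float, sd1: float, sd2: float, corr: float) -> float:
    """Standard deviation of a two-asset portfolio with weights w1 and 1 - w1:
    var = (w1*sd1)^2 + (w2*sd2)^2 + 2*w1*w2*corr*sd1*sd2."""
    w2 = 1 - w1
    var = (w1 * sd1) ** 2 + (w2 * sd2) ** 2 + 2 * w1 * w2 * corr * sd1 * sd2
    return math.sqrt(var)

# Two assets, each with 20% volatility, held 50/50:
print(portfolio_sd(0.5, 0.20, 0.20, 1.0))   # perfectly correlated: ~0.20, no benefit
print(portfolio_sd(0.5, 0.20, 0.20, 0.0))   # uncorrelated: ~0.14
print(portfolio_sd(0.5, 0.20, 0.20, -1.0))  # perfectly anti-correlated: 0, risk cancels
```

Only at perfect positive correlation (r = 1) does diversification buy nothing; any r < 1 reduces portfolio risk below the weighted average of the individual risks.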
To a greater or lesser extent, the professional bodies, standards organisations and government agencies have responded with guidelines and standards for the measurement, application, response to and management of risk as it applies to their specific problem domains. In 1978 the Institute of Internal Auditors - the international professional body of the internal audit profession - issued its Standards for the Professional Practice of Internal Audit (SPPIA). In one of the earliest standards-based references to risk-based management, the standards included Standard 320: "Compliance with Policies, Plans, Procedures, Laws and Regulations". The statement determined that "Internal auditors should review the systems established to ensure compliance with policies, plans, procedures, laws and regulations which could have a significant impact on operations and reports, and should determine whether the organisation is in compliance".
==Alternative Standards and Views of Risk Management==
Among the definitive pronouncements on risk management are:
* The King Report on Corporate Governance for South Africa (SA King II - 2002)
* A Risk Management Standard (RMS 2004) by the Federation of European Risk Management Association (UK FERMA)
* Australian/New Zealand Standard 4360—Risk Management (A/NZ 1995, 1999, 2004)
* COSO’s Enterprise Risk Management— Integrated Framework
* The Institute of Management Accountants' (IMA) "A Global Perspective on Assessing Internal Control over Financial Reporting" (ICoFR)
* Basel II
* Standard & Poor’s and ERM
* ISO 31000:2009
Building on the work of many years, the middle of the first decade of the millennium saw a succession of enterprise risk management (ERM) related pronouncements. AS/NZS 4360:2004 defined the risk management process as the "'''systematic application of management policies, procedures and practices to the tasks of communicating, establishing the context, identifying, analysing, evaluating, treating, monitoring and reviewing risk'''". For the financial sector, the earlier BASEL I standard was superseded by BASEL II, which closely mirrored the view of AS/NZS 4360.
Expanding on an earlier Internal Control Framework from the early 1990s, the Committee of Sponsoring Organisations of the Treadway Commission (COSO) released the 'Enterprise Risk Management (ERM) – Integrated Framework', which attempted to map the COSO framework that formed the motivational basis for the US Sarbanes-Oxley compliance legislation into a broader enterprise risk management framework. The COSO/ERM framework defined enterprise risk management as:
* A process, ongoing and flowing through an entity,
* Effected by people at every level of an organisation,
* Applied in strategy setting,
* Applied across the enterprise, at every level and unit, and includes taking an entity-level portfolio view of risk,
* Designed to identify potential events that, if they occur, will affect the entity and to manage risk within its risk appetite,
* Able to provide reasonable assurance to an entity’s management and board of directors,
* Geared to achievement of objectives in one or more separate but overlapping categories.
The standards enjoy a shared purpose - to improve the predictability of business outcomes - but differ significantly in how that certainty is to be improved. While 4360 describes the process for the management of risk, BASEL II mandates that a firm's operational risk management (ORM) system must be "conceptually sound and implemented with integrity", but stops short of defining the form or process of the ORM. BASEL II does specify that the ORM should be maintained by an independent operational risk management function, and that it is to consist of at least "strategies, methodologies and risk reporting systems". It identifies that the purpose of the ORM is to "identify, measure, monitor and control/mitigate operational risk".
Under BASEL II, the ORM systems should be:
* “credible and appropriate”,
* “well reasoned, well documented”,
* “transparent and accessible”, and
* capable of being validated by audit.
Among the failings of BASEL II is the lack of definition of these key terms, which, in a sense, is where AS/NZS 4360 and the COSO ERM Framework come in. The latter standards provide a framework under which a credible, reasoned, transparent, documented and verifiable risk management model can be established.
AS/NZS 4360 and COSO do not eliminate failure in the ORM/ERM, however, as in their implementation there is still considerable subjectivity in risk identification and assessment, and within the process documented by the standard there is no mechanism for proving or measuring "completeness". They do, however, populate the next level of the BASEL II obligation.
This problem of "completeness" in ERM frameworks should not be underestimated. It is present in all current risk management standards and is possibly a key reason for failure in ERM frameworks. We shall explore approaches to solving this problem in later papers.
Owing to their differing origins the three standards employ slightly different terminology for shared ideas:
* AS/NZS 4360 refers to ‘Risk Treatment’, COSO to ‘Risk Response’ and Basel II uses ‘Risk Mitigation’.
While the seven ‘elements’ of AS/NZS 4360:2004 framework do not align precisely with the eight ‘components’ of the COSO process, the ‘end to end’ risk management process is the same.
<table cellpadding="10" >
<tr>
<th>
AS/NZS 4360: 2004
Framework
</th>
<th>
COSO ERM – Integrated
Framework
</th>
<th>
BASEL II ORM
Framework
</th>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Internal environment
</td>
<td>
</td>
</tr>
<tr>
<td>
Establish the context
</td>
<td>
Objective setting
</td>
<td>
</td>
</tr>
<tr>
<td>
Identify risks
</td>
<td>
Event identification
</td>
<td>
Identify
</td>
</tr>
<tr>
<td>
Analyse risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Evaluate risks
</td>
<td>
Risk assessment
</td>
<td>
Assess
</td>
</tr>
<tr>
<td>
Treat risks
</td>
<td>
Risk response and control activities
</td>
<td>
Control/mitigate
</td>
</tr>
<tr>
<td>
Monitor and review
</td>
<td>
Monitoring
</td>
<td>
Monitor
</td>
</tr>
<tr>
<td>
Consult and communicate
</td>
<td>
Information and communication
</td>
<td>
</td>
</tr>
</table>
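The comparison table above can be transcribed into a simple lookup structure; the Python sketch below (names and shape are illustrative) maps each AS/NZS 4360 element to its COSO components and BASEL II step, with None where a framework has no counterpart:

```python
# Mapping of AS/NZS 4360:2004 elements to (COSO ERM components, BASEL II ORM step),
# transcribed from the comparison table above; None marks a missing counterpart.
FRAMEWORK_MAP = {
    "Establish the context":   (["Internal environment", "Objective setting"], None),
    "Identify risks":          (["Event identification"], "Identify"),
    "Analyse risks":           (["Risk assessment"], "Assess"),
    "Evaluate risks":          (["Risk assessment"], "Assess"),
    "Treat risks":             (["Risk response and control activities"], "Control/mitigate"),
    "Monitor and review":      (["Monitoring"], "Monitor"),
    "Consult and communicate": (["Information and communication"], None),
}

coso, basel = FRAMEWORK_MAP["Treat risks"]
print(coso[0], "/", basel)  # Risk response and control activities / Control/mitigate
```

Note how the many-to-one relationships (two 4360 elements mapping onto COSO's single 'Risk assessment', and two COSO components behind 'Establish the context') fall out of the structure directly.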
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
0c92f2577353da0d73bf684aee6689d18b9f93ee
Risk Management
0
298
482
390
2018-10-29T12:17:44Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Risk Management=
==The Risk Management View - How the Machine Looks From the Inside==
Risk Management is a philosophy of management science that sees an organisation's state in terms of the balance of its risk and opportunity portfolio. An organisation in a steady state will experience a rise in the value of opportunities commensurate with a rise in the volume or value of risk; a destructively unstable scenario would be rising risks with falling opportunity; and a rising value of opportunities with steady or falling risks might indicate either a desirable growth pattern or under-achievement of opportunities.
In its most common implementation today, risk management focuses on the risk side of the equation. With this constraint on its domain, risk management sees the universe as a variably dangerous place measured in terms of the likelihood of an event that might be a cause of some consequence that will have a measurable impact. A group of such events with shared impacts is a risk. A risk might have a severity (based on the likelihood of its various triggering events and the worst-case scenario of the impacts of those causal triggers), and it might have a value based on the impacts. With or without the value, one view might claim that risk management is about cost minimisation (in terms of anything measurable, like money, brand value, social standing, votes won, etc). Minimising cost does not necessarily mean minimising risk itself, as other factors may influence that decision, such as the risk appetite (willingness to tolerate a level or type of risk) and confidence in the dependent opportunities (not measured in a risk-only model).
The causes and consequences of a risk might be seen, through their likelihood and impact respectively, to imply a particular inherent level of risk. Once we know the risks, we naturally do things to prevent the triggers from occurring, to know when they have occurred, and to respond with corrective action in the event that a risk manifests as an occurrence. We call these things controls or strategies, and we would be right to think that they should moderate our value for a given risk in some way.
The risk manager might accommodate this control impact in multiple ways depending on the risk model in use:
#By rating the controls themselves and reducing the total risk rating by applying this value in some way to the inherent risk, giving a rating of the risk remaining after controls are applied - commonly known as the residual risk. The rating of controls and strategies is inexact in itself, and the additional data for control ratings may be no more reliable than the instinctive feel for the control impact required in approach 2. Considerably more rigour may be needed in the understanding of controls than is common in management.
#By rating the likelihood and impact of a risk again AFTER the raters have considered the controls, thus having two ratings each for likelihood and impact: inherent and residual. Under this approach the control impact is assumed in the revised likelihood and impact ratings. Controls should not be rated as a risk group, but can be rated separately to inform the residual likelihood and impact ratings. This method provides no way to reliably analyse the cost-effectiveness of individual control strategies from the resulting ratings.
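The two approaches can be contrasted in a minimal Python sketch. The linear discounting rule and the 0..1 scales are illustrative assumptions of this sketch, not prescribed by any risk model:

```python
def residual_direct(inherent: float, control_effectiveness: float) -> float:
    """Approach 1: rate the controls themselves and discount the inherent
    risk by their effectiveness (both on assumed 0..1 scales)."""
    return inherent * (1 - control_effectiveness)

def residual_rerated(likelihood_after: float, impact_after: float) -> float:
    """Approach 2: re-rate likelihood and impact AFTER considering controls;
    the control effect is implicit in the revised ratings."""
    return likelihood_after * impact_after

inherent = 0.9 * 0.8                   # likelihood x impact before controls
print(residual_direct(inherent, 0.5))  # explicit, auditable per-control discount
print(residual_rerated(0.6, 0.6))      # same kind of number, but no control trace
```

Both routes yield a residual rating, but only the first leaves a per-control effectiveness figure behind, which is why approach 2 cannot support cost-effectiveness analysis of individual strategies.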
Together these components describe the essence of the model through which risk managers view the organisation, and thence the universe through which the organisation moves. With a risk-only view, the risk manager sees a health index in terms of risk to the organisation.
==The Risk Management Function - Keeping the Machine Healthy==
The risk manager uses the risk model to view the health state of an organisation. The risk manager improves and protects that state by managing essentially the input variables of the model. This includes:
#facilitating the process of identifying risks and their properties and the process of rating the risks.
#ensuring that every risk has a clear management responsibility attached to it.
#ensuring strategies have been devised to prevent (to some degree) causes where possible, to detect causes when they trigger and to mitigate consequential impacts.
#ensuring executives and governors are properly informed of the risk profile and changes therein over time.
#ensuring the accuracy of the model through actions such as regular review and re-rating of risks, and monitoring strategy progress.
==Articles in this topic:==
Topics covered by articles include:
* [[Risk Management - Introduction]]
* [[BPC RiskManager Software Suite]]
* [[Managing Risk in Mergers & Acquisitions]]
The full category is available from:
[[:Category:Risk Management|Risk Management Topics]]
<noinclude>
[[Category:Management Science]]
[[Category:Risk Management]]
{{BackLinks}}
</noinclude>
5b321f41e0e0f3fa2c6fbf0d749aee11df42db35
BPC RiskManager V6.2 Network Architecture
0
4
484
278
2018-10-29T12:17:44Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
[[Image:BPCRM NetDiag.png]]
BPC RiskManager is an N-Tier application. The primary layers are:
* Database Server layer
* Application Server layer
* Client layer
The core application set does not require a web server but certain optional capabilities do.
You will require a web server if you will be:
* Using the browser plugin client component.
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
While the browser plugin client component can be served by any brand of web server, you will require IIS 5+ if you plan to be:
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
Both of these capabilities use ISAPI libraries running on an IIS server. If you will be using the HTTPS communication protocol, you will also need an SSL certificate installed on the web server.
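The deployment rules above can be summarised in a small decision helper, sketched in Python; the function name and boolean flags are hypothetical, for illustration only:

```python
def web_tier_requirements(browser_plugin: bool, http_protocol: bool,
                          survey_manager: bool, use_https: bool) -> list[str]:
    """Derive web-tier needs from the optional capabilities chosen,
    following the requirements stated in the text above."""
    needs = []
    if browser_plugin or http_protocol or survey_manager:
        needs.append("web server")
    if http_protocol or survey_manager:
        needs.append("IIS 5+ (ISAPI libraries)")  # plugin alone works on any brand
    if use_https:
        needs.append("SSL certificate on the web server")
    return needs

print(web_tier_requirements(browser_plugin=True, http_protocol=False,
                            survey_manager=False, use_https=False))
# ['web server'] - any brand of web server will do for the plugin alone
```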
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
96accad095e3f378d468445d6bc5231ced78bf76
Managing Risk in Mergers & Acquisitions
0
297
494
478
2018-10-29T12:19:09Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Topics==
* [[Managing Risk in Mergers & Acquisitions - Causes of Success & Failure]]
* [[Managing Risk in Mergers & Acquisitions - A Success Strategy]]
* [[Managing Risk in Mergers & Acquisitions - A Review of the Literature]]
<noinclude>
[[Category:Management Science]]
[[Category:Mergers and Acquisitions]]
[[Category:Risk Management]]
[[Category:Risk Management - Applied Cases]]
{{BackLinks}}
</noinclude>
975422383bcff83e8288f0207aa4f21d1f209d44
BPC RiskManager V6 Enterprise (Enrima Edition)
0
2
496
338
2018-10-29T12:20:22Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=The BPC RiskManager Software Suite - Features=
==What is the BPC RiskManager Software Suite?==
The BPC RiskManager Software suite is an Enterprise Grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) in 2008. The latest release is July 2010.
BPC RiskManager is available in 2 product streams (both of which can be configured as single user desktop or massively multiuser networked solutions). The two product streams are:
{|width="100%"
|-width="100%"
|
* BPC RiskManager V5 (Express)
|
|-
|
* BPC RiskManager V6 (Enrima Edition)
|
|}
While there are many similarities between the systems, they are not identical and not data-compatible. BPC RiskManager V5 (Express) is maintained on an annual update cycle, while BPC RiskManager (Enrima Edition) is maintained on a quarterly (every 3 months) update cycle.
In terms of scalability, both systems will handle thousands of simultaneous users, and both model risk management at the enterprise level and project level. Both systems include risk, controls/strategies, consequences, survey, compliance, incident management support and both systems feature customisable screens and field names. Both systems allow multiple simultaneously active databases.
The essential differences are in the depth and complexity of issues supported and in the expandability of the system. Here they differ significantly. Express is designed to be extremely simple and consequently excludes both depth and breadth beyond the functions of a risk and compliance register. It is therefore able to present almost all of its risk or compliance record data on a single screen.
In the Enrima V6 series this single-screen display is not possible, as both multiple views and considerable ancillary management objects are brought into the system (such as documents, assets, assertions, insurance, claims, etc).
==BPC RiskManager V6.2.5 (Enrima Edition)==
[[image:BPC_RiskManager_V6261_Main_Screen.jpg|539px]]
===BPC RiskManager - Who should use it?===
====User====
BPC RiskManager is designed to manage the governance function of an organisation. It therefore fits in audit, risk management, compliance management, insurance risk management, environmental risk management, project risk management, human resources, OHS and strategic planning. It delivers functions covering both the strategic and the operational functions of these disciplines. For example, the claims module actually manages insurance claims (not merely registering them), the document management system is capable of actually managing documents (not merely cataloguing them), the compliance and strategy systems actually manage the remediation of the issue, etc.
It functions best as an integrated solution, with multiple governance teams using the one system. With each release we expand the governance functions in the system.
====Scale====
BPC RiskManager is designed to scale. There are four types of clients using it:
#Single users or small work groups running off a single-user install switched to server mode.
#Medium-scale enterprises with risk and executive seats on an IT-group-managed server / in-cloud and database.
#Large-scale enterprises with many seats actively managing general risks, compliance issues, project risks, etc.
#Hosting consolidators providing cloud services to many clients in different organisations, with many databases.
Every version of BPC RiskManager (from the single-user install up) is capable of operating in all these modes. For each type of operation there are specific features built in to aid maintenance and management (including multi-database bulk operations for hosting providers).
===BPC RiskManager Features===
BPC RiskManager V6.2.5 (Enrima Edition) (often referred to as RiskManager V625 or Enrima), is a powerful risk and compliance management solution with an almost unlimited range of end-user configurable solutions. It delivers:
*General
** Totally end-user configurable (change almost any label or caption or search relationship, re-task fields, define your own risk and compliance model, build your own reports, define your own work flows, customisable messages, define your own risk structure, etc)
** Runs out-of-the-box (ready to use immediately after install in single-user or small work group mode).
** Provides an optional fast configure mode (shown on first run of any client and available at any time thereafter).
** An extremely versatile ratings engine supporting multiple methods of rating compliance and risk issues. Each item can simultaneously store different ratings for inherent, residual, auditor, reviewer and unlimited current self-ratings for each of likelihood, impact and (residual) risk. It also holds additional ratings for compliance breach, compliance rating, and unlimited assertion sets.
** Ratings can be rolled up through trees of risks and compliance issues
*Functional
** Risk Management
** Compliance Management
** Incident Management
** Planning
** Document Management
*Registers
** General Risk register(s) with unlimited risk types and able to distinguish project and general risks
** Project Risk register(s)
** Compliance register(s) with unlimited assertions/questions and assertions/question groups AND pure HTML based compliance surveys / checklists
** Incident & Hazard register
** Insurance register
** Claims register
** Legal register
** Document register
** Causes register
** Consequence & impact register
** Standard strategies register (Type of Control)
** Strategies & control register
** Actions register
** Work flow register
** Asset register
** Business plan register
** Survey register
** Access control
*Evaluation engines
** Risk & compliance rating
** Question & assertion rating
** Assessments engine
** Survey rules engine
** Charting engine
** Email management engine
** Exception tracking engine
*Work flow control systems
** Work flow engine
** Instantaneous internal message engine
** Instant and batched email management engine
** PAX & TMS ScripterStudio scripting engines
** Survey management system
** Exception tracking engine
*Data reporting and access
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. These structures are understood by the search and reporting engines.
** Unlimited risk structuring - risk folders to any depth, risk-linking, risk categorisation, unlimited master-child structures, etc
** Tree, search and flat risk navigation simultaneously supported
** Risks/compliance issues can inhabit any number of tree folders simultaneously (allowing multiple grouping and reporting frameworks with risk roll up)
** Link objectives, assertions, questions, processes, legislative/regulatory obligations, causes, risks, consequences, compliance obligations, controls / strategies, actions, risk history, incidents / hazards, people, supporting documentation, information web-sites, and more.
** Full live search-able audit trail of all changes
** Storable searches used throughout the application to access and feed data to tables, views, folders and reports
** Multiple reporting engines:
*** Built-in pre-written reports
*** Very powerful, programmable end user report writer and manual (outputs in various formats including HTML and PDF)
*** Word Document (mail-merge) style report engine
*** SurveyManager Instant Reporting engine (maps survey response reports back into the survey layout)
*** BPC SurveyManager operating in web forms mode is a powerful reporting engine in its own right
*** Query Exporter (Administrator only - can cross feed to the import engine creating an excellent method for doing bulk updates based on extracted data)
*** Search based end user export
*** Built-In Charting
*** End-user charting
** End user sample reports
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL and PDF
** Dashboard with drill through to risk collections, risks, assessments and incidents
** Dashboard risk collections configurable via folder tree view system (so any risk/compliance topic can be put to the dashboard with unlimited layers of drill through).
*Messaging
** Built-in automated email messaging based on events and dates for a wide range of scenarios, and occurrences, with email contents able to be fed by custom reports from the report writer.
** Multiple levels of responsibility assignment on all trackable objects
** Risk message tracking and work flow message tracking
*Secretarial, Administration and Desktop Integration
** MS Office compatible
** Copy and paste from / to word and XL
** Powerful import/export administrator only tool
** Search / chart driven general user export in various formats including XL
** Spell checking using your MS Word dictionary
** Simple point and select search system but with an option for savable advanced query writer custom searches if required.
** Extensive configuration and customisation screens to support tuning the system to do just what you want.
** Dynamic screen captions allowing you to adopt your own terminology, which also appear to the report writer as the names of the fields
** Smooth support for large and small fonts, and for 96dpi, 120dpi and other screen resolutions
** Works on all versions of Windows from Windows 2000 up, including Vista and Windows 7.
** Fast fully automated installation and upgrade system.
** Available in single/small work group and enterprise configurations
*Compliance System
** Compliance obligations can be viewed as general risks and compliance modes
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Compliance obligations will support multiple compliance models simultaneously (SOX / Sched7 / General / etc).
** Compliance obligations are stored internally as risks so they roll up smoothly into the general and project risk register
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers. In addition to implied relational structures, there are multiple tree structures used to link objects across the application. Two of these of particular relevance to end users are the folder tree and master-child hierarchical network. Both of these tools provide ways to group risk and compliance issues in roll-up and dependency relationships, as well as pools of mutually associated items. An issue can belong to many such relationships at once.
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of compliance ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings and question/assertion ratings rules for automated rating translation.
** Compliance responses automatically convert to risk equivalent ratings so that both compliance issues and risks can be seen on the one heat map, and in comparative tables.
** Unlimited compliance milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
*Risk System
** General and project risks can have all compliance mode features including assertions/questions attached (Compliance/Risk views exist simultaneously for all risks).
** Master-child and folder structures can have unlimited mixed general, project and compliance risks members, across multiple registers.
** Risk Tolerances (rating and numeric) for differential risk reporting and automated condition reporting.
** Likelihood & consequence trigger points
** Separate audit comment and tracking data for each risk.
** Multiple modelling systems - inherent, current and residual risk ratings (with optional likelihood, impact, control and residual categories for each rating)
** Velocity supported at the impact/consequence level
** Selectable screen editing assignment of ratings allows you to choose where and what ratings can be changed for each model
** Risk & Control Archiving and unarchiving
** Instant live update of risk ratings and master-child roll-ups
** Unlimited assessments and simultaneous self, internal audit and reviewer assessments
** Simultaneous mixed formula and grid assignable ratings
** Confidential risks
** Risk advisory notes for each risk
** Unlimited risk milestones - snapshots of the risk record including all notes and ratings at an instant in time. Some milestone types allow restoration of the milestone to the current instance of the risk / compliance record. Uses include "balance day" records, what-if analysis, and audit evidence snapshots.
*Incident Management
** Fully configurable - drop lists, business rules, screens, etc.
** Incident type determines rules and attributes
** Multiple handling steps fully tracked - recorder, assignee, reviewer, responder, escalated to, investigator
** Automatic triggers for review, escalation, investigation, etc based on user configurable rules (triggered by participant information, incident attributes, etc.)
** Configurable unlimited incident attributes with triggers (for reviews, escalation, enhancements, workflow, etc.) to classify incidents
** Unlimited configurable incident types (which determine the set of incident attributes applied to the incident)
** Incidents have a built in workflow – record, assign, review, escalate, resolve, investigate, close
** Unlimited user defined additional fields for storing extra data
** Unlimited text fields details/notes, etc for unstructured data
** Change tracking
** Separate org structure definition that lives side by side with the risk management org structure (allowing different structures for risk/compliance and incidents)
** Structure and rule driven review, escalation and investigation
** Unlimited incidents per risk/compliance event
** Incidents attached to more than one risk/compliance topic
** Incidents can be created and attached to a risk/compliance topic at a later time
** Notifiers
** Incident Causes – immediate and underlying (mirrors risk causes)
** Incident Actions – Current (done) and future, both proposed and approved + action assignment, progress and tracking
** Proposed actions can be converted to risk / compliance topic controls
** Large array of location types (even GPS location specification)
** Unlimited participants per incident (with user defined roles)
** Participant records of interview
** Participant injury tracking
** Review and investigation reminders
*Incident Investigations
** Investigations including progress tracking/status / findings / recommendations, etc
** Configurable investigation types with differing investigation team structures
** Investigation external document links
** Configurable and managed signoff models including separate lists for investigation team members and other parties
** Investigation signoffs with qualified and dissenting opinion options
** Investigations build distinct reports
*Internal Audit System
** Separate audit risk ratings and notes per risk/compliance issue
** Separate audit external document links
** Internal-audit remediation register with assignable tasks and remediation progress, status and outcome recording.
** Automated access escalation for users flagged as auditors
** Auditors use the same screens as normal users but have extra fields and facilities
** Automated CSA survey generation
** Full change logs kept of key accountable tables (can be expanded to cover additional tables, including tables added by clients)
*Insurance and claims
** Insurance register with renewal reminders
** Insurance policies link to risk/compliance registers via the strategy and controls register, actions register and document registers.
** Claims management
** Claims link to risks/compliance registers via incident and insurance registers
** Incident/Hazards Register (plus hooks for interfacing into a separate incident management system if desired)
*Causes Register
** Unlimited risk specific causes per risk
** Type-of-Cause allows standardisation of causes while allowing complete flexibility in description and instance of a cause (similar to Type-of-Control)
** Incident and Risk/Compliance causes.
** Causes can have numeric risk event triggers (allowing concepts such as the "likelihood of exceeding x events in a year")
** Direct sub linking between causes and strategies and consequences enables cause and effect strategy design and verifiable coverage of causes
** Causes can be sub linked off Assertions/Questions (the default for compliance screens), allowing low rating compliance questions or analytic steps for remediating breaches to be structured around the causes of each question's failure. This enables the compliance model to be built around both compliance risk and compliance topics philosophies.
** As there can be an indefinite number of question sets with an indefinite number of questions per risk / compliance issue, cause structuring can get very deep.
** Causes integrate with surveys, the scripting engine and external modelling systems to enable programmatic setting of likelihood ratings using additional fields as part of the interface (like the "risk trigger value").
*Strategies & Controls register
** Strategies and controls with progress notes and tracking
** Register and track unlimited strategies and controls
** Customisable ratings scheme for each control or strategy including any of likelihood, impact, control, (residual) risk over inherent, residual, current self, audit, reviewer, etc ratings groups, as well as five ratings defaulting to authority, reliability, efficiency, economy, and timeliness control assertions.
** Officially mandated Type-of-Control list provides a template for approved control strategies and allows strategies to be both individually described, and structurally grouped and standardised.
** Strategies & Controls directly cross link to individual causes and impacts/consequences allowing you to tie specific strategies to one or more causes and consequences of a risk or compliance item.
** Strategies & Controls can have actions.
** (Coming soon: unlimited assertion/ratable question sets similar to that used for compliance and risk screens).
** Includes Responsible officer, delegate, email reminders, assignment tracking, cost and benefit measures, link to insurance, cyclic and one off controls/strategies, flag where insurance expired, due dates exceeded, user defined categories and subcategories, etc.
** Automatic access rights escalation where read only viewer is accessing a strategy for which they have responsibility
** Fully customisable messages with or without email running.
** Survey question library links surveys to strategies
** Can feed CSA automated surveys
*Financial Elements Register
** Unlimited charts of account
** Account rollup
** Store performance metrics (budget, actual, transaction volumes, etc)
** Store audit assessments for each element
** Link to audit/risk/compliance assertions
** Ownership
** Unlimited risks/compliance obligations per account
** Test plans and test plan scheduling
** Heat maps for each element with drill through to risks and incidents
*Document Register
** Document register for unlimited documents
** Supports multiple document management strategies simultaneously: unmanaged, delegated management and full management.
** Unlimited risk/compliance issues may be linked to each managed or unmanaged document.
** Unlimited unmanaged documents may be linked to a risk-compliance issue
** Document management can be set at the document or section level on a per-document basis
** Managed documents track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Managed document sections track (optionally) full text, responsibilities, review cycles, issuing authority, compliance status, risks/compliance issues assigned, question-assertion status.
** Full snapshot version control operates on managed documents - a full time-stamped copy of the relevant records is made for each change.
** The document register presents document and section specific lists and heat maps of all risks/compliance issues attached to the document or section and supports export on that basis.
** Main listing screens support dynamically constructed QBE filters and free text search to enable isolation of documents using specific terms or any of the tracking fields.
** Store documents internally or interface to your document management system; web site links are available for most objects.
*Work flow engine
** The work flow system serves two purposes: (a) documenting processes with flow charts, and (b) automating RM related activities
** Work flow modelling and diagramming tool (with a built-in script-able work-flow diagramming subsystem)
** Work flows can be executed and can invoke RM screens and external applications. Executed work flows can be assigned to individuals and have multiple individuals participating in different steps.
** Work flows steps can have attachments.
*Survey engine
** Full implementation of BPC SurveyManager with customised management client built-in
** Built in survey engine
** A full scale (not limited) survey / web forms engine that is licensed for separate use and can be used for far more than just your risk management requirements. If you need to collect data on something, BPC SurveyManager will handle it; it can even be used to build entire web sites on its own.
*Access and security
** Single user mode or secured access modes (end user selectable)
** Multiple access security support (LDAP,AD, NTGroups, Internal, Trusted, etc)
** Configurable access rights for access to risk type, business group, business unit, risks over multiple levels of access from none to administration
** Automatic escalation of access to individual records where the user has responsibility assigned, but otherwise would not have access
*People & resources
** People and positions (resources) may be imported in bulk, created individually or automatically created on connection.
** Resources integrate with the access control system
** SurveyManager keeps a separate list of resources mirrored with the RiskManager resource tables
** RiskManager allows for three domains of resources - survey responders (access to specific surveys), risk manager known persons (can be managed by email, assigned responsibilities but do not have access to the system), and risk manager users (access allowed).
** User access control down to individual business unit risks & issues as read / update / create (See access control).
** Resources (people) can be retired (removed from lookup windows, etc) without deletion from system (to preserve risk/compliance history integrity).
*Scalability, Networking and communications
** N-Tier architecture, can be installed on one computer with the database (as in single user mode) or distributed across multiple servers (as in Enterprise/Web mode).
** Networked comms supports simultaneous or individual use of Raw TCP/IP, HTTP and HTTPS (SSL) network communications (all with compression)
** Supports unlimited simultaneous databases ''(subject to license purchased)''
** Supports unlimited simultaneous application servers ''(subject to license purchased)''
** Supports unlimited simultaneous survey engines ''(subject to license purchased)''
** Supports unlimited installed client desktops ''(subject to license purchased)''
*Other
** Cost and benefit tracking
** Full internal scripting language to support end user expansion and external interfacing
** Interfaces for external complex risk assessment (eg Monte-Carlo modelling risk systems such as Benfield / AON Remetrics)
** Single point of update publishing for clients
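The feature list above repeatedly mentions ratings being "rolled up" through trees of risks and compliance issues (folder trees and master-child hierarchies). The idea can be sketched generically: a parent item reports at least the worst rating found anywhere among its descendants. This is an illustrative model only - the class, method and 1-5 rating scale below are assumptions made for the sketch, not BPC RiskManager's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskNode:
    """A risk or compliance issue in a hypothetical master-child hierarchy."""
    name: str
    rating: int  # own assessed rating, e.g. 1 (low) .. 5 (extreme)
    children: List["RiskNode"] = field(default_factory=list)

    def rolled_up_rating(self) -> int:
        """Roll ratings up the tree: a parent's effective rating is the
        maximum of its own rating and every descendant's rating."""
        return max([self.rating] + [c.rolled_up_rating() for c in self.children])

# Example: an "IT risk" parent inherits the worst child rating.
leaf_a = RiskNode("Data breach", 4)
leaf_b = RiskNode("Vendor outage", 2)
parent = RiskNode("IT risk", 1, [leaf_a, leaf_b])
print(parent.rolled_up_rating())  # → 4
```

Because an issue can sit in several folder trees at once (as the list notes), a real implementation would compute this roll-up per tree; taking the maximum is only one common roll-up policy.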
==BPC RiskManager Express V5.x==
[[image:BPCRiskManagerExpressV5.jpg|539px]]
BPC RiskManager Express has a dramatically simplified and restricted user interface. It does not maintain structured causes lists (though it does have unlimited "contributing factors" descriptions), allows one level of responsibility for assignment of issues and actions, and does not have an end-user report writer (although it does support both mail-merge and Word / XL template driven reporting). It can be configured as either a compliance or a risk solution running on separate databases through the one application server. Like its more powerful sibling, it will support an indefinite number of databases.
BPC RiskManager Express is targeted at organisations where simplicity of operation and user input overrides the need for granularity of input and analysis, and where the additional governance sub-systems available in BPC RiskManager are not needed (eg insurance, claims, assertion / question rating models, work-flow, assessments, security, assets, etc.)
This riskwiki focuses on BPC RiskManager (Enrima Edition).
=Additional Resources=
[http://bpc.bishopphillips.com/forum/ BPC Support Forum]<br>
[http://bpc.bishopphillips.com/riskthink/ BPC RiskThink Blog]<br>
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php Request a free fully functional trial copy of BPC RiskManager (Enrima)]
<noinclude>
[[Category:Featured Article]]
[[Category:Bishop Phillips Software]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
{{BackLinks}}
</noinclude>
81bdffb458d1875bbf1156a08c95aa2571f1e615
BPC RiskManager Software Suite
0
3
498
470
2018-10-29T12:20:22Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=BPC RiskManager Software Suite - Risk, Compliance and Certification=
The BPC RiskManager Software suite is an enterprise grade risk management & governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version release, with updates released roughly every 3 months. Version 6 was originally released in 2006, and the Enrima Edition (the current release) was first released in 2008. The latest version was released in 2011. The system is updated continuously throughout the year, and clients are encouraged to actively participate in setting the development direction.
The Enrima edition of BPC RiskManager is a single-user and multi-user risk management, compliance management, financial statements certification, insurance, survey, and incidents & hazards system all in one application. You can manage multiple organisations and view governance issues simultaneously as risks, compliance obligations (legislation, processes and procedures) and compliance topics. It manages email based reminders internally for a large variety of user expectations.
BPC RiskManager is available in two product streams, both of which can be configured as single user desktop or massively multiuser networked solutions. The two product streams are:
{|width=100%
|-
|
* BPC RiskManager V5 (Express)
|[[image:BPCRiskManagerExpressV5.jpg]]
|-
|
* BPC RiskManager V6 (Enrima Edition)
|[[image:BPC_RiskManager_V6261_Main_Screen.jpg|600px]]
|}
=Client Base=
BPC RiskManager clients are headquartered in Australia, Canada, the United Kingdom and the United States of America. Global clients, of course, have offices in many other countries. [http://www.bishopphillips.com Bishop Phillips Consulting] has local offices in both Australia and North America.
The system is used extensively in the education sector, with a very substantial presence in universities in both Australia and Canada and in commercial education providers and colleges in the USA. Other significant client groups include insurance providers (both primary insurers and reinsurers), government agencies (federal, state/province and local), and postal, electrical and water utilities.
BPC RiskManager implements and substantially extends the risk management standards "AS/NZS 4360:2004 Risk Management" and "ISO 31000", and complies with "ISO/IEC Guide 73 - Risk Management - Vocabulary".
RiskManager is not restricted to a single interpretation of the risk standards. As a consequence of its long market history, BPC RiskManager implements a large number of divergent risk management methodologies. Any combination of one to three assessment groups, each containing ratings for likelihood, consequence and control, is possible. For example, some clients use a risk management methodology that utilises risk budgets with three rating groups ("Inherent, Residual and Target"), where inherent ratings shift with external factors and target shifts with the corporate risk appetite (ie a risk budget), while the residual floats according to assessment ratings.
Any number of self assessments in each group can be maintained together with a separate family of assessments and remediations created by audit/expert that coexist with management's risk assessments.
Whether your preferred risk methodology uses quantification (quantitative risk analysis) or qualification (qualitative risk analysis), BPC RiskManager directly supports the approach on a per assessment basis. Terminology (including field names, field purpose and screen captions) is fully customisable, so the system can directly implement the corporate risk methodology / risk method.
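The grid-assignable ratings mentioned throughout this page follow the familiar likelihood-by-consequence matrix pattern of AS/NZS 4360 and ISO 31000. The sketch below shows the general idea with a generic 5x5 grid; the band labels, the multiplicative score and the cut-offs are illustrative assumptions for this sketch, not BPC RiskManager's actual configuration, which is end-user customisable.

```python
# Generic 5x5 likelihood/consequence scales (illustrative labels only).
LIKELIHOOD = ["Rare", "Unlikely", "Possible", "Likely", "Almost certain"]
CONSEQUENCE = ["Insignificant", "Minor", "Moderate", "Major", "Catastrophic"]

def grid_rating(likelihood: str, consequence: str) -> str:
    """Map a qualitative likelihood/consequence pair to a risk band
    using a simple formula-based variant (score = L x C)."""
    l = LIKELIHOOD.index(likelihood) + 1    # 1..5
    c = CONSEQUENCE.index(consequence) + 1  # 1..5
    score = l * c                           # 1..25
    if score >= 15:
        return "Extreme"
    if score >= 8:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

print(grid_rating("Likely", "Major"))  # 4 * 4 = 16 → "Extreme"
```

A grid-assigned variant would replace the `score` formula with a direct lookup table keyed on the (likelihood, consequence) cell, which is how many organisations express their corporate risk matrix.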
=Get a Fully Functional Evaluation Copy of BPC RiskManager for FREE=
You can get a free no-obligation fully functional copy of BPC RiskManager (Enrima Edition) simply by completing the request form here:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php I want to evaluate BPC RiskManager without obligation for free, please.]
It will work for 60 days, and if you need more time you can contact us and request a longer evaluation. There are no limitations in the evaluation version, and we will even give you support for free while you get it running. It is fully self-installing and will open your first risk database as soon as the installer finishes.
If it isn't right for you, you can just uninstall after the 60 days with no further obligation to us.
=Knowledge Base=
*[[BPC RiskManager V6 Enterprise (Enrima Edition)]]
** [[BPC RiskManager V6 Enterprise (Enrima Edition)| BPC RiskManager Features]]
** [[BPC RiskManager V6.2 Network Architecture]]
** [[RM625ENT Installation Instructions|BPC RiskManager V6.2.5 Installation Instructions]]
** [[BPC RiskManager Frequently Asked Questions|BPC RiskManager - Frequently Asked Questions]]
** [[BPC RiskManager Quick Help With Common Tasks]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
** [[BPC RiskManager V6 on 64 bit Windows]]
*[[BPC SurveyManager - Overview]]
** [[BPC Surveymanager - Key Features]]
** [[BPC SurveyManager - Introduction]]
** [[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
** [[BPC SurveyManager - Creating Surveys - The Page Script]]
** [[BPC SurveyManager - Questions and Input Controls]]
** [[BPC SurveyManager - Creating Surveys - Properties]]
** [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
** [[BPC SurveyManager - The Built In Reports]]
** [[BPC SurveyManager - Advanced Database Configuration Settings]]
** [[BPC SurveyManager - Client Overview]]
** [[BPC SurveyManager - Tutorials - Survey Layouts]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
<noinclude>
[[Category:Featured Article]]
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
dc7ccfc5f7d790cb2dd0c17b50cdde25c14ee35b
BPC RiskManager V6.2 Network Architecture
0
4
500
484
2018-10-29T12:20:23Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
[[Image:BPCRM NetDiag.png]]
BPC RiskManager is an N-Tier application. The primary layers are:
* Database Server layer
* Application Server layer
* Client layer
The core application set does not require a web server but certain optional capabilities do.
You will require a web server if you will be:
* Using the browser plugin client component.
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
While the browser plugin client component can be served by any brand of web server, you will require IIS 5+ if you plan to be:
* Using the HTTP/HTTPS communication protocol between the client layer and the application server
* Using the BPC SurveyManager / Web Forms engine
Both of these capabilities use ISAPI libraries running on an IIS server. If you will be using the HTTPS communication protocol, you will also need an SSL certificate installed on the web server.
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
96accad095e3f378d468445d6bc5231ced78bf76
BPC RiskManager Frequently Asked Questions
0
5
502
486
2018-10-29T12:20:23Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
{|width="100%"
|width="60%" VALIGN="Top" |
# [[How do I get a copy of BPC RiskManager V6.2.5?]]
# [[Would it be possible to get a copy of the BPC RiskManager V6 installation guide?]]
# [[Is there a feature listing for the BPC RiskManager windows client and the browser client?]] We are looking at the possibility of using a mixed client environment based on user-specific needs and locations.
# [[When are multiple BPC RIskManager server licenses required?|When are multiple BPC RiskManager server licenses required?]] We are looking to have RM implemented across a group of companies. They will all be using the same instance with the same fields and definitions, as the subject matter is the same. Can we use a single server license, or will we require multiple server licenses?
# [[Can you please provide information on the cost of licensing and the type of licensing for BPC RiskManager V6.x ?]]
# [[Does your license include the cost of MS SQL Server ?]]
# [[I just purchased BPC RiskManager. Will you be sending the install disks, and when?]]
# [[What will need to be arranged prior to the installing BPC RiskManager?|What will need to be arranged prior to installing BPC RiskManager?]]
# [[Does the RiskManager client application work with FireFox browsers?]]
# [[In what programming language is BPC RiskManager written?]]
# [[Does the RiskManager plug-in itself have a certificate like a java applet does?]]
# [[For support, what type of support is available (i.e.: email, phone, onsite, etc...)?]]
# [[What is the best way to get support?]]
# [[How do I arrange installation support and what is the timeline?]]
# [[What support packages are available and at what cost?]]
# [[Is there a cost associated with telephone support (i.e.: cost per call or issue)?]]
# [[How do I get custom features added, or request new features for BPC RiskManager?]]
# [[Is there a User Group Forum?]]
# [[What type of documentation, technical and user is available for BPC RiskManager?]]
# [[How does one decide the optimum BPC RiskManager configuration?]]
# [[Is BPC RiskManager a Client-Server application?]]
# [[What is the difference between the browser plugin and the windows executable RiskManager client?]]
# [[Database stability: Is the RiskManager essentially a SQL Server application ported to Oracle?]]
# [[Database support: Which database choice will give us the best level of support?]]
# [[Security: What is the most secure architecture for BPC RiskManager?]]
# [[What is the best client version - the browser or non browser Risk Manager client?]]
# [[What admin account rights are required to setup a browser plug-in?]]
# [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7|How do I configure IE for the RiskManager browser plugin?]]
# [[BPC RiskManager Server - After installing in production or adding an application server|We just ported our enterprise system to a new server and I can't login. What do I do now?]]
# [[Steps For Migrating RiskManager V6.x from Test To Production|How do I port BPC RiskManager from test (or dev) to production?]]
# [[BPC RiskManager V6 on 64 bit Windows|How do I install BPC RiskManager onto a computer running a 64bit Windows OS?]]
| VALIGN="Top"|
<noinclude>
{|align="right" width="100%" cellpadding="10px"
|- style="background-color:#FFEBCD; " width="100%"
|'''A Frequently asked Question is...'''
|-
|<div class="didyouknow2" STYLE="height: 600px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=3000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;"></div>
|}
</noinclude>
|}
<noinclude>
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:Bishop Phillips Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinksCategoryHead|CT=RiskManager FAQ|CN=The frequently asked Questions Category}}
</noinclude>
25cfdeccbd4a292afa2715e0cff010008b205d54
The Stakeholder Community Network Model
0
288
504
394
2018-10-29T12:21:38Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Re engineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid to late 1990's and reflected both my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from existence of the fore-runners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole of government reform agenda, the tail end of the TQM experiment, shift from paper to online work flow both intra and inter business, rise of risk management, progressive adoption of balanced score cards, appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990's), and the rise of on-line portals, peer managed corporate forums, application service providers, enterprise scale ERP and CRM systems, and web based B2B systems and the emergence of cataloguing standards. I have used it heavily over the years. It has been modified over time, to accommodate learnings from organisations that survived economic, technological social and political reversals and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory posits that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, some communities are effectively permanent because they have survived multiple generations or multiple business cycles. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last for little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
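The non-exclusive nature of these classes can be sketched as a set of combinable roles. The class names come from the list above; the representation itself (a Python flag enum) is purely illustrative:

```python
from enum import Flag, auto

class Role(Flag):
    """The eight non-exclusive participant classes of a community."""
    MEMBER = auto()
    BENEFICIARY = auto()
    SUPPLIER = auto()
    PATRON = auto()
    GOVERNOR = auto()
    CUSTODIAN = auto()
    PARTNER = auto()
    PUBLIC = auto()

# Every participant is at least a member; further classes may be combined
# freely because the classes are non-exclusive.
participant = Role.MEMBER | Role.SUPPLIER | Role.PATRON

assert Role.MEMBER in participant       # all participants are members
assert Role.GOVERNOR not in participant # but not every class applies
```

A single participant carrying several roles at once is exactly what "non-exclusive" means here: the classes overlap rather than partition the membership.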
The more mature the community, the more clearly these roles are differentiated and actively operating. The longer a community is to remain stable, the more important it is that the duties implied in these roles be fulfilled.
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is to interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary and become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community will transition to an internal, contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly the community identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably at around the same time in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming with self-nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied it); others may be agreed through shared experience. Shared common interests and the need to improve the predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows handshake agreements may need to be formally agreed and recorded, and individuals will be formally allocated roles and leadership agreed. Along the way as disagreements arise (like who should bat first) dispute resolution mechanisms will be required.
Thus a community has been formed and gradually self-organised. If the initial casual group fails ever to define roles or find equipment supplier(s), it will be most unlikely to ever get to the stage of even the first game. If it fails to agree its meeting place and times of meetings it will probably not achieve the second game. If it fails to identify its membership, establish an identity (and therefore a brand), and perform all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of these will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note that when we look at the permanent communities within many organisations, several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that, having grown well beyond the oversight of their founders, have split into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities
# a custodian of information and provider of infrastructure for communities
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated, others are not planned but facilitated (such as professional associations, unions, standards bodies), and others are neither planned nor facilitated, but perhaps accommodated (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron, the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, and through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc., is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feedback and feedforward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance of the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than concentrating them exclusively in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, selecting from available products (by watching videos and reading information and product comparisons provided from a central location), submitting special orders online, responding to questions from other clients in hosted forums, and advertising the organisation's products and quality on organised reviewer sites or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, the identification and labelling of existing implicit and explicit communities, and of the physical and virtual flows between them, against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally accommodates many different and potentially divergent simultaneous strategies while setting a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities of which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self-organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation, geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but an organisation will also usually need communities formed around skills and interests (like, at the very least, consuming or providing something) in order to achieve its purpose.
Within each community formed around shared interests or skills are a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship and, by excluding inter-member interaction, it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community, supply roles span from marketing assistance (reviews, discussions and forum participation) to support assistance in peer help spaces, and even product improvement and testing, such as in software beta programmes. On the supplier and partner side, community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation. (IE: the groups cannot operate with complete autonomy as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise wide collection of staff injured in forklift truck accidents, or a worldwide extra net of ECL policy advisers, or suppliers and corporate buyers for raw materials,... or any one of a long list of possible organisation specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So in a sense resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
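As a rough sketch of that digraph, the following models communities as nodes and labels each directed link with the functional dependency it carries. The community names and dependency labels are invented for illustration; they are not part of the model itself:

```python
from collections import defaultdict

class StakeholderNetwork:
    """Communities are nodes; each directed edge records the functional
    dependency (information, goods, services, funding, ...) that one
    community draws from another."""

    def __init__(self):
        # source community -> list of (provider community, dependency label)
        self.edges = defaultdict(list)

    def add_dependency(self, consumer, provider, dependency):
        """Record that `consumer` depends on `provider` for `dependency`."""
        self.edges[consumer].append((provider, dependency))

    def dependencies_of(self, community):
        """All communities this one depends on, with the dependency carried."""
        return self.edges[community]

net = StakeholderNetwork()
net.add_dependency("Sales", "Logistics", "delivery services")
net.add_dependency("Sales", "Customers", "demand information")
net.add_dependency("Customers", "Sales", "product information")

# Each directed, labelled link is a candidate service level agreement:
for provider, dep in net.dependencies_of("Sales"):
    print(f"Sales <- {provider}: {dep}")
```

Because each edge carries a direction and a functional attribute, enumerating the edges of such a structure is exactly the "first layer of potential service level agreements" described above.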
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve in practice. We cannot know, and thus enumerate, every possible stakeholder community, as each resource, and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain, is potentially a community.
Another way of viewing completeness is to test that all members of the organisation's stakeholder domain are also members of one or more of the communities in the network. The network is then complete in terms of an organisation's resource coverage.
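The resource-coverage test above amounts to checking that no stakeholder is left outside every community. A minimal sketch, assuming a simple set-based representation with invented names:

```python
def uncovered_resources(stakeholder_domain, communities):
    """Return the resources in the stakeholder domain that belong to no
    community; an empty result means the network is complete in terms of
    resource coverage."""
    covered = set()
    for members in communities.values():
        covered.update(members)
    return set(stakeholder_domain) - covered

# Illustrative data: four stakeholders, two communities.
domain = {"alice", "bob", "carol", "dave"}
communities = {
    "customers": {"alice", "bob"},
    "suppliers": {"carol"},
}

missing = uncovered_resources(domain, communities)
print(sorted(missing))  # "dave" belongs to no community yet
```

In this sketch the network fails the coverage test until "dave" is placed in at least one community, at which point `uncovered_resources` returns an empty set.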
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing, from the net providers to the net consumers within the community, but is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure, you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter-productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome such as "quality improvement" or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Ennoblement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community work flows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
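One of the tasks listed above - mapping inter-community work flows and identifying duplicated flows - can be sketched as a simple tally over a flow map. The community pairs and service names below are invented for illustration:

```python
from collections import Counter

# Each flow: (source community, target community, service flowing between them)
flows = [
    ("clients", "workforce", "support_request"),
    ("customers", "governance", "billing_report"),
    ("clients", "workforce", "support_request"),  # a duplicated flow
]

def duplicated_flows(flows):
    """Return flows that appear more than once in the flow map."""
    return [flow for flow, n in Counter(flows).items() if n > 1]

print(duplicated_flows(flows))  # [('clients', 'workforce', 'support_request')]
```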
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools, and the specialised applications or services needed.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top level communities there are, the better the efficiency gains in the entire model will be. Unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen, with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects is likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, the model aims to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies, and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
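Since a single participant is often a member of several of these communities at once, a minimal sketch of SCNM03 membership might look like the following. The participant names are invented, and the validation rule is an assumption for illustration:

```python
# Sketch: the eight SCNM03 top-level communities as a fixed vocabulary,
# with each participant allowed to belong to several communities at once.
SCNM03 = {"clients", "customers", "suppliers", "partners",
          "custodians", "workforce", "governance", "public"}

def assign(memberships, participant, *communities):
    """Record a participant's membership, rejecting unknown communities."""
    unknown = set(communities) - SCNM03
    if unknown:
        raise ValueError(f"not SCNM03 communities: {sorted(unknown)}")
    memberships.setdefault(participant, set()).update(communities)

memberships = {}
assign(memberships, "state_government", "customers")  # pays for the service
assign(memberships, "resident", "clients", "public")  # receives it, and is a citizen
print(sorted(memberships["resident"]))  # ['clients', 'public']
```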
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members, and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity, while also being the least organisable and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with the degree of influence the organisation (the meta-community) has in the specific community being examined, and this influence itself varies over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where the timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees - and each of these in turn being comprised of even smaller communities - say divisions, units and ultimately individuals - we see that the more we subdivide the group, the closer we get to a community of one member: the individual. In the very short term, humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is the employee. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member; in that community, one person fulfils all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action, with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well being of the larger organisation (as well as the individual). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors members of one or more of the other communities. We are therefore encouraged both to define our service offering away from competition and to structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lies in the public community.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are specific differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
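The recursive subdivision of communities into more specialised sub-communities can be sketched as a small tree walk; the hierarchy below is an invented example following the workforce split just described:

```python
# Sketch: a community hierarchy as nested dicts, with the most specialised
# (leaf) communities found by a recursive walk. Structure is illustrative only.
hierarchy = {
    "workforce": {
        "contractors": {},
        "staff": {
            "division_a": {},
            "division_b": {},
        },
    },
}

def leaf_communities(tree):
    """Yield the most specialised (leaf) communities in the hierarchy."""
    for name, subcommunities in tree.items():
        if subcommunities:
            yield from leaf_communities(subcommunities)
        else:
            yield name

print(sorted(leaf_communities(hierarchy)))  # ['contractors', 'division_a', 'division_b']
```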
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things it was manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community, where they are firmly focused on customer (note - not client) desires, and the materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
Several outcomes of the model are immediately apparent from this example. The first is that the model blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internally or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
Client portals are a classic result of the client stakeholder focus. In a local government these might take the form of a resident portal, where a city ratepayer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer, the client might have access to a portal with product information, product enhancements, support, manuals, training, an online store, peer forums, product reviews, a newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Road construction authorities are partners who provide accident minimisation services, traffic impact control services, etc., that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, executive team, performance review committee, inter-government reporting, risk, and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder networks model is recursive. It applies organisation wide and through each sub grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation – rather than external to it.
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint-ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied it results in improved and more cost efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
f54d609f8d240d7ba168fd4d101ce36b7edfe76b
Business Process Reengineering - Process Charting
0
289
506
396
2018-10-29T12:21:38Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2012 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its business processes, including mapping data flows, documenting process steps, designing automated and hybrid systems, defining intra- and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagrammatic representation of a set of processes that models the enveloping organisation as if it were a machine with a functional domain that encompassed the diagrammed processes.
From a computational perspective, a business process chart is a diagrammatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
===What are the Characteristics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seamlessly represent both automated and non-automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of a well designed computer programming language - albeit a visual one. These properties include grammatical (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, and the rules for "legal" combinations of these symbols and meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
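To make the grammar analogy concrete, a charting grammar can be approximated as a symbol vocabulary plus rules for which symbol connections are legal. The symbols and rules below are invented for illustration and are not BPC's actual charting grammar:

```python
# Sketch: a toy charting grammar. The symbols are the syntactic elements;
# LEGAL_EDGES encodes the rules for "legal" combinations of symbols.
SYMBOLS = {"person", "process", "data_store"}
LEGAL_EDGES = {
    ("person", "process"),      # a person may trigger a process
    ("process", "data_store"),  # a process may write to a data store
    ("data_store", "process"),  # a process may read from a data store
}

def illegal_edges(chart):
    """Return the connections in a chart that violate the grammar."""
    return [edge for edge in chart if edge not in LEGAL_EDGES]

chart = [
    ("person", "process"),
    ("process", "data_store"),
    ("person", "data_store"),  # illegal under this toy grammar
]
print(illegal_edges(chart))  # [('person', 'data_store')]
```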
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" should be able to be "proven" - at least theoretically. This implies that an algebraic representation (eg predicate calculus) of the charted process should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in process modelling is a complex topic, and one fraught with some potentially counter-productive implied solutions.
For example, a charting system with a unique symbol for every process might achieve completeness, but it would achieve this at the expense of very high grammatical complexity.
The strength of the process charting approach lies specifically in its ability to categorise, simplify and standardise our view of a social system. If one measure of language complexity lies in the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rules that will be required to define their use.
Complexity, under such a measure, is minimised when the number of unique predefined "terms" is minimised. The more restricted the symbol set, however, the more symbols must be used to represent simple, everyday repeating processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicity of its symbol set (and therefore grammar) can, however, lead to diagrammatic complexity.
Certain objects and their processes occur so frequently that diagrammatic complexity is reduced significantly by expanding the core set of symbols as shown in [[Business Process Reengineering - Chart Key]].
==Charting Example - Electronic Grants Management System==
The material on the following pages demonstrates the business process charting method as designed by this author and refined with input from clients and staff of BPC over 24 years. The example charts show the BPC Process Reengineering Model and the BPC Stakeholder Community model in action in a real-world situation. The resulting demonstration is a fully functional government grants management process for whole-of-government administration of government grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
73ac152a4b245146897670bf781740106b14b9ef
Business Process Reengineering - Chart Key
0
290
508
398
2018-10-29T12:21:39Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
==Chart Symbols and Their Meanings==
[[IMAGE:BPRChartKeyV4.gif]]
==Process Charting Design Rules==
===Introduction - Key Concept===
The full process charting model forms a language for accurately describing processes and other object relationships. The language can be represented either diagrammatically or descriptively (textually). A chart drawn according to the charting method describes a network of unstructured interacting objects (processes, people, etc) and the data output states of this network as it consumes data through its inputs.
The charting method goes beyond a standard process flowchart in that its symbol grammar is sufficiently consistent and structured as to enable the translation of the chart to a text description. The text description takes the form of a program that in turn could be executed directly or translated / re-coded into a standard application programming language as an executable application.
This ability to reliably define a program simply by documenting a real world process according to the design rules below allows an automated modelling testbed to be constructed from the chart, which can then be stress tested with different data loads or error types, checked for deadlocks and bottlenecks, or compared against alternate process designs. Such testing and analysis can be done either manually or via automation.
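A minimal sketch of the idea (illustrative only; the chart encoding and threshold are assumptions, not part of the BPC method): once a chart is expressed as data, simple analyses such as bottleneck detection become mechanical checks over that data.

```python
# Hypothetical encoding of a small two-step chart as data, plus a
# trivial bottleneck check: a connector whose queue of waiting data
# items exceeds a load threshold is flagged.
chart = {
    "connectors": {
        "c1": {"from": "intake", "to": "assess"},
        "c2": {"from": "assess", "to": "approve"},
    },
    "queues": {"c1": 120, "c2": 3},  # items waiting on each connector
}

def bottlenecks(chart, threshold=50):
    """Return connectors whose queued data exceeds the threshold."""
    return [c for c, n in chart["queues"].items() if n > threshold]

print(bottlenecks(chart))  # ['c1'] - the intake->assess flow is congested
```

Deadlock checks, load simulation and design comparison are the same pattern at larger scale: mechanical traversals of the chart-as-data.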
There are a number of different symbols and descriptive encoding rules, but in essence many of these enhancements exist for diagrammatic efficiency. The core of the charting system revolves around one meta (undrawn) symbol - data - and a few drawn symbols. The full model merely expands on these to provide a richer descriptive set and more analytic detail, with fewer individual diagrammatic elements being required to represent an idea than would otherwise be the case.
All symbols are one of three classes:
* Objects - Things that originate, transform, store or consume data
* Events - Both consumers and originators of event data. Events may receive and/or generate an excite or inhibit signal.
* Connectors - Lines joining events and objects through which data flows
===The importance of Data===
The lifeblood of the process diagram (or description) is "data". It is data that flows through the connectors joining event or object to event or object. Data is created when an event fires, or when a data origination object manufactures or otherwise supplies data. Data is stored in data stores and transformed in processes. Data is discarded in data sinks.
Data is inherently transient and never drawn as a symbol, although it is documented. When data is stationary it is held in a data store. A document with writing on it is therefore a data store - not the data itself. Likewise a database record is a data store, not the data itself.
Data is virtual and can take many forms. It may be a piece of information a human would understand, or an electronic blip with a voltage value that excites or inhibits the recipient proportionately.
Data is infinitely divisible, immutable and transformable.
Like energy, data can neither be created nor destroyed across the entire universe of processes, but within the context of any subset of processes less than the infinite set of all possible processes, data can be originated and discarded.
When data is held in a data store it transforms the data store in some way. In a paper document data store, it results in a blank sheet displaying written or image data. In a manufactured item "data store" it results in the transformation of petrochemicals and metals into a consumer item like a lamp shade or a car.
===The Class of Objects===
<div class="mainfloatright" style="width:40%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" align=right>
[[Image:RecursiveShapes.png]]
''All objects are recursive and containers.''
[[Image:BPC4KeyChartObj.png]]
''All objects or events are connected by lines called connectors.''</div>
The key chart comes with a number of design usage rules that are perhaps a little unusual and therefore should be considered carefully:
* All symbols are either events, objects or connectors (lines or arrows).
* All objects (except events) are recursive - meaning that they can include nested members of the same type as the parent (as well as other types), a constrained subset of the child objects or, in some cases, unrestrained subsets. In computational terms a recursive function is one that invokes itself; while this form of pure recursion of objects is rare in process maps, it is legal within the charting rules.
* All objects are potentially containers of other objects and, therefore, all objects are notionally sets of one or more objects. (Object encapsulation)
* Objects contained within a parent inherit the in and out flows (connectors) of the parent - or rather they inherit the right to use the flows. (Object inheritance)
* All objects and/or events are connected by lines called connectors, or by being recursively embedded in a parent object - which then becomes a container for that object.
* Data flows through the connecting lines into the objects where it is stored, and/or transformed and/or distributed. Data is ethereal and moves from one place to another, transforming and being transformed by the vessels in which it is stored. A document, for example, is therefore considered to be a data store - not the data itself. A manufactured item is also a data store, containing the end result of multiple processes, each transforming the storage vessel. This is the key concept that enables this process charting method to transcend both service and manufacturing process modelling domains.
* The arrows connecting objects are data-flows - referring to the movement of information, not explicitly the media on which the information is stored at the time.
* Connecting Arrows can take a number of annotations, including:
** identification of the data stream (or data streams)
** a filter condition for access
** selector bars
** optional (conditional) flags
** authorisation signature lock
** global type flags (like E for error flows) and/or
** weights and fuzzifiers (mainly used for neural and Bayesian process modelling)
* Objects are scriptable
* All objects (and ideally, though not mandatorily, connectors) have unique identifiers.
* All objects can be contained in multiple container objects simultaneously - but each occurrence of an object is globally unique - and therefore has the same definition everywhere it appears.
* All objects can be containers and as such may be "drilled through" to their content
* A process object may be a "map" (transformational or distributive) or a "controller" (quality governor).
* A process fires or executes when all required inflows have data present (asynchronous).
* Events impose a block on some or all functions of the connected object until the event fires.
* All processes are assumed to operate concurrently when data is present on their incoming connectors, or an event fires, unless also constrained by other events blocking the object's functions. Events may thus operate as a clock or trigger, and as a governor or inhibitor.
* The data-flow method is capable of modelling both excitatory networks and inhibitory process networks.
* Everything, that is not a connector or event, is an object of one type or another - including the organisation itself.
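The asynchronous firing rule above can be sketched in a few lines. This is an illustrative model under assumed names (`Process`, `receive`), not BPC's implementation: a process object fires only when every required inflow connector holds data, consuming that data as it does.

```python
# Illustrative sketch of the asynchronous firing rule: a process
# fires when all required inflows have data present, and consumes
# the inflow data on firing.
class Process:
    def __init__(self, name, inflows):
        self.name = name
        self.inflows = {f: None for f in inflows}  # connector -> data

    def receive(self, connector, data):
        """Place data on an inflow connector, then attempt to fire."""
        self.inflows[connector] = data
        return self.try_fire()

    def try_fire(self):
        # Fire only when every required inflow holds data.
        if all(v is not None for v in self.inflows.values()):
            consumed = dict(self.inflows)
            for f in self.inflows:
                self.inflows[f] = None  # data is consumed on firing
            return consumed
        return None

p = Process("approve", ["application", "signature"])
print(p.receive("application", "form-1"))  # None - still blocked
print(p.receive("signature", "sig-1"))     # fires with both inputs
```

Blocking events would add further guards to `try_fire`, and running many such objects concurrently yields the asynchronous network the rules describe.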
===Object Hierarchy===
There is an implied object as container hierarchy (although not in any way mandatory):
* Entities can contain processes and all other objects
* Processes can contain processes and all other objects
* Data-stores can contain data-store objects
This hierarchy is very much a rough rule of thumb, for there are many cases where a data-store will be modelled as containing processes and data-stores - such as where the data-store is intelligent. Entities like organisations or people are, however, better seen as external to the process unless they are containers of the process, as they will always have some processes that are not modelled in any given chart and are therefore potentially unreliable.
===Entities and Entity Groups===
Notionally, every process can have a controlling entity (particularly where a person is actually performing the process itself). In the charting method, processes are not "owned" by people (although this is how one tends to conceptualise them) so much as controlled by them. In its pure form the process chart would show "process owners" as controlling entities connected to their processes and thus, like events, constraining their execution unless present and active. To avoid diagrammatic clutter, where a process is controlled by a single entity (or single entity group), that entity (or entity group) can be identified in the process "owner-controller" property in the process description.
An entity group might be a typing pool, call centre staff pool, a community, etc. Each member of the entity group is interchangeable with each other member with respect to the process concerned. Individual entities within the entity group may have other filters, conditions and constraints that subsequently exclude them from actually controlling the process. An entity group may be a sub-group of another entity group, such as C-level executives in a company entity, or administration staff in a stakeholder community.
With the exception of community entities (which are effectively both an entity and an entity group), all entities and entity groups are presented using the same symbol. This is consistent with the central assumptions about entities with respect to the view of the process flows presented in a chart.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
2a16cb2e0b8c5acd961534a7b1bbbfc19b9883c1
Main Page
0
1
510
1
2018-10-29T12:27:29Z
Bishopj
1
1 revision imported
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
5702e4d5fd9173246331a889294caf01a3ad3706
514
510
2018-10-30T07:15:45Z
Bishopj
1
wikitext
text/x-wiki
='''The BPC RiskWiki'''=
__NOTOC__
'''''SPONSORED BY:'''''
[[Image:BPCTitle75PERC.jpg]]
{|width="100%"
|-width="100%"
|
<table align=left style="background-color:#FFEBCD;margin-right:0.9em" cellpadding="2" cellspacing="1" >
<tr>
<td>
==Quick Index==
* [[Contents]]
*'''Articles about BPC Software Systems'''
** [[BPC RiskManager Software Suite|BPC RiskManager]]
** [[BPC SurveyManager - Overview|BPC SurveyManager]]
** [[BPC RiskManager Frequently Asked Questions]]
** [[Bishop Phillips - Software Library Reference for Developers]]
*'''Articles about Governance Function Business Methods'''
** [[Internal Audit]]
** [[Risk Management]]
** [[Managing Risk in Mergers & Acquisitions]]
*'''Articles about General Management Methods'''
** [[Business Process Reengineering]]
** [[Report Writing]]
*'''Articles about Virtual Worlds'''
** [[Virtual Learning Systems]]
*'''About The RiskWiki'''
** [[About The RiskWiki]]
** [[Contributors]]
</td>
</tr>
</table>
==Introduction to the RiskWiki==
This wiki is sponsored by Bishop Phillips Consulting (http://www.bishopphillips.com/) for the education, use and enjoyment of our clients, educators, the public and professionals involved in management consulting and risk advisory, compliance, internal audit, insurance claims management, safety, governance and risk analysis industries. It provides reference articles on management, risk and risk related functions including: Risk Management, Internal Audit, Governance, Compliance, and Process Reengineering, etc.
The RiskWiki is based on the articles, methods, manuals and papers of primarily three firms: Bishop Phillips Consulting P/L, Stanton Consulting Partners and Bishop Finance P/L. These firms are contributing a large body of work amassed over many years' experience with hundreds of clients. The project to convert and upload much of our BPC software help & manuals, extended body of consulting, risk and internal audit methods and models, and education and research materials is a large and time-consuming one, so the RiskWiki content changes frequently and will do so for the foreseeable future.
With the exception of all software documentation, and those additional documents marked otherwise, all written material on this site may be used freely by readers for any purpose, including reproduction, subject only to the retention of moral rights by the authors. Some articles may include images for which additional permission may be required prior to reproduction. Software documentation may be duplicated in hard copy for internal use by registered users of the systems with current maintenance agreements. Other uses of software systems documentation will be considered on written request.
==Things to See in The RiskWiki==
===BPC RiskManager===
*'''''Are you looking for BPC RiskManager Documentation or to learn more about the software?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:BPC_RiskManager_V6261_Main_Screen.jpg|100px|link=BPC RiskManager Software Suite]]
</div>Bishop Phillips supplies [[BPC RiskManager Software Suite|the BPC RiskManager suite of governance software]] that provides a complete governance solution across risk management, controls management, compliance management, insurance management, claims management, incident & hazard management, audit risk management, governance document management and survey generation and management. The system can be installed in configurations ranging from single-user to very large scale enterprise configurations.
The system is particularly suited to managing and reporting on the risk and compliance management tasks of government agencies, whole of government, special project, not-for-profits, insurance providers, service industries, utilities, and tertiary education sectors. You will find an extensive body of information covering [[BPC RiskManager Software Suite|technical, administration and user level tasks here]].
If you have questions they may be answered in our [[BPC RiskManager Frequently Asked Questions|frequently asked questions]].
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: auto; padding-left:10px; padding-right:10px;" >
{|align="left" width="100%"
|- style="background-color:#FFEBCD; padding-bottom:10px;" width="100%"
|[[BPC RiskManager Frequently Asked Questions|'''Frequently Asked Questions About BPC RiskManager''']]
|-width="100%"
|
<div class="didyouknow2" STYLE="height: 400px;
border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-top:20px; padding-bottom:20px; padding-right:10px;" >
{{#dpl: includepage=*
|includemaxlength=1000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=RiskManager FAQ
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;">
</div>
|- style="background-color:#FFEBCD; width:100%;"
|'''Featured Article...'''
|-width="100%"
|
<div class="didyouknow2" STYLE="height: 400px; border: thin solid black; display: block; overflow: auto; padding-left:10px; padding-right:10px; padding-top:20px; padding-bottom:20px; " >
{{#dpl: includepage=*
|includemaxlength=4000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|category=Featured Article
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%| Read More..]],\n
}}
</div>
<div style="clear: both;">
</div>
|}
</div>
===BPC SurveyManager===
*'''''Are you looking for BPC SurveyManager Documentation or to learn more about the software?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:BPCSurveyManager_DTCV7_SurveyEdit_Screen.jpg|link=BPC SurveyManager - Overview|100px]]
</div>Bundled with the BPC RiskManager suite and also supplied in both hosted and installed forms, the BPC SurveyManager software solution is an outstandingly versatile interactive web page generation engine using a survey model as the design and data storage paradigm. While being outstanding at survey creation and management, the software is powerful enough to build conventional data-input web pages. The full [[BPC SurveyManager - Overview|technical and SM language programming documentation is available from here]].
===Research into Virtual Worlds in Business & Education===
*'''''Are you looking for our virtual Learning research papers?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:Second_Life_042.jpg|link=Virtual Learning Systems|100px]]
</div>Through our Virtual Worlds research group - "Waisman Learning Systems" - we do extensive work in the development of virtual learning and business spaces in SecondLife, and undertake considerable formal research into the application of Virtual Worlds to learning. You will find technical and text book material in our [[Virtual Learning Systems|Virtual World Learning Systems pages]]. There is an extensive overview of the literature and history of virtual worlds, a very large bibliography, details of our in-world networked lecture theatre control systems and lecture delivery systems, and complete documentation of an extensive academic study undertaken by our WLS team into the effectiveness of different approaches to delivering course material in 3D virtual worlds at achieving learning outcomes.
You will find an extensive reading list and bibliography of works covering virtual worlds and virtual reality concepts, history, ideas, related technologies, and application in learning as well as relevant papers on learning taxonomies and teaching concepts relevant to [[VirtualWorldLearningReferences|virtual world learning systems here]].
===Internal Audit and Management Science===
*'''''Are you heading up an Internal Audit Team or learning internal audit methods?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:ALSBA.png|link=Internal Audit|100px]]
</div>If yes, you will find complete enterprise level internal audit methods and manuals on this site cross linked to our other management papers. The internal audit manuals cover everything from managing the audit team through planning the audit program to the detail of designing the audit, conducting interviews and undertaking the controls analysis; to reporting the results. Everything you are likely to need to [[Internal Audit|manage and train an internal audit team is here]].
*'''''Are you a manager, management consultant or student of Management Science?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[image:BPRAnalyticStructure.png|link=Category:Management Science|100px]]
</div>You will find articles covering topics of general management and process management methods in the RiskWiki including the detailed theory and practice of planning, process re-engineering, control theory and our proven theories in stakeholder network organisation modelling. The work here is generally unique to this site. All methods have been used extensively and effectively in practice. Start here with [[Business Process Reengineering|process engineering]].
*'''''Are you managing a merger or an acquisition?'''''
<div class="imagewrap" style="float: left; margin-right:10px;" >[[Image:MnA_WhyMerge.jpg|link=Managing Risk in Mergers & Acquisitions|100px]]
</div>Take a look [[Managing Risk in Mergers & Acquisitions|here first and learn about the risks]] in mergers and acquisitions and successful strategies for managing them from our team, who have been through it successfully from both sides of the equation multiple times.
|}
==Take A Random Look At The RiskWiki==
{|width="100%"
|- style="background-color:#FFEBCD;" width="100%"
|'''From the Vault of the BPC RiskWiki...'''
|-
|
<div class="didyouknow" width="100%" STYLE="height: 400px;
border: thin solid black; display: block; padding-left:10px; padding-right:10px; overflow: auto;" >
{{#dpl: namespace=
|includepage=*
|includemaxlength=1000
|escapelinks=false
|resultsheader=__NOTOC__ __NOEDITSECTION__
|randomcount=1
|mode=userformat
|addpagecounter=true
|listseparators=<H2>, [[%PAGE%]]</h2>\n,[[%PAGE%|Read More..]],\n
}}
</div>
<div style="clear: both;">
</div>
|}
fe11cdf09d2919b101f866094f5422febc55b2fd
Category:Featured Article
14
327
511
2018-10-29T12:39:59Z
Bishopj
1
Created page with "RiskWiki Articles Selected for featuring on the main page."
wikitext
text/x-wiki
RiskWiki Articles Selected for featuring on the main page.
bde26f2fdecd84e7be185c5790c62c7702428a5f
Template:BackLinks
10
267
512
271
2018-10-30T04:26:50Z
Bishopj
1
wikitext
text/x-wiki
<section begin=BackLinks />
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
<section end=BackLinks />
4f77f055f7967a0fba0a9aab32936d542a230633
515
512
2019-09-10T03:54:14Z
Bishopj
1
wikitext
text/x-wiki
<section begin=BackLinks />
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
<section end=BackLinks />
8eff6ab712292a9972eb320dbdad3d855bfe979b
516
515
2019-09-10T04:53:56Z
58.179.33.71
0
wikitext
text/x-wiki
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
1b51f6dd361c25b7f9b0783f6230584ce61c344b
517
516
2019-09-10T04:58:24Z
58.179.33.71
0
wikitext
text/x-wiki
=BackLinks=
{{ linksto={{FULLPAGENAME}} }}
a285e920470feeb67666f88004691740f751d198
518
517
2019-09-10T05:00:06Z
58.179.33.71
0
wikitext
text/x-wiki
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
1b51f6dd361c25b7f9b0783f6230584ce61c344b
Getting Started
0
328
513
2018-10-30T07:13:51Z
Bishopj
1
Created page with "<strong>MediaWiki has been installed.</strong> Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki so..."
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
5702e4d5fd9173246331a889294caf01a3ad3706
Template:Extension DPL
10
329
519
2019-09-10T05:15:05Z
Bishopj
1
Created page with "<noinclude> [http://mediawiki.org/wiki/Extension:DynamicPageList DynamicPageList (DPL)] is a flexible report generator for MediaWikis. <br/> <u>The following articles in th..."
wikitext
text/x-wiki
<noinclude>
[http://mediawiki.org/wiki/Extension:DynamicPageList DynamicPageList (DPL)] is a flexible report generator for MediaWikis.
<br/>
<u>The following articles in this wiki use DPL:</u>
{{#dpl:
|uses=Template:Extension DPL
|format=,\n [[%PAGE%|%TITLE%]],
}}
{{#dpl:
|noresultsheader=
|resultsheader=<u>Some of the above articles use a special cache to store DPL results:</u>\n\n
|uses=Template:Extension DPL cache
|format=,\n cache ID = %PAGEID% [[%PAGE%|%TITLE%]],
}}
</noinclude>
9e92a63af3ec449a1c80b61531a63488e1e05492
BPC SurveyManager - Overview
0
330
520
2019-09-10T05:55:48Z
Bishopj
1
Created page with "=Overview= Note: If you are here looking for installation instructions for SurveyManager as bundled with BPC RiskManager you will find instructions here. BPC RiskManage..."
wikitext
text/x-wiki
=Overview=
Note: If you are here looking for installation instructions for SurveyManager as bundled with BPC RiskManager
you will find instructions here. [[BPC RiskManager - Install The SurveyManager]]
BPC SurveyManager is a powerful web/HTML rules-based forms engine that creates, distributes, manages, analyses and reports dynamically generated HTML and paper surveys, and can even be used to build and manage general web sites. The system is available as part of the BPC RiskManager suite, individually, or as a hosted internet service. The name does not do this application justice. With a large range of built-in forms, components and input control capabilities it is versatile enough to be used to create an entire website, smart enough to hold interactive rules-driven natural language “conversations” with users, and simple enough that a survey can go from design to live in less than 15 minutes. It produces reports instantly by changing a couple of parameters, and supports complex hierarchical organisational structures such as a state with regions and schools, a corporate group with divisions, departments and business units, or a university with faculties, schools and departments, etc.
Originally designed to allow an organisation of 70,000+ employees to collect compliance checklist data in the last few minutes of a working day, it was built from the outset to be very fast and to scale to tens of thousands of simultaneous users. The system will happily work on a laptop or a large server farm. For example, the system has been used for many years by the Victorian Government to provide the annual Learner’s Satisfaction Survey covering thousands of students in the state of Victoria.
Consisting of two engines – the maintenance engine and the survey engine – BPC SurveyManager is delivered with multiple front ends:
*Survey Creation & Management:
**A simplified pure web/html survey management client for general use
**A powerful full featured windows application survey management client for general use
*Survey Delivery & Response
**A stateless ISAPI dll hosted on MS Windows IIS (any version) delivering dynamically constructed HTML pages providing surveys and reports.
=Example Real world applications:=
*Compliance surveys and control checklists (built in management of repeating surveys, archiving of responses, auto-locking options on completion)
*Marketing surveys
*HTML & rules driven application configuration interfaces
*User response & rules driven training systems
*Learning management system for student testing with prerequisite control (built in learning management capabilities allow for testing and marking and checking of prerequisites of students)
*Content management system for a web site with dynamically generated page content based on rules and user selections
*360-degree HR surveys (dedicated capabilities for support of 360-degree survey designs)
*Management and board performance report production using surveys as templates (switch between data entry and read-only modes with a single command)
*Performance audits and customer satisfaction surveys
*Employee induction and exit interview recording
*Online job and tender applications
=BPC SurveyManager Technology Requirements=
BPC SurveyManager will run on MS Windows 98SE, 2000, XP, 2003, Vista and 2008, and on 64-bit Windows. The survey engine and browser based maintenance clients are ISAPI DLLs. BPC SurveyManager requires:
*If running on 64bit Windows, MS WoW is required.
*SQL Server 2000 / 2005 / 2008 or MSDE 2000 / SQL Express.
*MDAC 2.8 (standard with all Windows OSs after Windows XP)
*MS IIS 5+ (IIS6+ preferred)
The minimum PC hardware is 500 MB RAM and 16 MB disk space for the survey engine, 20 MB disk space for the browser based management client and 20 MB disk space for the Windows management client. If the survey engine will generate graphical report components, a reasonable graphics card on the server is required, as images are dynamically generated rather than stored. Survey response databases can grow quite large where thousands of responders are involved, so production installations should allow for significant database growth.
The survey engine is stateless and will support server farms.
You can get BPC SurveyManager today by completing this enquiry form, or emailing sales@bishopphillips.com
=Example Surveys=
There are a number of example surveys showing just a few of the things you can do with BPC SurveyManager at:
[http://cool.bishopphillips.com/sm/OTXSurveyManager1.dll/DoSurvey? Live Examples]
Everything at this link is a "survey" displayed by SurveyManager, including the front page and the randomised news server. There is no "hand" customisation of the pages. Having said that, some page layouts date back to 1999, so some of the pages look a little dated now. The purpose of the demo link is to show the functionality rather than illustrate how nicely we can design a web page. Hence you will see in some surveys some pretty ugly transitions on a page, which are done to demonstrate the degree to which a layout can be changed midstream.
=Knowledge Base=
*[[BPC Surveymanager - Key Features]]
*[[BPC SurveyManager - Introduction]]
*[[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
*[[BPC SurveyManager - Creating Surveys - The Page Script]]
*[[BPC SurveyManager - Questions and Input Controls]]
*[[BPC SurveyManager - Creating Surveys - Properties]]
*[[BPC SurveyManager - Creating Surveys - Rules Scripting]]
*[[BPC SurveyManager - The Built In Reports]]
*[[BPC SurveyManager - Advanced Database Configuration Settings]]
*[[BPC SurveyManager - Client Overview]]
*[[BPC SurveyManager - Web Client Manual]]
*[[BPC SurveyManager - Tutorials - Survey Layouts]]
*[[BPC RiskManager and BPC SurveyManager Importer Masks]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
2a31a6d09a0ceef91b28d3cd8f10e3d6318370cf
Bishop Phillips - Software Library Reference for Developers
0
331
521
2019-09-10T05:58:23Z
Bishopj
1
Created page with "=Developer Reference Library - Introduction= Language: Delphi 7 - 2007 These pages are being mounted on the riskwiki for the convenience of delphi software developers usin..."
wikitext
text/x-wiki
=Developer Reference Library - Introduction=
Language: Delphi 7 - 2007
These pages are mounted on the RiskWiki for the convenience of Delphi software developers using the BPC software libraries. Unless you are a developer with access to the source or binaries of these libraries, these pages will be of no interest to you. Over 15 years these libraries have grown to such a scale that they are no longer easily referenced by the help manuals.
The libraries are not currently distributed as standalone linkable binaries or compilable source, except to programmers engaged on BPC software development who have appropriate third party licenses where required. In some cases the libraries require current development licenses for additional third party products. BPC will consider individual requests for supply of specific libraries, but emphasise that not all requests can be satisfied as individual libraries may use code that is the copyright of third parties, and for which you must have a valid developer's license. Each such request will be considered on a case by case basis.
The following pages are extracts from the header comment sections of the relevant libraries.
=BPC Core Libraries=
These libraries are the most heavily used support libraries in the SM and RM technologies. All are Win32 D7-D2007 compatible, but have not been adjusted for Unicode.
*[[BPCStndLib1]] - Language Tokenising and Parsing, tree, date and file support library
*[[bpcSMScriptLibrary_1]] - BPC String Manipulation Library 1
*[[bpcSMScriptLibrary_2]] - bpcXML Language Version 1
*[[bpcSMScriptLibrary_3]] - PopUp Menu Utility Routines
*[[bpcSMScriptLibrary_4]] - Plugin DLL remote command node & data exchange routines
*[[bpcSMScriptLibrary_5]] - Value List Editor Utility routines
*[[bpcSMScriptLibrary_6]] - ADO Database Connection Utility routines
*[[bpcSMScriptLibrary_7]] - Useful Types 1
*[[bpcSMScriptLibrary_8]] - bpcXML Data Transfer Utility Routines
*[[bpcSMScriptLibrary_9]] - TStringGrid manipulation Library from BPC
*[[bpcSMScriptLibrary_10]] - TTIWDBAdvWebGrid Manipulation Routines
*[[bpcSMScriptLibrary_11]] - TClientDataSet BPC Library
*[[bpcSMScriptLibrary_12]] - MHTML MS CDO Interface - BPC Library
*[[bpcSMScriptLibrary_13]] - Convert an ADO Recordset to XML and back again
*[[bpcSMScriptLibrary_14]] - Graphics manipulation and conversion routines JPG and BMP
*[[bpcSMScriptLibrary_15]] - TDBAdvGrid Routines
*[[bpcSMScriptLibrary_16]] - TwwDBGrid Routines
*[[SpareTemplatePage]] - Ignore Page Template only
=BPC General Component Libraries=
*[[bpcStringList]] - TStringlist Name/Value pair manipulation
*[[bpcDBBookMarkList]] - DB Bookmark list manager
*[[bpcADSI]] - Active Directory support through ADSI interface
*[[bpcWin32Service]] - Win32 Service Management
*[[ExportADOTable]] - Exporter for ADO tables
*[[bpcMSSpellCheck]] - MS Spellcheck and Thesaurus Wrapper
*[[bpcwwRichEdSpellChck]] - InfoPower RichEdit mod to use the BPC late-binding MS Spellcheck and Thesaurus Wrapper
*[[BPCPageControl]] - Modified MS Windows page/tab control to support customised colouring of tabs and borders
*[[WSXDCompressUtilities1]] - Streaming Compression using LHA for comms
*[[bpcDBGrid]] - Modified InfoPower WW grid to support column sorting
*[[RMSQLAdminLib]] - Powerful DB Desktop Support Library for MS SQL Databases
*[[DataTranADO]] - Data transfer library for moving data between a central DB and many remote DBs, with support for synchronising across DBs when they cannot use an ADO connection.
=BPC Email Support Libraries=
*[[bpcMailLib1]] - Email DNS resolver and mail authority routines
*[[SMTPIndySendMail]] - Smart email sender (supports html & text, attachments and embedded images in email. Spam safe)
*[[OutlookSendMail]] - Sends email via Outlook (alternative for desktop apps when SMTP is not available)
*[[EMailUnit]] - Message wrapper class. Used to assemble a message for SMTPIndySendMail
*[[SendMailThreadUnit]] - Threaded emailer for sending emails in the background for SMTPIndySendMail
=BPC HTML & Browser Component & Support Libraries=
*[[bpcHTMLEditDesigner]] - TbpcHTMLEditDesigner (implementation of Lindsay Larson's MS HTML edit wrapper)
*[[bpcHTMLEditHost]] - TbpcHTMLEditHost
*[[bpcDBEmbeddedWB]] - TbpcDBEmbeddedWB : A DataBase aware implementation of TEmbeddedWB from Balsa
*[[bpcDBEmbeddedWB_INIT]] - Factory initialiser for the bpcDBEmbeddedWB.
=Non-BPC Controls=
*[[GUIDEx]] - Control for Manipulating GUIDs
*[[ParseRequest]] - Improved Multi-part WebRequest data handling
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
ff4512e6894545787694dc8f7bd8b1d6f24cf28e
Internal Audit
0
332
522
2019-09-10T06:00:19Z
Bishopj
1
Created page with "==Topic List== * [[Internal Audit Method]] * [[Report Writing]] * [[Internal Audit Policy & Framework For the Larger Insurance Company]] ==Backlinks== [[Main Page]]"
wikitext
text/x-wiki
==Topic List==
* [[Internal Audit Method]]
* [[Report Writing]]
* [[Internal Audit Policy & Framework For the Larger Insurance Company]]
==Backlinks==
[[Main Page]]
91a13a3f1fb8b7281d7dc54a35aedb3f4c7cd615
Business Process Reengineering
0
333
523
2019-09-10T06:06:49Z
Bishopj
1
Created page with "==Topic List== * [[Business Process Reengineering - Introduction]] * [[Business Process Reengineering - Project Plan]] * [[The Stakeholder Community Network Model]] * Bu..."
wikitext
text/x-wiki
==Topic List==
* [[Business Process Reengineering - Introduction]]
* [[Business Process Reengineering - Project Plan]]
* [[The Stakeholder Community Network Model]]
* [[Business Process Reengineering - Process Charting]]
==Backlinks==
* [[Main Page]]
e410958e8139ccd91e22137f79b6a37392ab792c
Virtual Learning Systems
0
334
524
2019-09-10T06:10:09Z
Bishopj
1
Created page with "==Waisman Learning Systems== Through its virtual world research group (Waisman Learning Systems) Bishop Phillips Consulting has done extensive work in the development of 3D s..."
wikitext
text/x-wiki
==Waisman Learning Systems==
Through its virtual world research group (Waisman Learning Systems) Bishop Phillips Consulting has done extensive work in the development of 3D social space learning systems particularly in SecondLife. This section of the riskwiki is dedicated to surfacing some of the work our people have been doing in this space.
We are particularly proud of the cutting-edge work done by one of our team, Dianne Bishop, for her master's thesis, which is available in wiki form here.
* [[Real Learning In Virtual Worlds]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
32a760c9d8325ac1faa3d6b370fbedc42ec65ff3
About The RiskWiki
0
335
525
2019-09-10T06:17:00Z
Bishopj
1
Created page with "=What is The RiskWiki & How Do I Contribute?= ==About The RiskWiki== This wiki is sponsored by Bishop Phillips Consulting (http://www.bishopphillips.com/) for the use and en..."
wikitext
text/x-wiki
=What is The RiskWiki & How Do I Contribute?=
==About The RiskWiki==
This wiki is sponsored by Bishop Phillips Consulting (http://www.bishopphillips.com/) for the use and enjoyment of our clients, educators, the public and professionals involved in the management consulting and risk advisory and analysis industries. It provides reference articles on management, risk and risk-related functions, including Risk Management, Internal Audit, Governance, Compliance and Process Reengineering.
Over the years Bishop Phillips Consulting has amassed a large body of work and articles based on our experience with hundreds of clients and we take great pleasure in sharing that work with the world wide community. Over the coming months we will be progressively converting and uploading much of our body of consulting, risk and internal audit methods and models. It is a major task and will take many months, so please come back often.
While Bishop Phillips Consulting is, at least initially, the primary contributor, you are welcome to add material and expand the reference base. As the volume of non-BPC contributions grows, we will revise this front page to be general and non-BPC branded in nature.
==How do I contribute?==
Contributors are encouraged to:
<ul>
<li> Create a short biography and add it to the contributors section.
<li> Add their organisations to the contributing firms page, with an appropriate description of your business.
</ul>
To protect the content on this wiki from the random destructive editing that can occur on the wikipedia, while still allowing freedom of shared editing, all contributors must apply for an edit-enabled account. This is done by clicking on the login/create account link at the top of the page as normal and completing the application page. A short biography about you (50 words minimum) is required, but this is only to establish that you are genuinely keen to participate constructively. The information will be kept confidential if you desire. You do not have to be an expert to participate - just well intentioned, and desirous of having your work read by others.
We check the request logs at least daily, so you should receive your account details by return email within the day.
The wiki master reserves the right to move your article to another part of the RiskWiki should it be better suited to a different location. Initially, we suggest that you place your work in the general content index and we will move it to the correct topic set as it becomes clear where it belongs.
All contributions must be provided with the understanding that the work is public, and may be edited by others (although you may request the WikiSysop to protect your articles from changes), and that the material must be able to be used freely by readers, subject only to the retention of moral rights by you (i.e. you must be attributed as the author in any re-use of your work and your work must be represented accurately).
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
35ca466d8e8c0bf6a586c2174025fbdd2a568dfd
Contributors
0
336
526
2019-09-10T06:20:48Z
Bishopj
1
Created page with "=Contributing Authors & Firms= ==Authors== Special thanks are given to our contributing authors: * [[Jonathan Bishop]] * [[Dianne Bishop]] * [[Paul Reynolds]] * Rachel Curr..."
wikitext
text/x-wiki
=Contributing Authors & Firms=
==Authors==
Special thanks are given to our contributing authors:
* [[Jonathan Bishop]]
* [[Dianne Bishop]]
* [[Paul Reynolds]]
* Rachel Curry
==Contributing Firms==
* [http://www.bishopphillips.com Bishop Phillips Consulting]
* [http://www.bishopphillips.com/canada Bishop Phillips Consulting (Canada)]
* [http://www.bishopfinance.com/ Bishop Finance (Australia)]
* Bishop Finance Pty Ltd
* Stanton Consulting Partners (Australia)
* N2Vision (Canada)
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
b596da2a1769ba3fd81823af43c3ac19567edba2
527
526
2019-09-10T06:21:34Z
Bishopj
1
wikitext
text/x-wiki
=Contributing Authors & Firms=
==Authors==
Special thanks are given to our contributing authors:
* [[Jonathan Bishop]]
* [[Dianne Bishop]]
* [[Paul Reynolds]]
* Rachel Curry
==Contributing Firms==
* [http://www.bishopphillips.com Bishop Phillips Consulting]
* [http://www.bishopphillips.com/canada Bishop Phillips Consulting (Canada)]
* [http://www.bishopfinance.com/ Bishop Finance (Australia)]
* Stanton Consulting Partners (Australia)
* N2Vision (Canada)
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
2c3a3dcd60d4718d1813a770d6dfd8ab2809d4d7
Contents
0
337
528
2019-09-10T06:22:59Z
Bishopj
1
Created page with "==RiskWiki Detailed Index== *[[BPC RiskManager Software Suite]] *[[BPC RiskManager Frequently Asked Questions]] *[[BPC SurveyManager - Overview]] *Bishop Phillips - Softwar..."
wikitext
text/x-wiki
==RiskWiki Detailed Index==
*[[BPC RiskManager Software Suite]]
*[[BPC RiskManager Frequently Asked Questions]]
*[[BPC SurveyManager - Overview]]
*[[Bishop Phillips - Software Library Reference for Developers]]
*[[Internal Audit]]
** [[Internal Audit Method]]
***[[RIAM:Overview of the Method]]
**** [[RIAM:Overview: Rational Internal Audit Method - Introduction|Rational Internal Audit Method - Introduction]]
**** [[RIAM:Overview: Overview of the Scope of Work|Overview of the Scope of Work]]
**** [[RIAM:Overview: The Five Arms of RIAM - At a Glance|The Five Arms of RIAM - At a Glance]]
**** [[RIAM:Overview: The Client Service Plan (CSP)|The Client Service Plan (CSP)]]
**** [[RIAM:Overview: Risk Based Planning (RBP)|Risk Based Planning (RBP)]]
**** [[RIAM:Overview: Control Implementation Services (CIS)|Control Implementation Services (CIS)]]
**** [[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|The Assertion Linked Systems Based Audit (ALSBA)]]
**** [[RIAM:Overview: Tactical Quality Assurance Strategy (TQAS)|Tactical Quality Assurance Strategy (TQAS)]]
***[[RIAM:Risk Based Audit Planning]]
***[[RIAM:Control Theory & Analysis]]
***[[RIAM:Conduct of the Very Large Audit|RIAM:Conduct of the Very Large Audit Project]]
****[[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of RIAM Control Systems Analysis in the very large audit project]]
*****[[RIAM:VLA:AUDIT INTERVIEWING|PHASE 1 to 4: INTERVIEWING]]
*****[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|PHASE 1: FAMILIARISATION, SCOPE AND PLANNING]]
*****[[RIAM:VLA:STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS|PHASE 1: STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS]]
*****[[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
*****[[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
*****[[RIAM:VLA:ANALYTIC REVIEW PROCEDURES IN INTERNAL AUDIT|PHASE 1 to 3: ANALYTIC REVIEW PROCEDURES]]
*****[[RIAM:VLA:AUDIT RISK ASSESSMENT & SENSITIVITY ANALYSIS|PHASE 1 to 3: RISK ASSESSMENT & SENSITIVITY ANALYSIS]]
*****[[RIAM:VLA:AUDIT SAMPLING AND AUDIT TESTING|PHASE 3: AUDIT SAMPLING AND AUDIT TESTING]]
*****[[RIAM:VLA:AUDIT REPORTING PROCEDURES|PHASE 4: AUDIT REPORTING PROCEDURES]]
*****[[RIAM:VLA:IA REVIEW AND QUALITY ASSURANCE|PHASE 1 to 4: REVIEW AND QUALITY ASSURANCE]]
****[[RIAM:SPECIALREVIEWS:Follow Up Audits|Special Reviews - Follow Up Audits]]
****[[RIAM:SPECIALREVIEWS:Payment Systems Implementation Audits|Special Reviews - Payment Systems Implementation Audits]]
****[[RIAM:SPECIALREVIEWS:New Programme Reviews (Government)|Special Reviews - New Programme Reviews (Government)]]
****[[RIAM:SPECIALREVIEWS:Programme Reviews (Government)|Special Reviews - Programme Reviews (Government)]]
****[[RIAM:SKILLS:CONDUCT OF EXIT INTERVIEWS|CONDUCT OF EXIT INTERVIEWS]]
** [[Report Writing]]
** [[Internal Audit Policy & Framework For the Larger Insurance Company]]
* [[Business Process Reengineering]]
** [[Business Process Reengineering - Introduction]]
** [[Business Process Reengineering - Project Plan]]
* [[Risk Management]]
** [[Risk Management - Introduction]]
** [[Managing Risk in Mergers & Acquisitions]]
* [[Virtual Learning Systems]]
** [[Real Learning In Virtual Worlds]]
* [[Contributors]]
==General Wiki Help==
Consult the [http://meta.wikimedia.org/wiki/Help:Contents User's Guide] for information on using the wiki software.
* [http://www.mediawiki.org/wiki/Manual:Configuration_settings Configuration settings list]
* [http://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]
* [http://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
==BackLinks==
*[[Main Page]]
ea4305dda7fb63c10560455c8ec56f85b476a9ea
Internal Audit Method
0
338
529
2019-09-10T06:25:34Z
Bishopj
1
Created page with "==About The Author & The Article== [[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] Copyright 1995-2019 - Moral Rights Retain..."
wikitext
text/x-wiki
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2019 - Moral Rights Retained.
This article and all pages referenced from here may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com]. The principal author of all pages in the following list is [[Jonathan Bishop]]. The author acknowledges the contributions, improvements, help and comments of the innumerable critics and users of the method over many years; however, errors and omissions are the responsibility of the principal author alone.
==The Method In Detail==
*[[RIAM:Overview of the Method]]
** [[RIAM:Overview: Rational Internal Audit Method - Introduction|Rational Internal Audit Method - Introduction]]
** [[RIAM:Overview: Overview of the Scope of Work|Overview of the Scope of Work]]
** [[RIAM:Overview: The Five Arms of RIAM - At a Glance|The Five Arms of RIAM - At a Glance]]
** [[RIAM:Overview: The Client Service Plan (CSP)|The Client Service Plan (CSP)]]
** [[RIAM:Overview: Risk Based Planning (RBP)|Risk Based Planning (RBP)]]
** [[RIAM:Overview: Control Implementation Services (CIS)|Control Implementation Services (CIS)]]
** [[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|The Assertion Linked Systems Based Audit (ALSBA)]]
** [[RIAM:Overview: Tactical Quality Assurance Strategy (TQAS)|Tactical Quality Assurance Strategy (TQAS)]]
*[[RIAM:Risk Based Audit Planning]]
*[[RIAM:Control Theory & Analysis]]
*[[RIAM:Conduct of the Very Large Audit|RIAM:Conduct of the Very Large Audit Project]]
**[[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of RIAM Control Systems Analysis in the very large audit project]]
***[[RIAM:VLA:AUDIT INTERVIEWING|PHASE 1 to 4: INTERVIEWING]]
***[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|PHASE 1: FAMILIARISATION, SCOPE AND PLANNING]]
***[[RIAM:VLA:STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS|PHASE 1: STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS]]
***[[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
***[[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
***[[RIAM:VLA:ANALYTIC REVIEW PROCEDURES IN INTERNAL AUDIT|PHASE 1 to 3: ANALYTIC REVIEW PROCEDURES]]
***[[RIAM:VLA:AUDIT RISK ASSESSMENT & SENSITIVITY ANALYSIS|PHASE 1 to 3: RISK ASSESSMENT & SENSITIVITY ANALYSIS]]
***[[RIAM:VLA:AUDIT SAMPLING AND AUDIT TESTING|PHASE 3: AUDIT SAMPLING AND AUDIT TESTING]]
***[[RIAM:VLA:AUDIT REPORTING PROCEDURES|PHASE 4: AUDIT REPORTING PROCEDURES]]
***[[RIAM:VLA:IA REVIEW AND QUALITY ASSURANCE|PHASE 1 to 4: REVIEW AND QUALITY ASSURANCE]]
**[[RIAM:SPECIALREVIEWS:Follow Up Audits|Special Reviews - Follow Up Audits]]
**[[RIAM:SPECIALREVIEWS:Payment Systems Implementation Audits|Special Reviews - Payment Systems Implementation Audits]]
**[[RIAM:SPECIALREVIEWS:New Programme Reviews (Government)|Special Reviews - New Programme Reviews (Government)]]
**[[RIAM:SPECIALREVIEWS:Programme Reviews (Government)|Special Reviews - Programme Reviews (Government)]]
**[[RIAM:SKILLS:CONDUCT OF EXIT INTERVIEWS|CONDUCT OF EXIT INTERVIEWS]]
fe3fe070bba7c18cbdd0d012b1d611759a670ad3
RIAM:Overview of the Method
0
339
530
2019-09-10T06:26:49Z
Bishopj
1
Created page with "==About The Author & The Article== [[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/] Copyright 1995-2019 - Moral Rights Retain..."
wikitext
text/x-wiki
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2019 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==Rational Internal Audit Method - Volume 1==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="left">
* [[RIAM:Overview: Rational Internal Audit Method - Introduction|Rational Internal Audit Method - Introduction]]
* [[RIAM:Overview: Overview of the Scope of Work|Overview of the Scope of Work]]
* [[RIAM:Overview: The Five Arms of RIAM - At a Glance|The Five Arms of RIAM - At a Glance]]
* [[RIAM:Overview: The Client Service Plan (CSP)|The Client Service Plan (CSP)]]
* [[RIAM:Overview: Risk Based Planning (RBP)|Risk Based Planning (RBP)]]
* [[RIAM:Overview: Control Implementation Services (CIS)|Control Implementation Services (CIS)]]
* [[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|The Assertion Linked Systems Based Audit (ALSBA)]]
* [[RIAM:Overview: Tactical Quality Assurance Strategy (TQAS)|Tactical Quality Assurance Strategy (TQAS)]]
</div>
</td>
</tr>
</table>
===About this series===
This volume is the first in the Bishop Phillips Consulting Internal Audit series. It presents a brief introduction and overview of the Rational Internal Audit Method (RIAM).
The entire series, taken as one, is a complete course in the conduct, management and reporting of internal audit. RIAM is unique in that it presents a systematic approach to management assurance, incorporating the principles of Total Quality Management in the methods for managing the audit operation, the approach to conducting the audit and the focus of the reviews.
RIAM is not a static product. Consistent with principles of KAIZEN it is continuously improved and updated with the experience and suggestions of your staff and clients. It is the result of our wide experience in providing management assurance services to many different clients in both Government and Commercial environments.
In RIAM, the traditional focus of Internal Audit (IA) on financial issues is considerably expanded to cover all aspects of business functions. Features of RIAM include:
* A wide focus for IA covering business planning through quality control to product delivery;
* Incorporation of "Best Practice" models that have evolved into separate consulting services;
* Incorporation of, and consistency with, the Institute of Internal Auditors Standards, Statements and Pronouncements;
* Consistency with the "Best Practices Statement" for Internal Audit developed by the Australian National Audit Office for the use and guidance of Commonwealth Agencies;
* Incorporation of systems for the participation of Internal Audit in the design stages of management and computer systems development;
* Use of the concept of "Assertions" as the basis of the Systems Based Audit;
* Incorporation of Risk Analysis in every aspect of the audit, from planning through to systems analysis; and
* Adoption of current theories in the science of Systems Analysis developed at Glasgow University and used by its consulting division.
It must be recognised that reading this course will not make you a good Internal Auditor. The importance of experience cannot be stressed too much. What RIAM will do is provide a method for systematically interpreting that experience and ensuring a predictable standard of service delivery.
* [[Internal Audit Method| Back To The RIAM (Main)]]
* [[Special:Whatlinkshere/RIAM_:_Overview_of_the_Method|What links here?]]
05cfde8600e4894fc8dcdec5a5ffdef3bba6bbd8
RIAM:Overview: Rational Internal Audit Method - Introduction
0
340
531
2019-09-10T06:29:34Z
Bishopj
1
Created page with "==What is Internal Audit?== Fundamentally, Internal Audit is a data collection and analysis arm for management. It objectively collects, sifts, analyses, evaluates, projects,..."
wikitext
text/x-wiki
==What is Internal Audit?==
Fundamentally, Internal Audit is a data collection and analysis arm for management. It objectively collects, sifts, analyses, evaluates, projects, and reports data on which management make decisions. It acts as a calibrator with which management can assess and tune its performance.
Internal Audit (also called Management Review or Management Assurance) goes beyond merely assessing system compliance, by evaluating the efficiency and effectiveness of the systems reviewed and adding constructive advice directed at areas of key concern to management.
RIAM extends this concept to one of Management Assurance. It recognises that management risk can be seen to have one source: failure in the quality of systems. These include strategic and tactical decision systems, financial control systems, product and service design systems, production systems, marketing systems, reporting systems, and delivery systems.
TQM, which incorporates both Total Quality Control and KAIZEN schools of management thought, emphasises the need for:
* Use of the scientific method and structured logical deduction in isolating the causes and effects of production, management and marketing problems;
* Focus on process rather than merely results;
* Customer-defined quality;
* Management by consensus;
* Rapid application of new technology - innovation;
* Continuous improvement in all systems;
* Adoption of a management philosophy emphasising:
** People and Teams (including Suppliers and Customers)
** Process Improvement
** Training and Education
** Statistical Quality Control
** Statistical Process Control
** Quality Assurance (Quality Planning and Quality Control); and
* Cross functional management (as well as the conventional functional management).
Criticism of TQM often revolves around the high cost of implementing the associated concepts of quality circles and suggestion systems used in the continuous improvement process, and the loss of direction that may be experienced if the process itself becomes uncontrolled. Merging Internal Audit into the TQM process offers one way to minimise the cost of implementing TQM and to manage the quality improvement systems directly and efficiently.
==Why is Internal Audit Important?==
Internal audit's principal concern is with quality of control systems and risk. Control is about mitigating management's risks. Risk can either be:
* Transferred (e.g. insured, incorporated, factoring).
* Tolerated (e.g. assumed or ignored, cost-benefit analysis).
* Treated (e.g. controlled, control systems, security).
* Terminated (e.g. exit or cease the business line, operational area, division, etc.).
Internal Audit provides the research, information, and analysis necessary for management to make an informed choice as to the appropriate handling of Risk.
==What is the focus of Internal Audit?==
The focus of Internal Audit reviews is to:
* identify aspects of administration which demonstrate sound management practice; and
* identify aspects of administration which should be improved; and
* recommend appropriate (constructive and achievable) modifications and improvements.
Through interviews and preliminary collection, the end product of the audits should be focused, from the beginning, on the expectations of management.
Both external and internal audit aim to express opinions. The key differences are the scope of the opinions and the identity of the client:
* External Audit expresses an opinion on the Financial Statements for the primary purpose of users of the financial statements, such as shareholders and creditors; while
* Internal Audit expresses an opinion on the systems operating within the Co-operative in line with the Scope of Work (below) for the use of the Board of Directors and management.
==How does Internal Audit deliver its service?==
The underlying aim is for audit and management to reach agreement on the nature, presentation of, and approach to the findings of a review. Audit should aim to foster an atmosphere that gives management confidence in directing audit to their areas of greatest concern and in saying what they wish.
The fundamental purpose of the audit is to:
'''''Bring about excellence in systems design and operation.'''''
To achieve this Internal Audit needs the cooperation of management because they ultimately have to act on Internal Audit's findings for the improvement to be realised. A consultative process from planning the audits to be undertaken, through the commencement of the audit, collection of data and finally the report and exit interview helps establish management ownership of the recommendations.
Internal Audit works at the direction of the Audit Committee and management, on issues important to, and specified by, these two groups. Generally, work is prioritised in accordance with management's perceived risks.
==What is the bottom line?==
The existence of effective, properly resourced Internal Audit within an organisation is a powerful statement of management's dedication to:
* quality,
* probity,
* accountability,
* legality,
* performance; and
* integrity.
More than a statement, Internal Audit should have a noticeable positive impact on the organisation's performance in delivering its product. It focuses on the "engine", leaving direction, product and service definition to management.
==Backlinks==
[[RIAM:Overview of the Method| Back To The RIAM : Overview (Main)]]
98bec63132d273ba1b6a65162cd82ef7d0acad6a
RIAM:Overview: Overview of the Scope of Work
0
341
532
2019-09-10T06:31:45Z
Bishopj
1
Created page with "==What is the Scope of Work for Internal Audit?== The Institute of Internal Auditors Standard 300 states Internal Audit's scope of work has five specific objectives of audit..."
wikitext
text/x-wiki
==What is the Scope of Work for Internal Audit?==
The Institute of Internal Auditors Standard 300 states that Internal Audit's scope of work has five specific objectives of audit opinion formation:
* [[#Safeguarding assets|'''S'''afeguarding assets]];
* [[#Compliance with policies, plans, procedures, legislation and directions etc|'''C'''ompliance with policies, plans, procedures, legislation and directions etc]];
* [[#Accomplishment of established goals and objectives for plans, procedures, operations and programs|'''A'''ccomplishment of established goals and objectives for plans, procedures, operations and programs]];
* [[#Reliability and integrity of information|'''R'''eliability and integrity of information]]; and
* [[#Economical and efficient use of resources|'''E'''conomical and efficient use of resources]].
These are explained in turn on the following pages.
==Compliance with policies, plans, procedures, legislation and directions etc==
Management is responsible for creating systems which ensure compliance with Co-operative policies, plans and relevant legislation while the Internal Auditor evaluates whether the systems thus created comply with management's objectives and the law, and ultimately determines whether operations comply with the systems model.
Legal compliance extends beyond the Public Service Acts, Insurance Acts and Corporations Law to the various operational statutes covering issues such as handling of dangerous chemicals, workplace safety, equal employment opportunity, education policy, import and transport regulations and offsets programs.
==Accomplishment of established goals and objectives for plans, procedures, operations and programs==
Management is responsible for establishing operating and program objectives and goals while the Internal Auditors should verify whether the department or section is achieving them.
Internal Auditors can assist management in developing and evaluating these objectives and goals by evaluating whether their underlying assumptions are appropriate, accurate, and consistent with the stated objectives, and whether current and relevant information is available and being used.
Commonly referred to as effectiveness, this role includes evaluating the existence and method of measurement/feedback systems for goal achievement.
==Reliability and integrity of information==
Information systems, whether manual or computerised, provide data for decision making, control and compliance with external requirements. It is therefore essential that the financial and operating records contain accurate, reliable, timely, complete and useful information. In addition, the controls over record keeping and reporting must be complete and effective.
==Economical and efficient use of resources==
Management is also responsible for '''''setting operating standards to measure economical and efficient use of resources'''''. Internal auditors are responsible for determining that:
* these standards have been '''''established''''';
* these standards are '''''understood''''' and are '''''being met''''';
* departures from these standards are being '''''identified, investigated and corrected'''''; and
* action has been taken to ensure the '''''departures are not repeated'''''.
Audits should identify:
* under-utilised resources;
* non-productive work or work practices;
* uneconomical procedures;
* inappropriate staffing; and
* ineffective organisation design.
In addition to issues such as organisation design, IT resource utilisation, personnel management and fleet management, this objective covers the purely financial areas of treasury management and financial management reporting systems.
==Safeguarding assets==
This refers, firstly, to the protection of assets from theft, fire, improper, unauthorised or illegal activities and exposure from nature (eg. sunshine, rain and wind etc); and secondly to the application of assets. It is commonly formulated as an assertion:
<div align=center>"''Assets are appropriately protected and applied''."</div>
Remembering that cash is an asset, the breadth of this objective includes the appropriate security and application of cash resources. Properly protecting and applying cash includes such matters as cash flow management, solvency and liquidity risk control.
==Backlinks==
[[RIAM:Overview of the Method| Back To The RIAM : Overview (Main)]]
ab4aaf5cebb5c4cd31ae2f265e234d0f71b6ed95
RIAM:Overview: The Five Arms of RIAM - At a Glance
0
342
533
2019-09-10T06:33:05Z
Bishopj
1
Created page with "==Introduction== The five arms of RIAM are: * [[RIAM:Overview: The Client Service Plan (CSP)|The Client Service Plan (CSP)]] * RIAM:Overview: Risk Based Planning (RBP)|Ris..."
wikitext
text/x-wiki
==Introduction==
The five arms of RIAM are:
* [[RIAM:Overview: The Client Service Plan (CSP)|The Client Service Plan (CSP)]]
* [[RIAM:Overview: Risk Based Planning (RBP)|Risk Based Planning (RBP)]]
* [[RIAM:Overview: Control Implementation Services (CIS)|Control Implementation Services (CIS)]]
* [[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|The Assertion Linked Systems Based Audit (ALSBA)]]
* [[RIAM:Overview: Tactical Quality Assurance Strategy (TQAS)|Tactical Quality Assurance Strategy (TQAS)]]
Driving these five arms is the motive of Management Assurance, which comes from the need to equip management with the information necessary for making the risk decision (transfer, control or ignore). This means providing comfort that the Internal Audit process results in the continuous and measurable improvement in the quality of the systems it uses.
When implementing and operating the audit function and conducting reviews the auditor must keep in mind the following key principles:
* '''Comfort stems from ownership of the outputs, ownership stems from receiving what our client wants in the way they want it.'''
The Client Service Plan (CSP) aims to involve our client in defining the "shape" of the service they receive from planning through performance standards and reporting. The CSP must result in us understanding their needs and the organisation we are auditing.
* '''Assurance is about management knowing their risks and making informed Cost / Benefit decisions for risk control, transfer or assumption.'''
The Risk Based Planning (RBP) uses an agreed basis for measuring risk and feeds both annual planning data and ongoing review results into a risk model for the organisation. This model forms the basis not only for prioritising activities and allocating resources but also for measuring the significance of audit findings. Our work should see a measurable reduction in the total risk of the organisation.
* '''Prevention is cheaper than detection and subsequent correction.'''
The Control Implementation Services (CIS) product provides advisory assistance in the design and construction of Control Systems in a structured, reproducible and verifiable format.
* '''A favourable or unfavourable opinion about a system must have a clear and logical basis.''' A report user must clearly understand exactly what issues are included in the review when relying on its findings.
The Assertion Linked System Based Audit (ALSBA) is a significant advance on conventional Systems Based Audits because it uses an agreed set of "hypotheses" against which systems are tested. We call these hypotheses "Assertions" because we assert the truth or falsity of the system's ability to sustain them. At the beginning of each review appropriate Assertions are identified and agreed with management.
The ALSBA is a remarkably versatile structure for analysis and forms the core of both the review process and the reporting structures. It is critical that assertions are agreed with management before a review commences, and that all findings are precisely tied back to assertions when reported. You must be able to back up your opinion by precisely identifying which assertions are affected, and how they are affected.
* '''The reliability of Internal Audit's work is directly related to the standard of Tactical Quality Control imposed.'''
The Tactical Quality Assurance Strategy (TQAS) addresses issues ranging from Assignment Management through training, interim Reporting, timeliness and usefulness of reports and advisory services. It includes such things as the planning, methods, reporting, training, use of technology, review, client feedback, and control of variances in standards of our processes and outputs.
Juran & Blakemore describe 6 principles of quality. Adapted to the Internal Audit function, these can be summarised as:
* Satisfying the client's/auditee's needs;
* Building quality as the intent of all processes;
* No waste;
* Employee & auditee involvement;
* Reduce variation;
* Training.
==Backlinks==
[[RIAM:Overview of the Method| Back To The RIAM : Overview (Main)]]
5f4bfe4a173afe7cd72449030fad435e51f12a9d
RIAM:Overview: The Client Service Plan (CSP)
0
343
534
2019-09-10T06:34:10Z
Bishopj
1
Created page with "==Introduction== The preliminary steps in providing an Internal Audit (IA) service are in establishing the audit function. The Client Service Plan (CSP) details the method to..."
wikitext
text/x-wiki
==Introduction==
The preliminary steps in providing an Internal Audit (IA) service are in establishing the audit function. The Client Service Plan (CSP) details the method to be used to implement, manage and improve the IA function within the client. Most of the CSP is directed at managing the quality and operating conditions of IA.
==The Core Objectives of the CSP==
The core objectives of the CSP are:
* Market the IA service to the client's staff. We must aim for a clear understanding of the services, method of operation, and systems for improvement to be used by IA.
* Implement and resource the key controlling bodies. This includes defining the quality control teams, relevant quality circles and particularly establishing the Audit Committee.
The Audit Committee is the principal co-ordinating body of both the external and internal audit activities. The committee generally includes Internal Audit management, senior executive management and Board representatives. The committee defines and administers the "Audit Charter" and defines and oversees "The Audit Terms of Reference". It meets regularly, participates in the planning process, reviews all audit reports and ensures recommended action plans are implemented.
* Establish the Client Service Team (CST). The CST forms the core management team for the engagement and is the primary mechanism for ensuring consistency of audit personnel for a client, capturing client feedback on our performance and implementing the continuous improvement of the service provided to the client.
* Design and develop the key controlling documents. These documents are:
<table width="100%">
<tr>
<td>
'''The Audit Charter'''
</td>
<td>
One of the critical documents, the Charter forms "the constitution" governing the audit committee and the audit function. It details the functions, obligations and responsibilities of the various members of the audit process.
</td>
</tr>
<tr>
<td>
'''The Terms of Reference'''
</td>
<td>
The second of the critical documents, the Terms of Reference establishes the protocol for action and activities of the Internal Audit function.
</td>
</tr>
<tr>
<td>
'''The Client Specific RIAM Manual'''
</td>
<td>
The procedures and report designs to be used must be tailored to each client. This is done in conjunction with management to ensure that both our methods and reporting standards are those required by the client. Quality starts with a client focus, and is defined in terms of the client's needs.
</td>
</tr>
<tr>
<td>
'''The Performance Guarantee'''
</td>
<td>
The document identifies Internal Audit Performance Guarantees and Critical Success Factors, with accompanying strategies and Performance Indicators. This is the key document against which IA is evaluated as the engagement progresses.
</td>
</tr>
</table>
* Gain familiarity with the organisation's culture and management's key risks and opportunities. The key activity to achieve this is the Strengths Weaknesses Opportunities Threats and Constraints Analysis (SWOTC) conducted during the Initiation Forum. It primarily provides information to the IA planning team to help identify the Risk Ranking criteria for planning, relevant legislation, skill requirements, and the probable areas of greatest benefit for the organisation. It forms the basis of the Organisation Risk Model developed during the Risk Based Planning phase.
* Establish the current standing of Quality Control within the organisation. Basically a Quality Audit, this is an optional exercise following the Soin model. It provides us with information on the existing quality management systems used by the organisation (allowing us to better integrate IA into the existing quality control systems), and defines both the current attitude to, and sophistication of, quality management. Where this has not been conducted within the organisation before, a side benefit is that the organisation's executive usually identifies many opportunities for improvement from this one activity.
The review requires the involvement of the senior executive. The preferred model is Soin's under the headings:
<ul>
<li> Planning Process
<li> Customer Obsession
<li> Improvement Cycle
<li> Process Management
<li> Total Participation
</ul>
The joint activities at this stage allow Internal Audit management and the Organisation's executive to establish a rapport and develop confidence operating as a unit.
==The Process of the Client Service Plan==
The basic steps in the CSP are:
At Start Up
* Establish the Audit Committee
* The Audit Initiation Forum
* Tailor methods and client liaison systems
* Development of the key control documents
* Development of the internal IA marketing plan
* Establish the Client Service Team
* Identification of Internal Audit Performance Guarantees and Critical Success Factors;
* Conduct the Quality Audit
Throughout the Engagement
* Seek client feedback through formal and informal means, including regular visits by the Independent Quality Service Partner to facilitate feedback in an anonymous manner;
* Revise and improve delivery systems;
* Annual (or more regular) analysis of Internal Audit Productivity and Quality (called "Qualativity" in TQM);
* Implement IA marketing plan;
* Identification and integration of Industry Best Practice with the client's organisation;
* Monitoring of Internal Audit Performance Guarantees, Indicators and Critical Success Factors.
==Backlinks==
[[RIAM:Overview of the Method| Back To The RIAM : Overview (Main)]] ||
[[RIAM:Overview: The Five Arms of RIAM - At a Glance | Back To The Five Arms - At a Glance]]
a2764ba6ff2e03fa2d52a24d18fc3335704843d5
RIAM:Overview: Risk Based Planning (RBP)
0
344
535
2019-09-10T06:35:34Z
Bishopj
1
Created page with "==Introduction - The Result of the Internal Audit Planning Process== The planning process results in a 3 year Strategic Internal Audit Plan and a 1 year Rolling Tactical Int..."
wikitext
text/x-wiki
==Introduction - The Result of the Internal Audit Planning Process==
The planning process results in a 3 year Strategic Internal Audit Plan, a 1 year Rolling Tactical Internal Audit Plan, and a Risk Model for the organisation. The Risk Model should integrate and utilise the corporate strategic risk plan.
===The Strategic Audit Plan and Tactical Audit Plans===
The Strategic Audit Plan details the:
* Objectives of the Plan and the Period Covered
* The Plan Scope and Boundary
* Principal Acts Affecting Operations
* The Total and Annual Man Day Requirements
* The Personnel and Skill Requirements (with estimated time requirements)
** Plan Administration and Criteria for Plan Modification
** The Activities to be Audited, Relevant Objectives and Procedures
** The Schedules
*** 3 Year by activity
*** Annual by activity
*** Annual by chronology
* The Tactical Audit Plan details the tasks, commencement dates, duration and (in later years) summarises the status of the tasks undertaken to date
Auditable activities are identified within their organisational units together with:
* appropriate audit objectives and assertions,
* specific issues of legislative compliance,
* specific issues of management assistance to be addressed,
* the general types of procedures required to meet the objectives and establish or refute the assertions.
The working papers contain additional information such as records of interviews and background data.
===The Organisation Risk Model===
The risk model is initially developed during the strategic audit planning stage and continuously updated with the results of audit reviews, and improvement effects of the implemented recommendations or other management strategies and action plans.
==The Basis to Planning==
There are two pillars to the planning process proposed:
# Administrative Infrastructure for the Plan;
# The Plan.
The basis to the Plan is:
* Establishment of objectives
* Prioritising objectives
* Selection of appropriate goals
* Determine Procedures to meet goals
* Cost Procedures (in staff resources and time)
* Review and select most efficient procedures
The basis to the Administrative Infrastructure for the Plan is the need for administration of the plan and flexible planning. The performance of a plan must be monitored during execution to ensure adherence and to identify the need for additional or different audit resources. The detail of the plan must be regularly reviewed to determine whether the plan needs modification for changing circumstances.
The administrative infrastructure supports these needs and provides the framework for change management. By separating it from the Plan, we highlight the importance of plan management in the Internal Audit function and allow separate budgeting, evaluation and control of the ongoing quality of the Plan.
==The Approach to Planning==
The primary objective of the approach is Management Assistance.
The Internal Audit section is ultimately a tool for management to use in controlling and tuning operations. The reports generated from the Internal Audit program must be seen by management to be relevant. We place a high emphasis on interviewing staff at all levels to help establish a comprehensive management assistance program.
The objective is met by work programs that are focussed to five goals:
<table border=1 align=center>
<tr>
<th >ITEM</th ><th >OPINION FOCUS</th ><th >SUB FOCUS</th >
</tr>
<tr>
<td >
1
</td>
<td>
Reliability and Integrity of Information
</td>
<td>
* Accurate Information
* Reliable Information
* Timely Information
* Complete Information
* Useful Information
* Correctly Accumulated Information
* Fully and Correctly Disclosed
</td>
</tr >
<tr >
<td>
2
</td>
<td>
Compliance With
</td>
<td>
* Policies
* Plans
* Procedures
* Legislation
* Regulations and Treaties
</td>
</tr >
<tr >
<td>
3
</td>
<td>
Assets are Safeguarded.
</td>
<td>
</td>
</tr >
<tr >
<td>
4
</td>
<td>
Efficient and Effective Use of Resources.
</td>
<td>
</td>
</tr >
<tr >
<td>
5
</td>
<td>
Accomplishment of Goals, Objectives for Programs, Policies and Management's Critical Success Factors.
</td>
<td>
</tr >
</table >
The goals are seen to be achieved through management's successful implementation and maintenance of control systems. These control systems are examined within the following ten classes:
<table border=1 align=center >
<tr><th>ITEM</th><th>CONTROL CLASS</th></tr>
<tr><td >1</td><td>Organisation of the section</td></tr>
<tr><td >2</td><td>Personnel</td></tr>
<tr><td >3</td><td>Policies</td></tr>
<tr><td >4</td><td>Procedures</td></tr>
<tr><td >5</td><td>Accounting</td></tr>
<tr><td >6</td><td>Budgeting and Planning</td></tr>
<tr><td >7</td><td>Reporting</td></tr>
<tr><td >8</td><td>Documentation</td></tr>
<tr><td >9</td><td>Internal Review</td></tr>
<tr><td >10</td><td>Physical Security</td></tr>
</table>
(These classes can be modified as necessary to suit a particular business entity's environment.)
Each control class will have a control risk associated with it determined by the quality of its:
* Preventive Controls,
* Detective Controls, and
* Corrective Controls.
The types of audit activities planned will look to the 5 opinion goals by examining these ten control classes. The activities are thus directed to standard audit and management identified areas of concern. Activities to be planned might include:
* Efficiency and Effectiveness Reviews
* Operations Research
* Control System Reviews and System Based Audits
* Performance Measurement Strategy Reviews
* Compliance Testing
* Applications Reviews
* ADP Reviews (Applications, Environment, Software development and change control, Data Integrity Reviews)
* Quality Reviews
==The Process of Planning==
<table border=1 align=center >
<tr ><td >
Planning and Familiarisation Phase
</td >
<td >
* Determine the overall objectives of the plan
* Determine Scope, Boundary and Timing of the plan
* Establish Quality Assurance procedures relevant to the overall planning objectives
* Establish how plan efficiency will be measured
* Establish risk ranking criteria
</td></tr>
<tr ><td >
Analysis Phase
</td >
<td >
Recursively categorise the organisation to be audited
<table >
<tr ><td>
For each level and category of the organisation:
</td>
<td >
* Analyse and categorise the activities of the target level and component
* Determine relevant background data
* Determine projected changes during lifetime of plan
* Determine requirements for legislative (etc.) compliance
* Determine specific concerns and expectations of management
* Determine the objectives and goals of each category
* Determine assertions (performance standards) appropriate to achieving category goals and legislative compliance
* Rate the activities for auditability and risk
</td></tr>
</table >
</td></tr>
<tr ><td>
Specification Phase
</td >
<td >
* Direct the plan to meeting specific objectives through achieving specific goals in accordance with the risk rating and management directions
* Estimate time requirements and tune the plan to maximise the efficiency rating
* Identify skills required for each component of the plan
</td></tr>
<tr ><td>
Scheduling Phase
</td >
<td >
* Ensure plan is dynamic by establishing the process and conditions for plan execution, control, performance evaluation, modification and annual re-scheduling
* Schedule activities
</td></tr>
</table >
==Measuring Plan Efficiency==
Plan efficiency is measured by maximising the coverage of Auditable Areas, addressing those with the highest risk (or priority) ranking first. The time allocated to each area will be a function of the complexity of the area, the nature of existing control systems and the specific concerns of management.
==Measuring Risk==
The measurement of risk may follow any one of a number of methods. Our planning strategy is largely independent of the method of risk assessment adopted. We expect that the selected ranking criteria will be the result of discussions with the client's management.
The primary restriction to risk analysis reflected in the proposed approach is the assumption that risks may be separated into:
# Inherent Risks
# Control Risks
# Detection Risks
The familiarisation and planning phase attempts to determine the inherent risks while the analysis phase forms preliminary judgements on the control risks.
Detection risks are largely the responsibility of the planner and auditor. At the planning stage they are limited by a comprehensive planning methodology, project management and quality assurance program.
The starting point for discussions on ranking techniques might be the Weighted Average Scoring Technique. A common domain of variables is selected under which inherent and control risks may be analysed, and a ranking of 1 to 5 is determined. Variables might include:
<table border=1 align=center >
<tr ><td >Previous audit results</td><td>Employee Turnover</td></tr>
<tr ><td >Assets controlled</td><td>Unit Revenue/Turnover</td></tr>
<tr ><td >Confidentiality and Privacy</td><td>Legislative Compliance</td></tr>
<tr ><td >Systems maturity</td><td>Management Concern</td></tr>
<tr ><td >Change Control</td><td>Performance Indicators</td></tr>
<tr ><td >Complexity of the Systems</td><td>Workload volumes</td></tr>
<tr ><td >Administration</td><td>Public Relations</td></tr>
</table >
Techniques such as the Delphi technique are used to capture the key risk areas, or threats to the organisation, and gain a weighting for each threat area. The Delphi Technique is a one pass survey strategy in which management vote on relative importance of threats in pairs. The technique is suggested by the Institute of Internal Auditors in the Risk Analysis course.
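As an illustrative sketch only (the wiki does not prescribe an implementation, and the function and variable names here are invented), the one pass pairwise voting described above can be tallied into relative threat weights by counting, for each threat, the proportion of pairwise votes it wins:

```python
def delphi_weights(threats, votes):
    """Derive relative threat weights from one pass of pairwise votes.

    threats: list of threat names under consideration.
    votes: list of (winner, loser) tuples, one per pairwise vote cast by
           a manager ("which of these two threats matters more?").
    Returns a dict mapping threat -> weight, with weights summing to 1.
    """
    wins = {t: 0 for t in threats}
    for winner, _loser in votes:
        wins[winner] += 1          # each vote adds one "win" to a threat
    total = sum(wins.values()) or 1  # guard against an empty vote list
    return {t: wins[t] / total for t in threats}

# Hypothetical example: three threats, votes from a single survey pass
threats = ["Legislative Compliance", "Systems Maturity", "Privacy Exposure"]
votes = [
    ("Privacy Exposure", "Systems Maturity"),
    ("Privacy Exposure", "Legislative Compliance"),
    ("Legislative Compliance", "Systems Maturity"),
]
weights = delphi_weights(threats, votes)
# Privacy Exposure carries the greatest weight (2 of the 3 votes)
```

The resulting weights would then feed the Weighted Scoring calculation as the relative importance of each risk variable.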
After the risk variables are selected, the Weighted Scoring method calculates a score for each auditable activity based on percentage weights (totalling 100%) reflecting the relative importance of the variable. For example:
<table border=1 align=center >
<tr ><th>Weight</th><th>Audit Variable</th><th>Ranks</th><th>Weight * Rank</th></tr>
<tr >
<td>20 %</td><td>Maturity of System</td><td>
5. Less than one year<br>
4. Less than two years<br>
3. Less than three years<br>
2. Less than four years<br>
1. Greater than four years<br>
</td><td>Eg 2 * .20 = .4</td>
</tr>
<tr >
<td>40 %</td><td>Yearly Expenditures</td><td>
5. $1,000,000 + <br>
4. $500,000 to 1,000,000<br>
3. $100,000 to 500,000<br>
2. $10,000 to 100,000<br>
1. Less than $10,000<br>
</td><td>Eg 4 * .4 = 1.6</td>
</tr>
<tr >
<td>40 %</td><td>Privacy Exposure</td><td>
5. Most Exposure<br>
4. Significant<br>
3. Average<br>
2. Moderate<br>
1. Minimum Exposure<br>
</td><td>eg 5 * .4 = 2.0</td>
</tr>
<tr >
<td ></td ><td >TOTAL RANK</td><td></td><td>4</td>
</tr>
</table>
The number of ranks (1 to 5 or 1 to 10) would be determined after preliminary surveys established the relevant range of conditions.
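The worked example in the table above amounts to a simple sum of weight times rank per auditable activity. As a minimal sketch (the function name is ours; the weights and ranks are taken from the example table):

```python
def weighted_score(weights, ranks):
    """Weighted Average Scoring: sum of (weight * rank) per risk variable.

    weights: dict of variable -> fractional weight (must total 100%).
    ranks: dict of variable -> rank on the agreed scale (e.g. 1 to 5).
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(weights[v] * ranks[v] for v in weights)

# The example from the table: 2*.20 + 4*.40 + 5*.40 = 4.0
weights = {"Maturity of System": 0.20,
           "Yearly Expenditures": 0.40,
           "Privacy Exposure": 0.40}
ranks = {"Maturity of System": 2,
         "Yearly Expenditures": 4,
         "Privacy Exposure": 5}
score = weighted_score(weights, ranks)  # 4.0, matching TOTAL RANK above
```

Each auditable activity is scored this way, and activities are then scheduled highest score first.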
'''''The strategy requires the implementation plan and procedures to be prepared in accordance with each client's different circumstances.'''''
Professional judgement and experience are applied to identify and evaluate risks and to determine the most appropriate response. '''''It is clear that a thorough understanding of the Organisation's business environment is necessary with the approach.'''''
==Backlinks==
[[RIAM:Overview of the Method| Back To The RIAM : Overview (Main)]] ||
[[RIAM:Overview: The Five Arms of RIAM - At a Glance | Back To The Five Arms - At a Glance]]
6b5f1da65446d93e2cb21896b9778b8c00b31857
RIAM:Overview: Control Implementation Services (CIS)
0
345
536
2019-09-10T06:37:04Z
Bishopj
1
Created page with "==Introduction== From the audit perspective, incorporation of CIS in the internal auditor's service offering is a dramatic advance in prevention of errors in systems by invol..."
wikitext
text/x-wiki
==Introduction==
From the audit perspective, incorporation of CIS in the internal auditor's service offering is a dramatic advance in prevention of errors in systems by involving specially trained Internal Audit staff in the design phase of both manual and computer systems. We call these specialists "'''''Controls Analysts'''''".
A Controls Analyst must generally be trained both in the CIS techniques and in the underlying discipline to which the system relates. For example, in the computer environment, where CIS originated, this means dual-specialty training as both an Internal Auditor and a Systems Analyst.
In a nutshell, the idea behind CIS is that the auditor participates as an advisor to a systems implementation / development team during the design stage and at each module delivery phase. Properly performed, CIS should provide substantial cost savings both in initial systems development, and subsequent systems maintenance and review.
In general management consultancy terms we might describe the CIS function as process reengineering, process design, systems analysis, business simplification, or a host of other names essentially referring to the function of designing business processes.
From the internal audit perspective, the particular dimension of the function that interests us is the control model embodied in the process. RIAM therefore includes the CIS as part of its control verification model, because we aim to eliminate control failure at the earliest possible point in any process.
If we consider the function of developing or implementing a new business system as a control process in itself, logic dictates that we should stop a bad system at the source...not just after it has gone live.
==Aren't auditors only meant to shoot the wounded?==
The idea that auditors should participate in systems delivery in some form is contentious in auditing circles. The argument advanced by those opposed to auditors participating in system design is that processes and systems are management's responsibility, and auditors must maintain independence in order to be able to assess them. While reasonable, this argument ignores the mechanics of the audit process. Our argument is simple:
<ol>
<li> As auditors, we review systems that are already in place and make recommendations as to how these systems should be changed to satisfy some independent standard of control. In a properly governed organisation, these recommendations come with the full force of the audit committee, which is a committee of the board, or governing council of the entity. Assuming the recommendations are endorsed by the Audit Committee, management have no choice but to implement them.
<li> The whole point behind RIAM is that auditors should be sufficiently confident of their analytic skill set to make firm system redesign recommendations, not just recommend that management "take another look" at the system.
<li> If, as part of management, you implement a system, and I as the auditor recommend various changes to the processes to improve control, which you then assess, accept and implement under the mandate of the Audit Committee...who has really designed the resulting system? In the event that the redesigned system fails because of its design, management can still claim that they merely implemented audit's recommendations.
<li> Worse, as the cost of retro-fitting new processes and controls climbs as the software nears completion (or goes into production), waiting until after the system has gone live and a failure has occurred before reviewing the control model is tantamount to professional negligence.
</ol>
If we, as auditors, are eventually going to review a system in production, we should be doing so as early in the implementation as possible so that the costs of implementing a decent control regime are minimised.
If our real reason for delaying is that we are not capable of making correct judgement calls on the sufficiency of control, or standards of good control design, then as a profession we are beneath contempt, and deserve to be relegated to the periphery.
There is little benefit in wandering around the battlefield after a strategic disaster, clubbing the wounded and counting the dead to prove that the battle was badly planned, when everyone who participated already knows this. Far better to fix the strategy before it is applied, and ensure victory.
==Scoping the Control Implementation Service==
Let's start by clarifying what is NOT part of the CIS:
1. The purpose of the process. The purpose of the process or system is management's responsibility.
2. The cost benefit ratio of the process. The relationship between risk, cost and return is also management's responsibility.
CIS is about taking those structural parameters and applying scientific control modelling to optimise processes/systems with a control regime that makes those planned outcomes as predictable and reliable as possible.
The Internal Auditor works with the business analyst and other designers to:
* Translate defined system specific objectives, existing business policies, external regulations & statutory obligations and already mandated business procedures into control standards for the new system.
* Make recommendations of new policies and procedures that may be required or desirable as a consequence of implementing the new system.
* Assess risk and properly inform management of risks and treatments so they can make the decisions to Tolerate, Treat, Terminate or Transfer/share risk, or apportion their response between those options.
* Design the control procedures, within agreed transactional throughput and reliability levels where treatment is selected as the strategy by management.
* Specify test sets/test scripts that will verify successful design (or design failure) during the project, and adequacy of implementation of the control design.
The method of operation is therefore one of a consultant.
The Internal Auditor works independently of the business design team, possibly as part of the testing team to verify that:
* The control design works as expected; and
* The control design has been implemented as intended.
Ideally a separate auditor/audit team monitors and reports on the implementation project itself. In any case, this is a normal internal audit function and not delivered as part of the CIS.
==What are the CIS Control Attributes?==
The narrower focus of the controls analyst, compared to the more general business analyst, allows us to precisely specify the focus of the consultant's work.
Using a specially developed process of Annotated Data Flow Diagrams, you should analyse developing systems at points of:
* Input;
* Processing;
* Output;
* Data Storage; and
* Transmission.
The reader will quickly notice the link between the CIS service focus and the ALSBA method (see [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]] & [[RIAM:VLA:ASSERTIONS]]). In fact there is virtually no difference. Manual and computerised information systems are evaluated within a framework of Control Attributes such as:
<table >
<tr >
<td >
* Access Security</td><td>Ability to protect information and other resources against unauthorised, accidental or deliberate modification, disclosure or use.
</td>
</tr>
<tr >
<td >
* Accountability</td><td>Ability to establish a relationship between information and the entity or individual which/who created or updated it.
</td>
</tr>
<tr >
<td >
* Auditability</td><td>Ability to provide documentary evidence of processing which can be used to trace between transactions and related records.
</td>
</tr>
<tr >
<td >
* Continuity</td><td>Ability to minimise the impact of interruptions to operations and processing support for business functions.
</td>
</tr>
<tr >
<td >
* Information Integrity</td><td>Ability to ensure that information is complete, accurate, and reliable.
</td>
</tr>
<tr >
<td >
* Process Integrity</td><td>Ability to ensure that the process is consistent, complete, correct and timely.
</td>
</tr>
<tr >
<td >
* Effectiveness</td><td>Ability to accomplish the intended purpose of the system through consistency of design with the intended business functions and the purposes of those functions.
</td>
</tr>
<tr >
<td >
* Efficiency</td><td>Ability to achieve stated objectives with the minimum of cost. Cost is measured in a variety of ways.
</td>
</tr>
<tr >
<td >
* Timeliness</td><td>Ability to ensure that the control system achieves prevention, detection and correction in a timely fashion.
</td>
</tr>
</table >
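The attribute framework above lends itself to a simple machine-readable checklist. The sketch below is illustrative only: the attribute names come from the table, but the 1-5 rating scale, the function name, and the example ratings are assumptions for demonstration, not part of the CIS method itself.

```python
# Illustrative sketch: rating a system under review against the CIS
# control attributes from the table above. The rating scale (1-5,
# 5 = strong) and the sample ratings are assumed for demonstration.

CONTROL_ATTRIBUTES = [
    "Access Security", "Accountability", "Auditability", "Continuity",
    "Information Integrity", "Process Integrity", "Effectiveness",
    "Efficiency", "Timeliness",
]

def weakest_attributes(ratings, threshold=3):
    """Return the attributes rated below the threshold."""
    return [a for a in CONTROL_ATTRIBUTES if ratings.get(a, 0) < threshold]

# Hypothetical ratings for a system under review:
ratings = {a: 4 for a in CONTROL_ATTRIBUTES}
ratings["Continuity"] = 2
ratings["Auditability"] = 1

print(weakest_attributes(ratings))  # ['Auditability', 'Continuity']
```

A checklist like this is merely a recording aid; the substance of the evaluation remains the auditor's judgement against each attribute definition.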
==How is the CIS Engagement Implemented?==
The process of CIS involves the auditor becoming part of the development team working in tandem with the Systems Analyst, from the initial planning and specification stages through to the design stage. To preserve independence, the auditor must NOT be part of the implementation or maintenance of the system, but may participate in the testing, evaluation and review phases.
The basic steps in the CIS engagement are:
# Development of User Requirements;
# Threat based Risk Analysis;
# Feasibility analysis;
# Assumption Specification;
# Specification of the Control Requirements;
# System Design;
# System Testing;
# Acceptance Testing;
# Crash Testing;
# System and Performance Evaluation; and
# Review.
==The Method of Performing CIS==
The technical details of process modeling and design used in CIS can be found in the RiskWiki under [[Business Process Reengineering - Introduction]].
The technical detail, and the theoretical control model to apply as the control portion of the process objectives in the [[Business Process Reengineering - Introduction|business process model]] for conducting control analysis used in CIS, can be found in the RiskWiki under [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]], [[RIAM:VLA:ASSERTIONS]], and [[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)]].
==Backlinks==
* [[RIAM:Overview of the Method| Back To The RIAM : Overview (Main)]] ||
* [[RIAM:Overview: The Five Arms of RIAM - At a Glance | Back To The Five Arms - At a Glance]]
b64349c0ec082db7063313b46807ce7fba865764
RIAM:Overview: Tactical Quality Assurance Strategy (TQAS)
0
346
537
2019-09-10T06:40:10Z
Bishopj
1
Created page with "==Introduction== Quality assurance is a significant part of RIAM. Quality commences with product definition in accordance with the client's specifications and progresses thr..."
wikitext
text/x-wiki
==Introduction==
Quality assurance is a significant part of RIAM. Quality commences with product definition in accordance with the client's specifications and progresses through the process with methods and review. As with any control system, Quality Assurance is seen to include Preventive, Detective & Corrective CONTROLS. (Just like the systems you audit!)
Preventive control features in the quality assurance program include:
* Institute of Internal Auditors membership;
* Thorough training;
* Thorough planning;
* Standard working papers;
* Standard methodologies;
* Standard report formats; and
* Thorough senior review.
Detective and corrective controls commence with our checklists, cross-referencing, and three layers of file and report review: dual Manager and Partner. All corrections, changes and additions are routed back through the layered review structure. We must not waste the client's time with our errors.
Ultimately, quality is assured during the execution of the project. This is achieved by:
* A high level of senior partner and manager involvement;
* Thorough planning of the project;
* Establishing and documenting standardised procedures;
* Identifying milestones and critical points;
* Evaluating performance against assessment criteria at each milestone or point;
* Training;
* Ensuring continuity of staff throughout the engagement and that reporting deadlines are met;
* Compliance with relevant legislation;
* Efficiency through reliance on systems that reduce risk, or on focused substantive tests;
* Peer and senior reviews at all levels;
* Use of standard forms and check lists;
* Issuing draft reports for client comment before finalisation;
* Regular interim reporting; and
* Evaluating all errors detected by considering the potential for further error and quantifying the likely amount of error.
A key aspect of Internal Audit’s quality assurance program is the use of specialist staff at all stages of the project. Typically, Internal Audit should have a strong field of specialists covering such areas as:
* Management Consulting and Business Planning
* Organisation Structures and Staffing
* ADP Audit
* Information Technology
* Internal Audit methods
* Financial Systems
A model for quality assurance in large national and international audit teams is provided in:
* [[RIAM:VLA:IA REVIEW AND QUALITY ASSURANCE|Quality Assurance]]
[[RIAM:Overview of the Method| Back To The RIAM : Overview (Main)]] ||
[[RIAM:Overview: The Five Arms of RIAM - At a Glance | Back To The Five Arms - At a Glance]]
8608bb03196d9108e9724bd0dd475cf7ea9da890
Jonathan Bishop
0
347
538
2019-09-10T06:42:50Z
Bishopj
1
Created page with "BCOM BSC CIA CISA CA MAICD MANAGING DIRECTOR & CHAIRMAN - BISHOP PHILLIPS CONSULTING GROUP <table width="100%"> <tr > <td width="25%" valign=top > [[Image:bishopj.png]] ==Ed..."
wikitext
text/x-wiki
BCOM BSC CIA CISA CA MAICD
MANAGING DIRECTOR & CHAIRMAN - BISHOP PHILLIPS CONSULTING GROUP
<table width="100%">
<tr >
<td width="25%" valign=top >
[[Image:bishopj.png]]
==Education & Qualifications==
* BSc (Computer Science)
* BCom (Accounting/Economics)
* Chartered Accountant
* Certified Information Systems Auditor
* Certified Internal Auditor
==Current==
* Chairman – Bishop Phillips Consulting (Aust) & (Canada)
* Managing Director - Bishop Finance Pty Ltd
</td >
<td valign=top >
==Summary==
Jonathan has more than 25 years' experience consulting in the accounting, governance, IT and strategy consulting industries, and 16 years in CEO, senior executive and board positions with a variety of commercial, educational and semi-government entities. His work history includes leading and authoring a variety of commercial software systems, enterprise scale project management, internal audit leadership and audit committee membership, strategy and performance management leadership, process design and the authoring of books, methods and papers covering a wide range of management technologies.
He was the first person in Australia to be triply qualified as an ACA, CIA and CISA.
Jonathan is a thought leader in the areas of governance and control theory. With other thinkers in the field of governance systems design, he argues that a concept of holistic organisation design, based around a network of stakeholder communities and assertion-based process control objectives with predictive reporting, can provide economic, realistic, flexible, measurable and sustainable proof of governance, and deliver efficient operations and effective market response. He is a strong proponent of control self-assessment models for internal control and risk management. His theories have been applied in practice in many commercial and government organisations (see client list), and he was a significant advisor in the government reform space during the mid-to-late 1990s that saw some Australian governments embrace accrual accounting, whole-of-government shared service delivery, public-private partnerships, multi-year budgeting, and formal organisation governance and risk management.
His team was one of the first to define and adopt risk-based project management throughout all project advisory and large-scale systems implementation services. They have a 100% success rate in systems implementation projects across a period of 10 years using the approach, and during that time delivered more than 20 whole-of-organisation financial, HR and business management systems.
He has had the rare privilege of having been engaged by major consulting firms to advise them on internal systems delivery and project governance matters.
An active software developer in Delphi, .Net and a variety of web languages, Jonathan also leads the software engineering team of Bishop Phillips Consulting (publisher of, among other things, BPC RiskManager, BPC SurveyManager and BPC IncidentManager).
He is currently actively exploring the application of massively multiplayer virtual worlds for corporate, education and social networks and over the last two years his virtual design company has completed a number of large design and construction projects in Second Life, one of the leading online virtual world environments.
</td>
</tr>
<tr >
<td valign=top >
==History==
* Joint Managing Director - Acumen Alliance, Victoria (5yrs)
* Joint Principal-in-Charge – Stanton Consulting Partners (7yrs)
* Vice President – William Angliss Institute of TAFE (7yrs)
* Vice President – Central Health Interpreter Service (1yr)
* Chairman – Angliss Consulting Pty Ltd (8yrs)
* Snr Mngr – Deloitte Touche Tohmatsu (2yrs)
* Chairman – Angliss Performance Review Committee (6yrs)
* Chairman – Angliss Audit Committee (2yrs)
* Other Directorships (Crnt & Frmr)
** Acumen Alliance Australia (4yrs)
** Acumen Alliance (Victoria) (5yrs)
** William Angliss Institute of TAFE (8yrs)
** Radio Beacon (Aust) (2yrs)
** Asia Alliance (Hong Kong) (2yrs)
** Acumen (UK) (2yrs)
* Other:
** Price Waterhouse (Snr)
** Australian National University (Academic)
** RMIT University (Academic)
** Judge - ACT Enterprise Workshop
</td >
<td >
==Capabilities & Skills==
* Project Management
* IT & Corporate Strategy
* Risk Management, Enterprise Governance & EGMS
* Process design & Control Systems Analysis
* Financial Management, Performance Measurement & Activity Based Costing
* Project Management
* Government Accounting
* Business Case Preparation
* Probity, Internal Audit, QA Reviews
* System Evaluations and Selection
* Information Systems – Design Specification
* IT Architecture, Network Design & Implementation
* Software Engineering – Design & Development
* Second Life simulator design, development and construction
==Governance & Consulting Client Engagements include:==
* '''''Managed the Internal Audit Units and Client Service Delivery''''' for over 50 Internal Audit & Governance Clients (over 18 years) including:
** Toyota, General Motors - Holden, United Automobile Manufacturers Association of Australia, City of Melbourne, City of Monash, Victorian Egg Marketing Board, VicRoads, Victorian Electricity Commission, VCGA/VCGR (Victorian Casino & Gaming Authority), Gippsland Water, William Angliss Institute of TAFE, Victorian College of the Arts, DEET/DEETYA, ComCare, ACT Dept. of Health, Woden Hospital, Royal Women’s Hospital, PSE Credit Union, Catholic Bank, NAB, National Mutual Insurance Ltd, Telstra, Dept. of the Treasury, Dept. of Finance, JHD, Dept. of The Senate, Dept. of The House of Representatives, Dept. of Parliamentary Library, ACT Urban Services, ACTEW, AusSpace Ltd, ACT TSA, BTR Nylex, etc
* '''''Process Reengineering and Strategy/Organisational Advice''''' for many clients across corporate, government and not-for-profit sectors including:
** Department for Victorian Communities (Grants Management), Department of the Senate, Central Health Interpreter Service, First Mildura Irrigation Trust, SCOPE Victoria, TabCorp, JB Were, Country Fire Authority, FrontLine Defense Services, ACT Health, Royal Women's Hospital, Australian National University, Sirius Communications, William Angliss, Department of Human Services (Victoria), Building Services Agency, City of Melbourne, VEC, Museum of Victoria, Department of Education (Victoria), Sutherland City Council, AIATSIS, Central Health Interpreter Service, etc.
* '''''Designed the Internal Audit Method''''' and wrote the Internal Audit manuals for DEET/DEETYA, National Mutual, Acumen Alliance, Stanton Consulting and Deloitte Touche Tohmatsu.
* Designed and prototyped the online Board Internal Audit Reporting System for National Australia Bank
* '''''Enterprise Systems Project Management''''' or Advisory on implementations in Oracle Financials, PeopleSoft, SAP R3, Microsoft Networks and other vendors for clients such as TabCorp, FrontLine Defense Services, JB Were & Sons, Melbourne University, RMIT University, Dept of Treasury, Ansett Air Freight, SSOV, Trade Measurement Victoria, Farmer Brothers, National Library of Australia.
==Software Systems Developed Include:==
* '''''Co-Author of BPC RiskManager''''' used by – Australia Post, Telstra, Gippsland Water, United Energy, Deakin University, Monash University, Melbourne University, Victoria University, Swinburne University, Simon Fraser University, University of British Columbia, Nova Scotia University, BCIT, University of Calgary, University of Technology, BHP, AMP, Benfield
* '''''Author of BPC SurveyManager''''' used by, among others, the OTTE - Department of Education (Victorian Government) and ACFE - Department for Victorian Communities (Victorian Government) for very large scale state-wide student surveys, and by various BPC RiskManager clients as the core compliance engine for that system.
* '''''Co-author of BPC IncidentManager''''' (incident support system for BPC RiskManager)
* '''''Authored or co-authored other successful business software applications''''' including BPC TenderEvaluation, Arthur Andersen BAS Reporting, Arthur Andersen Practice Management, VG Dxf, TMV CRIS, Farmer Bros Retail System, FDS EIS & Stock Management System, TabCorp Tender Evaluation System, etc.
==Papers & Management Science Technologies==
Jonathan has a particular strength in policy and governance administration strategy & processes and has authored or co-authored a large volume of papers and manuals on a variety of related topics including:
* Authored various strategy and business modeling concepts including recent systems based on community networks, e-business enablement, commercial competitive tactics and strategies
* Authored the Deloitte Internal Audit Method (over 1000 pages), NAB Internal Audit Reporting manual, DEET Internal Audit Method and National Mutual Internal Audit Method, Stanton Internal Audit Method, Acumen Internal Audit Method.
* Authored the Stanton & Acumen Business Process Reengineering Method, Stanton & Acumen Victoria Management Reporting Design Method, Acumen Victoria Strategic Planning Method, Report Writing Style Guide, etc
* Authored and/or presented at conferences on Neural Networks and Diagnosis-Assisting Databases using Neural Networks. Jointly designed and built one of the first practical neural network databases in the 1980s for pattern recognition and predictive reasoning.
</td>
</tr>
</table >
23dbc69f306576994e0ecd401a707c240bfc65f9
RIAM:Risk Based Audit Planning
0
348
539
2019-09-10T06:46:19Z
Bishopj
1
Created page with "To be uploaded"
wikitext
text/x-wiki
To be uploaded
6ae5bf35e38e0cc2a65587ced2d7f3a045ae22b4
540
539
2019-09-10T06:47:58Z
Bishopj
1
wikitext
text/x-wiki
To be uploaded
* [[Internal Audit Method| Back To The RIAM (Main)]]
874bf57ac01887e3daf1fa94df23c108d1a1a89e
RIAM:Conduct of the Very Large Audit
0
349
541
2019-09-10T06:49:35Z
Bishopj
1
Created page with "==Introduction== This article covers the approach to delivery of Internal Audit assignment in very large organisations. Many of the sections and discussions are shared with..."
wikitext
text/x-wiki
==Introduction==
This article covers the approach to delivery of Internal Audit assignments in very large organisations. Many of the sections and discussions are shared with the other papers on internal audit throughout the RiskWiki, but in this topic tree we enhance that coverage with issues specific to larger, more complex organisational structures.
The term "very large" does not so much apply to organisation size as to organisation complexity.
RIAM distinguishes very large organisations from other organisations because the former frequently have:
<ol>
<li> Separate units covering:
* Risk,
* Compliance, and
* Internal Audit;
<li> Multiple locations/campuses across a variety of geographic locations;
<li> Multiple (sometimes competing) jurisdictional responsibilities;
<li> Multiple autonomous or semi-autonomous divisions/corporate entities;
<li> Mixed legal and organisation types within the group (eg trusts, companies, joint-ventures, partnerships, partially owned subsidiaries, legislation enabled agencies, etc);
<li> Mixed management organisation structures (divisional, matrix, project, etc); and
<li> Separated (geographically and managerially) internal audit units with differing, possibly competing, reporting lines; and
<li> Internal audit teams with an internal critical mass for self-sustainability (i.e. they are big enough to train their own staff and provide a career path), but requiring dedicated internal administration, HR and management teams (as opposed to a one- to five-man consulting team).
</ol>
While the method of analysis at the individual review level may be essentially the same, regardless of client organisation size, internal audit service delivery in the very large organisation will require some unique strategies for management and planning of the internal audit project across a large and diverse organisation (often involving multiple auditors on the one project) and across an extended (or even multiple) time period(s).
Not responsible for a very large organisation audit? No matter - you may be surprised how much of this applies to you anyway.
This volume is, therefore, about doing the internal audit project once it has been identified during the strategic planning phase. It is NOT about conducting strategic planning in large organisations as that is the topic of a separate paper.
This Volume of the Manual does not attempt to prescribe in detail how each type of audit or aspect of an audit should be conducted; rather, the issues involved in undertaking audits, and a common approach and set of skills, are presented. For specific types of reviews, appendices are provided detailing the steps to be applied. Internal Auditors are expected to make reference to the Internal Audit Technical Library and elsewhere for technical details. This manual, with specific work programs/field audit plans, should form the core reference for the conduct of the audit. Theoretical discussion and specialist techniques should be the subject of further research.
For example, under the section on sampling and testing, testing methods are discussed and reference made to the application of testing; however, the manual does not go on to describe the theoretical probability basis for assessing sample results or provide the reference tables necessary to apply a testing approach. We do, however, provide the formulae with worked examples demonstrating the calculation of sample sizes.
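The manual's own formulae and tables are not reproduced here, but as an illustration of the kind of calculation involved, the sketch below computes a discovery-sampling sample size, a standard attribute-sampling formula that assumes zero expected deviations. The confidence level and tolerable deviation rate in the example are hypothetical.

```python
import math

def discovery_sample_size(confidence, tolerable_rate):
    """Smallest n such that, if the true deviation rate equals the
    tolerable rate, at least one deviation appears in the sample with
    the stated confidence: n = ln(1 - confidence) / ln(1 - tolerable_rate).
    Assumes zero expected deviations (discovery sampling)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - tolerable_rate))

# e.g. 95% confidence of detecting at least one deviation when the
# population deviation rate is 5%:
print(discovery_sample_size(0.95, 0.05))  # 59
```

This is only one of several attribute-sampling approaches; evaluating the sample results still requires the statistical tables or formulae referred to above.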
==Other Sources of Information==
The following documents will be of use:
* Procedural Directions
** Institute of Internal Auditors Standards, Statements & Pronouncements
** RIAM Strategic Planning For Internal Audit
** RIAM Internal Audit Charter & Terms of Reference
** RIAM Audit Manual
** RIAM Audit Guidelines
** External Auditor (if corporate)/ Auditor General (if government) Audit Guidelines and sundry publications
** National (insert country) Accounting Standards
** International or National (insert country) Statements of Auditing Practice
* Technical References
** (Gleim, 1989) - CIA Examination Review - 2 Vols, 3rd Ed. IIA 1988
** (Sawyer, 1988) - Sawyer's Internal Auditing, Sawyer and Sumners, IIA 1988
** (Brink, 1982) - Modern Internal Auditing, 4th Ed, Brink and Witt, Wiley & Sons 1982
** (Wilson, 1989) - Systems: Concepts, Methodologies and Applications, 2nd Ed, Wilson, Wiley & Sons 1991
** (Valbahaneni, 1988) - Information Systems Concepts and Foundations, EDPAA
** (Stettler) - Stettler's Systems Based Audits
** Theory and Practice of Australian Auditing
* Legal Compliance
** Audit Act (as appropriate to your jurisdictions)
** Finance Regulations and Directions (as appropriate to your jurisdictions)
While the Internal Auditor should adhere to his/her professional responsibilities, the auditor must also comply with organisation specific directives. These are generally contained in the Audit Charter and this Manual.
* The Audit Charter / Audit Policy Manual
One of the critical documents, the Charter forms "the constitution" governing the audit committee and the audit function. It details the functions, obligations and responsibilities of the various members of the audit process. [The Charter is reproduced at the front of this manual.]
==Audit Strategy & Systems Based Audits==
The structure of this Volume will broadly follow the major parts of a Systems Based Audit (SBA) in order of conduct. The applicability of each part to the other types of audits to be conducted by DRT will be briefly discussed in each section. The major parts are:
* Planning
* Interviewing & Documenting the System
* Evaluating the System of Internal Controls
* Review and Quality Assurance
* Reporting
* Follow-up
In addition to the actual conduct of the audit it is necessary to accurately present the results of the audit review. Following the descriptions of the audit phases is a section on documentation and the preparation of working papers.
==The Very Large Audit in Four Phases==
* [[RIAM:VLA:The Four Phases of the RALSBA|THE FOUR PHASES OF THE RIAM CONTROL SYSTEMS ANALYSIS]]
==Backlinks==
* [[Internal Audit Method| Back To The RIAM (Main)]]
*[[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|Back to The Assertion Linked Systems Based Audit (Overview)]]
fa0765946e9e0560580f0e578eda688d78a22067
RIAM:VLA:The Four Phases of the RALSBA
0
350
542
2019-09-10T06:50:50Z
Bishopj
1
Created page with "==Introduction== The keys to the Rational Internal Audit Method (RIAM) are structure and focus. RIAM is a Risk, Systems and Assertion Based approach. Many systems based a..."
wikitext
text/x-wiki
==Introduction==
The keys to the Rational Internal Audit Method (RIAM) are structure and focus. RIAM is a Risk, Systems and Assertion Based approach. Many systems based approaches merely measure the compliance of an organisation's staff with a particular system. RIAM is a substantial enhancement to this commonly used approach. RIAM attempts to analyse both compliance with policy & procedure, and the potential risks in the systems themselves.
==What is a Risk, Systems and Assertions Based Approach?==
===The Objectives===
The objectives of Internal Audit's reviews are summarised as:
* Document the procedures in operation within the section so far as they relate to the target activities;
* Collect sufficient data and analyse that data to support assertions that address management's critical success factors:
** In the case of a transaction system review typical assertions are:
**# Data recorded is bona fide;
**# Data reported/processed is :
**#* Attributed to the proper period,
**#* Accurately calculated,
**#* Correctly accumulated,
**#* Accurately recorded,
**#* Correctly disclosed,
**#* Properly authorised with respect to transactions,
**#* Providing benefits to which the recipients are eligible,
**#* Complete;
**#* Compliant with external requirements (eg. the Auditor-General's requirements for the financial statements)
**# The relevant management directions and legislation are observed;
**# The assets of the organisation are efficiently, effectively and otherwise appropriately protected and applied.
** In the case of other reviews such as ADP, Management and Performance other assertions are adopted;
* Identify risk and efficiency exposures to the organisation and the critical success factors of management;
* Recommend relevant and practicable changes in the systems and procedures to management where these exposures are present; and
* Form an opinion as to the overall reliability of the systems in place and as modified.
See:
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
===Meeting the Objectives===
The structure of the approach that meets the above objectives has ten stages in four phases. Here, we introduce those stages and phases and provide links to more detailed discussions of the issues involved.
Some skills are relevant to all phases and these are covered in the following sections:
* [[RIAM:VLA:AUDIT INTERVIEWING|Internal Audit Interviewing - Preparation, Management and Conduct]]
* [[RIAM:VLA:IA REVIEW AND QUALITY ASSURANCE|Quality Assurance]]
The Four Phases are:
====PHASE 1: FAMILIARISATION, SCOPE AND PLANNING====
<ol>
<li> Identify the objectives and purposes of the section being reviewed, and the review being conducted; document critical success factors. Entrance interviews are held with senior management during which management's concerns and directions are communicated as well as the Critical Success Factors of the audit and the section being audited. Certain objectives, such as legislative compliance, are always assumed to be present;
<br>
<br>
<li> Identify the functions in place to realise the objectives, critical success factors and purposes. A series of initial interviews are conducted with relevant middle and line management and staff to:
<ul>
<li> Introduce the review and reassure staff as to the assisting rather than policing nature of the review,
<li> Identify the operations and organisation structure adopted to meet the objectives, purposes and critical success factors.
</ul>
</ol>
See [[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|PHASE 1: FAMILIARISATION, SCOPE AND PLANNING]]
====PHASE 2: DOCUMENTATION, ASSERTION SETTING AND SYSTEMS ANALYSIS====
<ol>
<li> Investigate the control systems in place to implement the functions in the Ten Means of Achieving Control (refer section ??). Tasks include:
<ul>
<li> Document the procedures in operation so far as they relate to the scope and boundary of the Audit task,
<li> Compare actual procedures to legislation, policies, guidelines and documented procedures noting exceptions;
</ul>
<br>
<br>
<li> Establish the assertions to be made, the satisfaction of which will represent a "pass" result. The assertions represent the criteria for evaluation;
<br>
<br>
<li> Examine management information and reporting systems in place to monitor the operations;
<br>
<br>
<li> Evaluate the systems against the assertions to be supported, noting key controls in the systems, and which assertions they affect, to determine:
<ul>
<li> Potential strengths and weaknesses of the designed systems;
<li> Preliminary ranking of risk and exposures including efficiency exposures.
</ul>
</ol>
See
* [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
* [[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
====PHASE 3: TESTING AND RESULTS ANALYSIS====
<ol>
<li> Design a testing program and Test the system and its transactions and/or data for:
<ul>
<li> Compliance of operations with specified system;
<li> Occurrence of the identified weaknesses, risks or exposures;
</ul>
<br>
<li> Analyse the results of systems analysis and compliance testing stages to accept or refute the established assertions and operating compliance.
</ol>
See also:
* [[RIAM:VLA:ANALYTIC REVIEW PROCEDURES IN INTERNAL AUDIT|PHASE 1 to 3: ANALYTIC REVIEW PROCEDURES]]
* [[RIAM:VLA:AUDIT RISK ASSESSMENT & SENSITIVITY ANALYSIS|PHASE 1 to 3: AUDIT RISK ASSESSMENT & SENSITIVITY ANALYSIS]]
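The accept/refute decision in the analysis step of Phase 3 can be illustrated with a simple deviation-rate comparison. This is a sketch only: the function name and sample figures are assumed, the tolerable rate is taken as fixed at the planning stage, and a real evaluation would compare a statistical upper confidence limit (from the manual's tables) rather than the raw observed rate.

```python
def evaluate_compliance(sample_size, deviations_found, tolerable_rate):
    """Crude accept/refute sketch: refute the assertion if the observed
    deviation rate exceeds the tolerable rate. (A statistical evaluation
    would compare an upper confidence limit, not the raw observed rate.)"""
    observed = deviations_found / sample_size
    return "accept" if observed <= tolerable_rate else "refute"

# Hypothetical compliance-test results against a 5% tolerable rate:
print(evaluate_compliance(60, 1, 0.05))  # "accept"
print(evaluate_compliance(60, 5, 0.05))  # "refute"
```

In practice the refute branch feeds directly into the Phase 4 findings: the weakness, its cause and the recommended treatment are what get reported, not the bare rate.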
====PHASE 4: REPORTING AND FOLLOW UP====
<ol>
<li> Conclude and report in which we:
<br>
<ul>
<li> Identify risk and efficiency exposures to the Institute;
<li> Recommend changes in the systems and procedures to the Institute's management where these exposures are present;
<li> Form an opinion as to the overall reliability of the systems in place and as modified;
<li> Report to both management and the Audit Committee after and during each task;
</ul>
<br>
<li> Conduct exit interviews, produce the final report and review action plans as required.
</ol>
===Establishing the framework===
The framework on which these phases are based has four stays:
* Interviews to scope and focus the review.
* Assertions as criteria for evaluation.
* Analysis of control systems performance in meeting objectives.
* Clear discussion and specific recommendations to provide improvements.
==Backlinks==
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
*[[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|Back to The Assertion Linked Systems Based Audit (Overview)]]
6bc89331bade623b193a4ac403e1148eb5b184ea
RIAM:VLA:AUDIT INTERVIEWING
0
351
543
2019-09-10T08:38:22Z
Bishopj
1
Created page with "=PHASE 1 to 4: INTERVIEWING= ==Introduction== When conducting an audit, Internal Audit interviews client management to obtain both qualitative and quantitative information...."
wikitext
text/x-wiki
=PHASE 1 to 4: INTERVIEWING=
==Introduction==
When conducting an audit, Internal Audit interviews client management to obtain both qualitative and quantitative information. Ultimately the objective is to elicit frank, complete and honest answers from client management who have more information about a particular audit area than Internal Audit.
There are three major types of interviews in the conduct of an audit. They are:
* Entry Interviews (Introductory Survey)
* Mid-Point Interviews or During Interviews
* Exit Interviews
Conducting a successful interview requires careful preparation. Some of the major steps to be taken by Internal Audit during preparation are to:
* Perform background research, as this allows Internal Audit to become familiar with client management's policy and terminology relating to the audit area
* Identify those personnel within the audit area who can provide accurate information
* Identify clearly the objectives of the interview and make a list of the information to be sought during the interview.
* Prepare an agenda for the interview so the interviewee can adjust his/her answers to the time available, and to help the interview stay on track.
==What is an Introductory Survey and what is its purpose?==
===Entry Interviews or Introductory Surveys===
The preliminary survey is the initial interview(s) with the key personnel (and arguably each of the personnel, key or not) with whom the internal auditor will be dealing during the review. The Entrance Interview, the first phase of the preliminary survey, generally refers to the interview(s) during which the scope and purpose of the audit are finalised, and the initial introductions are made. Entrance Interviews are discussed below.
The main purposes of the preliminary survey are to convey to each of the personnel the objective of the review, and to address any concerns the staff may have about the review, its conduct, or their relationship with the auditor.
===What are the activities performed during the Introductory Survey?===
The first activity during this phase is an entrance interview with the senior personnel of the area in which you will be performing the review. This entrance interview is to:
* outline the objectives of the review;
* determine or confirm the scope and boundary of the review;
* convey to the senior personnel the approach you will be taking during the review;
* identify any particular problems or concerns which the senior personnel have and which they would like the internal auditor to review; and
* determine the timing of the review and the anticipated dates for issue of the draft and final reports.
The second activity is to get the senior personnel to introduce the internal auditor to the personnel with whom the internal auditor will be dealing during the course of the audit. The appropriate introduction of the internal auditor by the director or other senior officers will lend authority to the audit, emphasise the client-consultant relationship between Internal Audit and management, and help ease the officers' traditional concern regarding auditors and the audit process.
It is important at this stage of the audit to gain the confidence of the personnel with whom we will be dealing by conveying that we are not performing a "witch-hunt" (unless, of course, we actually are!); rather, we are there to assist them by observing their concerns and problems and identifying solutions or recommendations to overcome or compensate for them. We should also convey that in discussions of faults in operations or systems we first focus our comments on the systems themselves rather than criticising personnel in our report. Further, in reporting we attempt to argue both sides of a question, and we will seek their assistance if necessary to help explain the advantages of the current approach or their suggested solutions and improvements. The role of the auditor as a facilitator should be emphasised.
Generally this approach helps overcome the apprehension an auditee may have about the audit. The sooner the auditee is working with us rather than against us, the more thorough and effective the audit will be.
We should see our role in the control system as having elements of all three classes of control:
{| cellpadding="2" cellspacing="1"
|-
|width="20%" align="center"|Preventive||Analysing the potential for error, encouraging general staff and management to think in terms of control, and encouraging auditees to see us as assisting their work by reducing errors and minimising wastage of effort.
|-
|width="20%" align="center"|Detective||Identifying when errors and control systems failures are occurring and identifying the causes and solutions.
|-
|width="20%" align="center"|Corrective||Reporting the findings and recommendations in such a manner as to have them accepted and implemented by both the management and the staff.
|}
This state of mind should be communicated by:
* manner,
* attitude, and
* action.
===What skills are required for the Interview?===
Here is an incomplete list of what the internal auditor needs to be:
* objective and independent;
* convincing;
* sincere;
* trustworthy;
* respectful and polite;
* friendly (not overbearing);
* able to obtain the officer's confidence and respect immediately - first impressions are always the most important;
* able to ask questions (open-ended) which will lead to the officer answering in the most productive manner;
* able to argue his/her case by directed questioning;
* understanding;
* informed.
===What preparation is required for the Introductory Survey?===
The internal auditor should obtain and read as much information as they can about the section or area in which they will be performing the review. This information could include:
* the Department's/Corporate's/Division's/(Etc.) Annual Report;
* the Department's/Corporate's/Division's/(Etc.) Organisation Chart;
* internal reports and memorandums;
* previous Internal Audit reports and resultant Action Plans;
* the external auditor's audit reports and other correspondence;
* the Department's/Corporate's/Division's/Section's/Unit's procedures manuals;
* relevant legislation; and
* any management briefings provided.
===Conduct of the Entrance Interview(s)===
At the entrance interview, Internal Audit will meet the client management and discuss:
* Internal Auditors - who we are and how we fit into the organisation
* Issues regarding our compliance with the various Internal Auditing Standards
* The Internal Audit staff that will be conducting the audit
* The nature of the proposed audit, i.e. whether it is a compliance or efficiency audit
* The audit objectives
* The audit scope and assertions (including discussions of the issues to be covered during the audit especially client concerns)
* The audit methodology
* The concept behind the assertions used during the audit
* Boundary of the audit i.e. whether it is being conducted in a number of States/regions
* The date of commencement of the audit
* The approximate time of the conclusion of the audit
* The importance of relevant client officers and documents being readily available to avoid increasing the time of audit
* Acceptance of scope and objective of audit by client management
* The issue resolution approach involving discussions with the client management when the field work is completed or during the course of the audit if that is appropriate
* Resources which Internal Audit may require from management, e.g. accommodation, etc
* Reporting arrangements
* Actions to be taken when a fraud or significant malpractice is discovered.
===How should the meeting be documented?===
The proceedings of the meeting should be documented by the internal auditor. This should include the following:
* attendees;
* date and time of meeting;
* matters discussed, including the scope and boundary of the review, timing of the review and release of the draft and final reports, administrative needs (such as an office, photocopier, etc.), and concerns and problems raised by the senior personnel.
In addition, any major changes in policy, operation, organisation or applications, and any issues management advises Internal Audit of, should be documented. Similarly, if Internal Audit has ascertained the status of recommendations from a previous audit or an ANAO review, that status should also be documented.
==Midpoint or During Project Interviews==
===Administrative:===
It will usually be desirable for a formal or informal interview to be held with client management (i.e. the Director, specialists and operating staff of the relevant audit area) to:
* Discuss the progress of the audit
* Discuss tentative findings and invite the auditee's comments
* Conduct surveys and/or questionnaires where applicable
* Identify appropriate staff from whom to obtain relevant documentation and information
Document the outcome of the interview as part of the working papers of the audit.
===Data Collection:===
The backbone of the SBA is the data collection/systems documentation process performed through interviews. A well conducted interview program, probably based around processing cycles and systems walkthroughs, can provide the majority of the findings ultimately reported (after corroboration by testing and systems analysis).
A rule of thumb for conducting such interviews is to:
* Identify cycles in the staff member's duties (yearly, quarterly, monthly, weekly, daily, etc.) and the tasks and durations associated with those cycles.
* For each task relevant to the audit, conduct a systems walkthrough, collecting example blank and completed control documentation and exploring branches in the process for the full range of options at each step.
* Trace a single path to its end, and then backtrack to each preceding major branch where alternative options are possible.
Your narration of the interview should cross reference to the sample documents collected and numbered during the interview.
Interviewees should be encouraged to talk freely, as they will often be aware of the problems in an area, but not of their importance to our analysis.
==Exit Interviews==
===Administration===
At the conclusion of the audit, Internal Audit will request an exit interview with the client management to discuss the draft audit findings and to resolve any discrepancies or differences.
Some audits cover a program or activity encompassing more than one of the Department's State/Territory Offices. It may be necessary or desirable to hold separate exit interviews in each office, although the need for this should be carefully considered. Where the issues to be discussed are non-controversial and non-confidential, or the interview is expected to be a formality, consideration should be given to arranging a single exit interview which could be attended by representatives of each area or region affected by the audit.
It should be remembered that the larger the group of people the more difficult the interview will be to manage.
At the interview Internal Audit should discuss with the client management:
* audit findings, both positive and negative
* the impact of the audit on the audit area
* results of testing
* audit recommendations
The client management is invited to comment on the validity of the audit findings and the appropriateness of the recommendations. The client management should challenge the audit findings and recommendations if it finds them to be inaccurate, unreasonable or unsubstantiated, but it should be in a position to support its arguments.
Internal Audit should document:
* agreements, disagreements and discussions of the interview
* actions to be taken
* items to be deleted and/or amended
After the exit interview, Internal Audit will issue the client management a draft report communicating the principal audit results. The client management is given 10 days to respond to the draft report with management comments. The management comments will be included in the final report.
Where Outlet Audits are concerned, at the completion of the field work the auditor discusses and supplies the draft copy of the audit findings to the client management incorporating auditee's comments. This allows client management to be prepared for the exit interview and provide opportunity for the client management to confirm and agree on the findings.
===Notifying Client Management of Exit Interview===
Five days before the exit interview, Internal Audit will forward a minute formally advising the client management of the:
* date
* time
* venue
The Internal Audit minute should also include discussion papers so that the client management can fully discuss the issues raised.
===The Exit Interview is Part of The Audit Control System===
The discussion of the administration of the Exit Interview (Sect 5.9.1) makes the interview sound quite cut and dried. This is misleading.
The Exit Interview should never operate in adversarial mode. It is a part of the consultative process which Audit undertakes when it conducts an Internal Audit for management.
The Interview is best driven by management, not the auditor. This means that while the auditor might establish an agenda and keep the meeting moving, management determines what they wish to talk about.
The underlying aim is for audit and management to come to an agreement over the nature, presentation of and approach to the findings of the review. Seating, atmosphere and conversational approach must all be set so as to relax both the auditor and the client management. We should aim to facilitate an atmosphere that gives management every confidence in saying what they wish. Situations where the auditor is in a dominant or overbearing position should be avoided.
We must always remember that the fundamental purpose of the audit is to:
* '''''Bring about excellence in systems design and operation.'''''
To achieve this we need the cooperation of management because they ultimately have to act on our findings for the improvement to be realised. The exit interview helps establish management ownership of the recommendations. This boils down to a win/win result for Internal Audit and Management.
Guidelines to the conduct of exit interviews are presented in Appendix E Conduct of Exit Interviews.
==Interview Techniques==
The Technical Libraries as well as libraries in general contain many texts and other publications on interview techniques and procedures. These texts should be periodically reviewed to ensure audit interviews remain productive and are properly conducted.
[[Science of persuasion]]
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
070cdf7db670752f2d9a83372e0a8c1d5ad72d04
RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING
0
352
544
2019-09-10T09:28:28Z
Bishopj
1
Created page with "=(PHASE 1:) AUDIT FAMILIARISATION, SCOPE & PLANNING= ==INTRODUCTION== There are two principal components of a given SBA's planning data # The Broad Audit Guideline (BAG); an..."
wikitext
text/x-wiki
=(PHASE 1:) AUDIT FAMILIARISATION, SCOPE & PLANNING=
==INTRODUCTION==
There are two principal components of a given SBA's planning data:
# The Broad Audit Guideline (BAG); and
# The Field Audit Plan (FAP).
The BAG is the "permanent" component of the plan. It contains "task specification" type data which will be substantially the same for a particular review from one year to another. The FAP is specific to a particular task and a particular year. It is prepared at the commencement of the audit drawing on the BAG for background "information and specification".
==TYPES OF AUDITS==
Broadly, DRT internal audits are classified as either National or Local. A National audit involves multiple states with both national and state coordination and quality control requirements, while a Local audit is specific to a state. This division is primarily administrative in nature. Together with the administrative class of audit, the BAG will identify the type of audit to be conducted as one or more of:
===1. Organisational===
Organisational audits focus on the activities of an organisational unit and the controls present over the range of activities of that unit. They tend to be longer in nature than functional audits as they look at multiple functions.
Examples include:
* Personnel Section Review
* Contracts Section Review
* Receiving Section
* Purchasing Section
These reviews generally require a high level of skill as they consider not only the transaction processing operations, but all eight of Sawyer's Means of Achieving Control. This means that the auditor might need to understand issues of management theory, organisation design, transaction systems design, accounting, cost accounting, industrial relations, etc.
===2. Functional===
These reviews follow a process from beginning to end, possibly crossing organisational lines. This audit focuses on a particular activity, operation, or document. Examples include:
* Occupational Health & Safety Practices
* Asset Control
* Contracting
* Fraud Evaluation
* Waste Disposal Practices
===3. Cycle===
A cycle audit looks at an entire transaction cycle. These are broader than functional audits. The flow chart presented in Session 1 was for one such review. Examples of such cycles include:
* Sales & Cash Cycle
* Acquisitions and Payments Cycle
* Inventory & Warehousing Cycle
* Financial Capital & Payment Cycle
* Personnel & Payroll Cycle
===4. Management Study===
These are specific in-depth studies focussing on management needs for additional data. Examples might include:
* Queuing Analysis for mail and telephone handling systems in an office
* Design of change control methodologies for software
* Business continuity planning and exposure analysis
* Optimisation of stores organisation to minimise costs
These studies are generally at the request of management and reflect the data collection and analysis role of audit admirably.
===5. Performance/Program Results===
This review collects information about the costs, outputs, benefits and effectiveness of the program under review. It generally involves both an analysis of existing performance indicators and the identification and calculation of additional appropriate indicators. The review examines the degree to which the program is achieving its critical success factors, and hence its contribution to the key result areas of the department. This kind of review is generally undertaken by high level management staff, as it necessarily involves senior management and policy formation issues.
===6. Other===
In the unlikely event that the audit is not classifiable as one of the above, it will be classed as "other".
==THE SCOPE OF WORK==
===The Scope of Work (Institute of Internal Auditors Audit Standard 300).===
The scope of the work includes the examination and evaluation of the adequacy and effectiveness of internal controls and the quality of the performance of assigned tasks and responsibilities. Management provides general direction as to the scope of work and the activities to be audited.
The scope of work has five specific objectives (or classes of assertions), giving the acronym SCARE, concerning which an opinion is formed:
# '''S'''afeguarding of assets;
# '''C'''ompliance with the relevant policies, plans, legislation and directions etc;
# '''A'''ccomplishment of established goals and objectives for plans and procedures;
# '''R'''eliability and integrity of data; and
# '''E'''conomical and efficient use of resources.
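The five SCARE objective classes can be sketched as a simple enumeration. This is a hypothetical Python illustration only; the class and member names are assumptions, not part of any auditing standard:

```python
from enum import Enum

class ScopeObjective(Enum):
    # Declaration order follows the SCARE mnemonic
    SAFEGUARDING = "Safeguarding of assets"
    COMPLIANCE = "Compliance with relevant policies, plans, legislation and directions"
    ACCOMPLISHMENT = "Accomplishment of established goals and objectives"
    RELIABILITY = "Reliability and integrity of data"
    ECONOMY = "Economical and efficient use of resources"

# The acronym is recoverable from the member order (Enum preserves it)
acronym = "".join(member.name[0] for member in ScopeObjective)
```

An audit plan could then record, per objective class, which assertions will be tested.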
In the larger organisation, IA establishes "Broad Audit Guidelines" (BAGs) to manage the audit project on a large scale and allow successful delegation of the work and component reports to potentially many auditors, while preserving the overall direction, coordination and consistency of focus and approach across the organisation.
The BAG outlines the scope of work in terms of the Statement of Audit Objectives and the Statement of Assertions.
In a smaller organisation, where direct management of all audit staff may be exercised by the director of internal audit, the BAG is not so critical, and the same purpose may be accomplished by a more constrained scope and planning statement.
In either case, the five specific objectives of the scope of work remain essentially the same, although not all assertion areas must be covered in every audit. In a systems based audit, however, all areas should be covered for the designated function/audit focus. It is generally preferred to restrict the boundary of the function under examination rather than omit part of the scope of work.
These assertion areas (or objectives) are explained in turn:
====1. Safeguarding assets====
This refers, firstly, to the protection of assets from theft, fire, improper, unauthorised or illegal activities and exposure from nature (eg. sunshine, rain and wind etc); and secondly to the application of assets. It is commonly formulated as an assertion:
"Assets are appropriately protected and applied."
Remembering that both cash and intellectual property are assets, in addition to the more obvious plant, equipment, land, fittings and stock classes, the true breadth of this assertion class becomes apparent.
====2. Compliance with policies, plans and relevant legislation etc====
Management is responsible for creating systems which ensure compliance with professional, corporate, departmental and Government policies, plans and relevant legislation, while the Internal Auditor evaluates whether the systems (procedures) thus created comply with management's objectives and the law, and determines whether operations comply with the systems model.
In smaller entities, where compliance management is not a separate function, Internal Audit directly assesses the compliance of the systems and processes with the characteristics described in the previous paragraph.
In a larger entity the role of compliance management may be assigned to a dedicated group, separate from internal audit. In these organisations the internal auditor views the functions of the compliance team as one more control system (like any other process or system) and therefore should review the operations of the compliance unit to:
# Determine the reliability, completeness, compliance with legal and other guidelines, and efficiency of the compliance measurements, etc. from the perspective of its design (just as you would an accounts payable finance system); and
# Determine the reliability and effectiveness of the operation of the system. I.e. The extent to which the compliance reports can be relied upon by management (and internal & external audit). To this end, Internal Audit should treat compliance management systems operated by the compliance management team in the same way they would treat a financial control system operated by the finance team - by testing assertions about the compliance management system of:
#* Accuracy of reporting
#* Completeness of recording and disclosure
#* Authorisation of the compliance component reports
#* etc. (See separate discussions on assertions)
The existence of a compliance function in no way reduces internal audit's responsibilities - it simply distributes one layer of IA's work and changes the sampling model and focus.
====3. Accomplishment of established objectives and goals for the operations and programs.====
Management is responsible for establishing operating and program objectives and goals while the Internal Auditors should verify whether the department or section is achieving them.
Internal Auditors can assist management in developing and evaluating these objectives and goals by evaluating whether their underlying assumptions are appropriate, accurate, and consistent with the stated objectives, and whether current and relevant information is available and being used.
====4. Reliability and integrity of information.====
Information systems whether manual or computerised provide data for decision making, control and compliance with external requirements. It is therefore essential that the financial and operating records contain accurate, reliable, timely, complete and useful information. In addition the controls over the record keeping and reporting must be adequate and effective.
====5. Economical and efficient use of resources.====
Management is also responsible for setting operating standards to measure economical and efficient use of resources. Internal auditors are responsible for determining that:
* these standards have been established;
* these standards are understood and are being met;
* departures from these standards are being identified, investigated and corrected; and
* action has been taken to ensure the departures are not repeated.
Audits should identify:
* under-utilised resources;
* non-productive work or work practices;
* uneconomical procedures;
* inappropriate staffing; and
* ineffective organisation design.
==BROAD AUDIT GUIDELINES (BAG)==
===Introduction===
For large audits and generic systems audits a BAG will be prepared. For smaller audits, such as local reviews, a BAG may be an unnecessary overhead. In these latter cases the Field Audit Plan and the Permanent Audit File fulfil the same purpose.
Where required Broad Audit Guidelines will be prepared and approved for the various reviews and audits as follows:
* Program Reviews (Whole of Organisation/Group)
**Produced by: Assigned audit manager.
**Approved by: Director Internal Audit.
* National Audit (Whole of National Organisation)
**Produced by: National Functional Audit Manager (Note: For multi-org groups with separate IA teams, each national organisation may need to prepare an appropriate BAG).
**Approved by: National Audit Manager for National Audits, Director Internal Audit for Central Office Locality Audits.
* Locality Audit
**Produced by: Team leader.
**Approved by: State Audit Manager for State Audits, Director Internal Audit for Central Office Locality Audits.
* Outlet Audit
**Planning with Audit package modified by State/Area Audit Manager.
The proper structure and coordination of BAGs is essential in the very large organisation for coordination of IA reporting to global or national boards/governance committees. If you get the BAG design and control method right, the global audit will run much more smoothly and report consolidation will be largely straightforward. Getting this right can save you months in report finalisation.
The BAG for a particular area will normally comprise the following elements:
<table align=center >
<tr><td>1. Assignment Cover Sheet</td><td></td></tr>
<tr><td>2. Statement of Management Objectives</td><td>(Prioritisation of opinion bases)</td></tr>
<tr><td>3. Statement of System Objectives</td><td>(Boundary of opinion formation)</td></tr>
<tr><td>4. Statement of Audit Objectives</td><td>(Focus of opinion formation)</td></tr>
<tr><td>5. Statement of Audit Assertions</td><td>(Logical Basis for opinion formation)</td></tr>
<tr><td>6. Desirable Control Model</td><td>(Condition for opinion formation)</td></tr>
<tr><td>7. Standard Audit Budget</td><td>(Cost of opinion formation)</td></tr>
<tr><td>8. Skills Matrix</td><td>(Technical requirements)</td></tr>
</table >
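As a rough sketch, the eight BAG elements above could be captured as a record in an audit management tool. The field names below are assumptions for illustration, not a prescribed RIAM format:

```python
from dataclasses import dataclass, field

@dataclass
class BroadAuditGuideline:
    # 1. Assignment Cover Sheet details
    title: str
    date_prepared: str
    preparer: str
    approver: str
    audit_type: str                                             # e.g. "Cycle", "Functional"
    # 2-5. Objectives and assertions
    management_objectives: list = field(default_factory=list)   # prioritisation of opinion bases
    system_objectives: list = field(default_factory=list)       # boundary of opinion formation
    audit_objectives: list = field(default_factory=list)        # focus of opinion formation
    audit_assertions: list = field(default_factory=list)        # logical basis for opinion formation
    # 6. Desirable Control Model: assertion -> control features/exposures
    control_model: dict = field(default_factory=dict)
    # 7-8. Standard budget and skills matrix
    budget_hours: float = 0.0
    skills_matrix: dict = field(default_factory=dict)           # audit part -> required skills
```

An accounts payable BAG, for instance, might carry an assertion such as "Payments are made for bona fide debts" in `audit_assertions`, mapped in `control_model` to the controls that sustain it.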
===1. Assignment Cover Sheet===
The cover sheet provides an indication of the continuing relevance of the BAG: when it was prepared may determine how relevant it remains. A proforma Assignment Cover Sheet is presented on the next page. It contains the following fields:
* Title
* Table of contents
* Date of preparation
* Name and location of preparer/reviewer
* Name and signature of officer approving the BAG
* Type of Audit
* Audit Budget
===2. Statement of Management Objectives===
(Prioritisation of opinion bases)
Ideally there should be a set of management-designated objectives for each system subject to audit from which internal audit may develop evaluation criteria for the system.
'''''Example:''''' In an accounts payable system management objectives might include minimising creditor complaints and credit related interruptions to supply.
===3. Statement of System Objectives===
(Boundary of opinion formation)
System objectives form the basis for the control model. They may be broken down to include objectives for the major sub-systems. These objectives will need to be cleared by the auditee.
'''''Example:''''' In an accounts payable system, system objectives may include payment of the supplier once for a given debt at the limit of the credit discount period.
===4. Statement of Audit Objectives===
(Focus of opinion formation)
Audit objectives are to cover what is to be achieved by the audit and should directly relate to the objectives of the area under review. These will usually be laid out in the BAG given to the field auditor at the commencement of the audit.
'''''Example:''''' In an accounts payable system review, the audit objectives might include formation of an opinion as to the efficiency, effectiveness, economy and integrity of the payments control system, or that payments are made only once and to the correct supplier.
===5. Statement of Audit Assertions===
(Logical Basis for opinion formation)
The audit objectives and assertions are closely related. The assertions describe the components of an "acceptance" opinion, and the criteria for qualifying that opinion. Assertions express a truth we wish to sustain during the audit in order to express an "acceptable" (or positive) opinion.
'''''Example:''''' In an accounts payable system, the audit assertions may include the statement that "payments are made for bona fide debts". This assertion addresses both the objectives of single payment and correctness of the recipient.
===6. Desirable Control Model===
(Condition for opinion formation)
The Desirable Control Model is a model of the systems under examination and includes system control objectives, control features and exposures; mapped to the Statement of Audit Assertions and Statement of Audit Objectives.
===7. Standard Audit Budget===
(Cost of opinion formation)
The budgeted audit time for the conduct of the audit addressing the complete BAG should be specified, ideally with a breakup over any discrete or severable parts of the audit. The skills required in the various parts of the audit assignment should be defined.
===8. Skills Matrix===
(Technical requirements for opinion formation)
The skills requirements for the conduct of the audit addressing the complete BAG should be specified, ideally with a breakup over any discrete or severable parts of the audit.
==THE FIELD AUDIT PLAN (FAP)==
===Introduction===
What is the Purpose and Functions of the Audit Program/Field Audit Plan?
The audit programs, which are prepared based upon the results of the preliminary survey, are listings of the audit procedures to be carried out during the field work. Audit programs should be designed to:
* outline what is to be done;
* outline why it is being done;
* outline when and where it is to be done;
* specify how it is to be done and who is to do it;
* provide a record as to what has been done; and
* facilitate supervision and control over the audit.
The audit program will depend on the scope of the audit to be performed. The audit can cover the entire operations of a section or department or it may be targeted at a particular aspect of the operations of the section or department.
===Preparation and Deliverable===
Contents:
A Field Audit Plan should be prepared by the Team Leader once an approved BAG has been received. Planning is the crucial stage in the performance of an audit. The Field Audit Plan sets out in a logical sequence the audit approach to be adopted and will normally comprise the following elements:
1. Determining the scope, boundary and assertions of the audit
2. Determining the budget
3. Obtaining background information
4. Determining the resources necessary to perform the audit
5. Determining the overall timing of the activities
6. Communicating the overall timing of the activities
7. Communicating with all personnel/sections who need to know about the audit
8. Performing a preliminary on-site survey
9. Writing the audit program
10. Determining when and to whom the draft and final reports will be issued
11. Obtaining approval of the audit work plan
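The eleven elements lend themselves to a simple checklist. The sketch below uses hypothetical helper names, not part of the RIAM, to track which elements remain outstanding:

```python
# The eleven standard Field Audit Plan elements, in plan order
FAP_ELEMENTS = [
    "Determine the scope, boundary and assertions of the audit",
    "Determine the budget",
    "Obtain background information",
    "Determine the resources necessary to perform the audit",
    "Determine the overall timing of the activities",
    "Communicate the overall timing of the activities",
    "Communicate with all personnel/sections who need to know about the audit",
    "Perform a preliminary on-site survey",
    "Write the audit program",
    "Determine when and to whom the draft and final reports will be issued",
    "Obtain approval of the audit work plan",
]

def outstanding_elements(completed):
    """Return the planning elements not yet completed, in plan order."""
    done = set(completed)
    return [element for element in FAP_ELEMENTS if element not in done]
```

A team leader could record each element as it is finished and use the remainder as the agenda for planning reviews.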
* Time Budget:
The allotted time span for the total review and any local or functionally divisible elements is indicated in the BAG. The FAP uses these time budgets to set the total budget for the specific field audit being planned in the FAP.
* Responsibility:
For nationally conducted audits the lead State will prepare the field plan and have it approved by the Director Internal Audit.
* Deliverable:
The deliverable from the field audit planning phase is a document/file addressing or witnessing the performance of the above elements.
===Steps in Preparing the Field Audit Plan===
====Step 1: Determining the scope and boundary of the audit====
This stage of the planning is the most critical as it impacts on each of the other components. It is essential that the scope and boundary of the audit be determined and agreed by both the auditor and auditee prior to the commencement of the audit. This will ensure both the auditor and auditee have a consistent understanding of what is to be performed and will help ensure the objective of the audit is met.
The scope is what will be included in the audit while the boundary is the point at which the audit ceases to cover a process (in the case of cycle audits) or a section's interaction with another section (in the case of organisational audits). There, of course, could be a number of boundaries for a particular audit.
Having established the scope; discussions with management, and the BAG's Statement of Assertions will allow a proposed list of assertions defining an agreed standard of "acceptance" of the system being reviewed. Management and Audit should have a clear mutual understanding of the truths to be tested during the review.
====Step 2: Determining the budget====
The overall budget will be determined at the time of preparing the strategic audit plan and will be present in the BAG. At the planning stage of the field audit it is necessary to distribute the budget allocation over the following areas:
<table border=1 width="80%" align=center >
<tr><th>Phase of Internal Audit</th><th>Standard % of time</th></tr>
<tr><td>Set up/Familiarisation/Planning<br>
including Entrance Interview</td><td>20</td></tr>
<tr><td>Information gathering - Interviews and documentation</td><td>25</td></tr>
<tr><td>Preparation of the testing program and testing</td><td>20</td></tr>
<tr><td>Preparation of the report and file completion <br>
including review of the file and the report</td><td>25</td></tr>
<tr><td>Exit Interviews and Follow-up</td><td>10</td></tr>
</table>
The components of each phase of the audit will vary from audit to audit as will the percentage of time allocated to each phase.
It should be noted that these elements are arbitrary divisions of a review. In practice the distinction between the elements may be obscured as the reviewer uses his/her experience to efficiently gather information whilst conducting the review.
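Given a total time budget from the BAG, the standard percentages above can be applied mechanically. The sketch below is illustrative only; the phase labels and helper name are assumptions, and in practice the split is adjusted for each audit:

```python
# Standard percentage split of audit time over the phases (totals 100)
STANDARD_PHASE_SPLIT = {
    "Set up/Familiarisation/Planning (incl. Entrance Interview)": 20,
    "Information gathering - interviews and documentation": 25,
    "Preparation of the testing program and testing": 20,
    "Preparation of the report and file completion": 25,
    "Exit Interviews and Follow-up": 10,
}

def allocate_hours(total_hours, split=STANDARD_PHASE_SPLIT):
    """Distribute a total audit budget (in hours) over the phases by percentage."""
    if sum(split.values()) != 100:
        raise ValueError("phase percentages must total 100%")
    return {phase: total_hours * pct / 100 for phase, pct in split.items()}
```

For a 200-hour audit this yields, for example, 50 hours for information gathering and 20 hours for exit interviews and follow-up.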
====Step 3: Obtaining Background Information====
Background information should be obtained and read about the area to be reviewed prior to and after the Entrance Interview, to gain as thorough an understanding of the area as possible.
Sources of background information include documents (such as standard forms, manuals and correspondence), the BAG, previous reviews, entry interviews and preliminary surveys.
Background information should be collected in the following classes:
* Organisation Objectives
* Organisation Operating Policies
* Organisation Financial Policies
* Performance Measures
* Industry/Other Performance Criteria
* Legislative Requirements
* Record of Entrance Interviews
* Matters held over from last audit
* Matters specifically requested by Audit Committee
* Matters specifically requested by the Client
* Engagement Brief
* Organisation structure of target sections
* Contacts (with contact record)
* Important Contracts and Agreements
* Other background data
====Step 4: Determining the resources necessary to perform the audit====
This is tied into the preparation of the budget. The resources include the audit team personnel and skills to be utilised on the audit, plus, if required, any consultants, hire of equipment (such as computers), or other such resources.
It is also necessary at this stage to allocate to each member of audit staff engaged in the audit the hours to be worked during each phase.
====Step 5: Determining the overall timing of the activities====
A timetable including all of the phases of the Internal Audit review should be determined and agreement obtained with the auditee. Each phase should have commencement and completion dates to assist in the control of the audit as well as judging performance.
====Step 6: Communicating with all personnel/sections who need to know about the audit====
This is essential to ensure the smooth running of the audit. Communicating with all personnel/sections during the planning phase of the audit enables them to prepare fully, ensuring:
* they have sufficient time to think about any concerns or problems they may have which they want addressed during the course of the audit;
* the required personnel are available during the course of the review and not away on leave, at training, or simply too busy. This also allows them to plan their own work properly; and
* there is sufficient workspace available.
====Step 7: Performing an introductory on-site survey (entrance interviews)====
This survey assists in gaining an understanding of the auditee's operations and allows the auditor(s) to meet the auditee personnel in a relaxed setting. This phase is essential for obtaining an overall familiarity with the auditee and the auditee's operations, identifying areas for audit emphasis, and inviting the auditee's comments and suggestions. Entrance interviews are considered in more detail in Section 5.2. --REF REQUIRED--
====Step 8: Determining when and to whom the draft and final reports will be issued====
This phase should be performed in conjunction with determining the overall timing of the audit activities. The reporting deadline dates should be realistically determined in view of the resources available to perform the audit and any auditee deadlines (eg. financial statements reporting deadlines).
====Step 9: Obtaining approval of the audit plan====
This is the final stage of the planning phase. The review and approval of the audit plan is essential to ensure, in particular, that the objectives of the audit will be met and that all matters to be covered have been adequately considered.
====Step 10: Writing the audit test program====
Although this may not be performed at the commencement of the audit, it is essential that the audit (testing) program be prepared and approved prior to the commencement of the testing phase. In most cases the testing program cannot be prepared until after the systems analysis has been conducted. This analysis is performed after the planning phase is completed.
In RIAM both the analysis and testing phases are treated as part of the greater Evaluation Of a System of Internal Control. In some national audits the testing strategy might be worked out centrally, while in other audits, the most appropriate testing strategy will be established at the local systems level. In either case the system being tested must be understood before the test strategy can be defined. Consequently, designing of the test program occurs during System Evaluation, after documentation and analysis, but immediately prior to the testing phase itself.
The field plan should include a reserved section for the detailed testing plan to be inserted at the appropriate time. Test plans should be approved by the preparer's supervising officer, except where noted differently elsewhere in this manual.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
*[[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|Back to The Assertion Linked Systems Based Audit (Overview)]]
affb5a61add77660aff61c28a748e6f4abbc0a7f
RIAM:VLA:STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS
0
353
545
2019-09-10T09:29:48Z
Bishopj
1
Created page with "=PHASE 1: STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS= ==Introduction== The auditor should have unlimited access to any evidence until, in their professio..."
wikitext
text/x-wiki
=PHASE 1: STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS=
==Introduction==
The auditor should have unlimited access to any evidence until, in their professional judgement:
* '''''sufficient''''';
* '''''competent''''';
* '''''relevant''''';
* '''''reliable'''''; and
* '''''useful'''''
evidence has been collected to support audit results and their reports.
Audit evidence consists of at least:
* authoritative evidence, eg. contracts, confirmations from third parties;
* calculations by the auditor;
* internal control, eg. authorisations and approvals;
* interrelationships among the data, eg. analytical review;
* physical evidence, eg. documents such as cheques, invoices, receiving reports and purchase orders;
* statements by clients;
* statements by third parties;
* subsequent events; and
* subsidiary records.
==Basis for selecting Audit Evidence.==
Audit evidence should be selected:
* to allow the auditor to '''''form an opinion''''' that the following '''''assertions''''' underlying the system are being met;
** existence or occurrence
** completeness
** rights and obligations
** valuation or allocation
** presentation and disclosure
* to '''''satisfy specific client requirements''''';
* in accordance with '''''materiality considerations''''';
* in accordance with the '''''nature and degree of inherent and control risk''''';
* in accordance with '''''potential key controls''''';
* in accordance with the '''''reliability of the evidence available''''', ie. the less reliable the evidence obtained, the more that is required to support conclusions etc;
* in accordance with the '''''efficiency of audit procedures''''' available to internal auditors.
{| border=1
|-
| The assessment of inherent and control risk permits the determination of the allowable level of detection risk for a particular assertion, given the level to which the auditor seeks to restrict overall audit risk. The concepts of inherent, control and audit risk are covered in [[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]].
|}
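The boxed note can be made concrete with the conventional audit risk model (an assumption on our part; the manual itself does not spell out the formula): overall audit risk is treated as the product of inherent, control and detection risk, so the allowable detection risk follows by division.

```python
# Conventional audit risk model, AR = IR x CR x DR (hedged: this formula is
# standard audit practice, not quoted from the RIAM manual itself).
def allowable_detection_risk(audit_risk, inherent_risk, control_risk):
    """Solve AR = IR * CR * DR for DR, the detection risk the auditor may tolerate."""
    return audit_risk / (inherent_risk * control_risk)

# With overall audit risk held to 5%, high inherent risk (0.8) and moderate
# control risk (0.5), the allowable detection risk is roughly 0.125.
allowable_detection_risk(0.05, 0.8, 0.5)
```

The lower the allowable detection risk, the more extensive the substantive testing needed for that assertion.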
==Contents and Standards for Working Paper Documentation. (IAS 420)==
Internal Audit Standard 420 states that "Working papers that document the audit should be prepared by the auditor and reviewed by management of the internal audit department. These papers should record the information obtained and the analyses made and should support the bases for the findings and recommendations to be reported."
The primary purpose of working papers is to support the auditor's report by providing the details necessary to prepare the report. Working papers may also be important if a dispute arises, or if the matters covered during the audit become the subject of legal action, because they should clearly show the nature, timing and extent of the audit work done.
==Functions of Working Papers & Files:==
* provides evidence to support the internal auditor's report;
* aids in the planning, performance and review of audits;
* documents whether the audit objectives were achieved;
* facilitates third-party reviews, eg. by the Australian National Audit Office;
* provides a basis for evaluating the internal audit department's quality control program;
* aids in the professional development of internal auditors;
* provides evidence and support in circumstances such as disputes and lawsuits; and
* demonstrates internal audit's compliance with the "Standards for the Professional Practice of Internal Auditing".
==Contents - Permanent Audit Files==
A permanent audit file is usually established for each auditee area. It contains data of recurring importance including:
* prior audit reports and auditee's responses;
* records of prior reviews with management;
* post audit comments by the previous auditor, including any problem areas;
* records of reports to management;
* audit programs and questionnaires, Broad Audit Guides;
* organisation charts;
* flowcharts;
* important long-term auditee contacts;
* historical financial information;
* summaries of audit budgets.
==Contents - Task files (Current Files)==
Contents of Task files include:
* copy of the final audit report;
* engagement letters, contracts and contacts;
* action plan, client follow-up and correspondence;
* planning documents and audit programs;
* background and organisational details;
* memorandums on reviews of controls (control strengths and weaknesses);
* control system documentation (questionnaires, flowcharts, checklists and narratives);
* records of interview;
* legislative and management directions; and
* results and analysis of audit testing and how exceptions were handled.
==Working Paper Design==
Working papers should usually contain:
* headings including auditee, period covered and work paper subject matter;
* a key of all tickmarks and symbols used;
* the date of preparation and preparer's initials; and
* a reference number which individually identifies the working paper.
Working papers should be consistently and efficiently prepared so as to facilitate review. They should be:
* neat, not crowded and only written on one side;
* uniform in size and appearance;
* economical, avoiding unnecessary duplication (using copies of client records is one way of reducing the time taken in completing workpapers); and
* arranged in uniform style.
Working papers should have summaries throughout which provide the reviewer with a concise statement of data contained in subsequent schedules.
A good indexing system for working papers should be simple and capable of easy expansion. Working papers should be well cross-referenced simplifying the finding of related information and the preparation of the report.
Cross referencing should follow at least these rules:
# Surround reference in angle brackets: " < > ".
# Use a colour different from the main writing colour for the <>.
# When referencing towards the front of the file from the current line/page, place the reference on the LEFT-hand side of the subject.
# When referencing towards the back of the file from the current line/page, place the reference on the RIGHT-hand side of the subject.
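The four rules amount to a simple placement convention, sketched below as a hypothetical illustration (the function and reference labels are ours, not part of the manual):

```python
# Hypothetical illustration of the cross-referencing rules above: angle
# brackets around the reference (rule 1); towards the front of the file the
# reference sits on the LEFT of the subject (rule 3), towards the back on
# the RIGHT (rule 4). Colour (rule 2) cannot be shown in plain text.
def cross_reference(subject, ref, current_page, target_page):
    tag = f"<{ref}>"
    if target_page < current_page:   # earlier page: reference to the front
        return f"{tag} {subject}"
    return f"{subject} {tag}"        # later page: reference to the back

cross_reference("Debtors listing", "B-12", current_page=40, target_page=12)
# -> "<B-12> Debtors listing"
```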
Sources of data should be clearly identified.
All sample documents should be uniquely labelled with a document numbering system, to allow clear references throughout the file.
==The RIAM (DRT Internal Audit Methodology) Audit File - an overview of its contents.==
The RIAM Audit File structure has been based upon the concepts outlined above. Below we reproduce the contents page from the Task File (Current File). An alternative contents page may be found at the end of this Volume.
{|
|-
|REF || CONTENTS
|-
| 1 || Final Audit Report and Other Relevant Files
|-
| 2 || Supervisor, Manager & Director Reviews and Follow Up
|-
| 3 || Engagement Letters and Contacts
|-
| 4 || Action Plan, Client Follow Up and Correspondence
|-
| 5 || Matters for Manager & Director Attention
|-
| 6 || Matters for Review Next Audit
|-
| 7 || Planning Documents and Audit Program
|-
| 8 || Work & Time recording Schedule
|-
| 9 || Background and Organisation Details
|-
| 10 || Organisation Objectives, Operating & Financial Policies, and Performance Measures
|-
| 11 || Strength & Weakness Schedule
|-
| 12 || Control System Documentation and Conclusion<br>
(Control Questionnaires, flowcharts, checklists and narratives)
|-
| 13 || Records of Interview
|-
| 14 || Legislation and Management Directives - Compliance<br>
(Including Important Contracts and Agreements)
|-
| 15 || Analysis and Tests of Transactions, Processes and Account Balances
|-
| 16 || Other Background Data and Notes
|}
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
a4a014fa925e4ad292feb32171e43a013f59daca
RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL
0
354
546
2019-09-10T09:31:57Z
Bishopj
1
Created page with "=PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL= ==Introduction== When conducting a Systems Based Audit it is necessary for the auditor to gain an understa..."
wikitext
text/x-wiki
=PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL=
==Introduction==
When conducting a Systems Based Audit it is necessary for the auditor to gain an understanding of the system and its related internal controls. The examination and assessment of internal controls will enable the auditor to determine the nature, timing, and extent of audit testing. This phase is also applicable to Key Control Audits and Audit Packages but only in the initial formation of such packages.
To gain an understanding of the System the auditor must perform two major processes:
* Systems Documentation & Modelling
* Control Analysis.
Documentation of the system must be completed before the control analysis can be completed. These two processes will be discussed in detail after a brief look at internal controls.
==Internal Controls==
Internal control has been defined as:
* The plan of organisation and all the co-ordinated methods and measures adopted within an entity to safeguard its assets, check the accuracy and reliability of its accounting records, promote operational efficiency, and encourage adherence to managerial policies.
This definition covers both accounting and administrative controls. Accounting controls are primarily concerned with safeguarding assets and the reliability of financial records. Administrative controls are primarily concerned with operational efficiency and adherence to managerial policies.
The internal control system is the whole system of controls established by management in order to carry on the operations of the organisation in an orderly manner, and to safeguard the accuracy and reliability of records and financial information.
A strong system of internal controls may reduce the need for audit to use substantive testing procedures. A weak system of internal controls may increase the risk of fraud or error and therefore, require audit to perform extended substantive testing procedures. For a discussion of testing procedures and the relationship of risk assessment to test design, refer to the following sections of the manual.
At the most general level, the purpose of an internal audit review is to express an opinion about the design and operation of a system. The Internal Auditor is a Systems Analyst with a particular specialty in control systems design. We describe this specialty by reference to 5 goals of opinion formation. These are equivalent to the Institute of Internal Auditor's "Scope of Work".
==Assertions - The Apex of The Analysis Tree==
RIAM provides a highly structured method of systems analysis. The basis to the approach is the establishment of assertions. An assertion is:
* '''''A truth we wish to sustain about a system in order to express a favourable opinion about the operation of that system.'''''
The 5 goals of opinion formation are nothing more than the 5 general classes of assertions under which we classify and form our opinion about a system. Assertions are discussed in greater detail later.
The RIAM control system opinion focus uses the following five assertion classes:
<table border=1 align=center>
<tr>
<th >ITEM</th ><th >OPINION FOCUS</th ><th >SUB FOCUS</th >
</tr>
<tr>
<td >
1
</td>
<td>
Reliability and Integrity of Information
</td>
<td>
* Accurate Information
* Reliable Information
* Timely Information
* Complete Information
* Useful Information
* Correctly Accumulated Information
* Fully and Correctly Disclosed
</td>
</tr >
<tr >
<td>
2
</td>
<td>
Compliance With
</td>
<td>
* Policies
* Plans
* Procedures
* Legislation
* Regulations and Treaties
</td>
</tr >
<tr >
<td>
3
</td>
<td>
Assets are Safeguarded.
</td>
<td>
</td>
</tr >
<tr >
<td>
4
</td>
<td>
Efficient and Effective Use of Resources.
</td>
<td>
</td>
</tr >
<tr >
<td>
5
</td>
<td>
Accomplishment of Goals, Objectives for Programs, Policies and Management's Critical Success Factors (Effectiveness).
</td>
<td>
</tr >
</table >
These classes form the apex of our analytic structure. Every finding in the review is ultimately related back to one or more of these assertion classes. Our purpose is to perform sufficient analysis to express a "pass" or "fail" conclusion over each of these classes.
Not every review will use all five classes, but where some classes are excluded, these should be clearly identified in the scope and boundary of the report.
In effect, the five classes represent an abstraction, or model, of the system under review. We will use additional modelling techniques such as flowcharting and narrations to reach our final conclusions, but all other procedures are 'shaped' by the desire to express our opinion into these 5 classes.
==Management Action Areas - The Second Level of The Analysis Tree==
The process of system modelling is one of abstraction. As auditors we impose a particular 'view' of a real system that highlights those characteristics of the system which are germane to our analytic requirements. The second level of the analysis tree classifies management's "action areas" into 10 control classes:
<table border=1 align=center >
<tr><th>ITEM</th><th>MANAGEMENT ACTION AREA/CONTROL CLASS</th></tr>
<tr><td >1</td><td>Organisation of the section</td></tr>
<tr><td >2</td><td>Personnel</td></tr>
<tr><td >3</td><td>Policies</td></tr>
<tr><td >4</td><td>Procedures</td></tr>
<tr><td >5</td><td>Accounting</td></tr>
<tr><td >6</td><td>Budgeting and Planning</td></tr>
<tr><td >7</td><td>Reporting</td></tr>
<tr><td >8</td><td>Documentation</td></tr>
<tr><td >9</td><td>Internal Review</td></tr>
<tr><td >10</td><td>(Physical) Security</td></tr>
</table>
Many other classification methods at this level may be suitable, and the list may be interchanged with alternative classification systems as appropriate. The list presented above is an expanded version of Sawyer's control classes.
The important point to note is the range of management responsibilities that impact on the completeness of the conclusions formed in the assertion classes.
==Control Attributes - The Third Level of the Analysis Tree==
Within each of the management action areas, controls will be defined. Each control, and each control sub-system, will have features we term "Control Attributes". These attributes are summarised below:
<table border=1 >
<tr >
<td >
* Access Security</td><td>Ability to protect information and other resources against accidental or deliberate unauthorised modification, disclosure or use. (Note: This includes the distribution of reports)
</td>
</tr>
<tr >
<td >
* Accountability</td><td>Ability to establish a relationship between information and the entity or individual which/who created or updated it.
</td>
</tr>
<tr >
<td >
* Auditability</td><td>Ability to provide documentary evidence of processing which can be used to trace between transactions and related records and reports.
</td>
</tr>
<tr >
<td >
* Continuity</td><td>Ability to minimise the impact of interruptions to operations and processing support for business functions.
</td>
</tr>
<tr >
<td >
* Information Integrity</td><td>Ability to ensure that information is complete, accurate, reliable and timely.
</td>
</tr>
<tr >
<td >
* Process Integrity</td><td>Ability to ensure that the process is consistent, complete, and accurate (correct).
</td>
</tr>
<tr >
<td >
* Effectiveness</td><td>Ability to accomplish the intended purpose of the system by consistency of design with the intended business functions, and purposes of those functions.
</td>
</tr>
<tr >
<td >
* Efficiency</td><td>Ability to achieve stated objectives with the minimum of cost. Cost is measured in a variety of ways.
</td>
</tr>
<tr >
<td >
* Timeliness</td><td>Ability to ensure that the control system achieves prevention, detection and correction in a timely fashion, and processes inputs into outputs within a usable timeframe.
</td>
</tr>
</table >
==Types of Controls - The Fourth Level of the Analysis Tree==
The control system's ability to support the assertions, and therefore the key controls identified, is analysed in terms of three types of controls:
* ''Preventive Controls''
** Including direct controls such as authorisation and certification of forms, and indirect controls such as training, maintenance of up-to-date reference material, and section administration and organisation;
* ''Detective Controls''
** Such as supervisor review, batch control totals, edit checks and periodic system reconciliations;
* ''Corrective Controls''
** Such as routing an error back through the same control system that originally processed and detected the error and response to exception reports.
The key point to be drawn at this level is the issue of cost. As a general guide, the cost of operating a control escalates as we move from preventive controls to corrective controls. The reason for this is simple: preventive controls operate before an error has occurred and the associated effort has been expended, while a corrective control operates after the error has occurred requiring additional effort to correct the error.
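The four levels described above nest naturally into a tree. A minimal sketch follows; the branch contents are illustrative examples of ours, not branches taken from the manual:

```python
# Minimal sketch of the four-level RIAM analysis tree: assertion class ->
# management action area -> control attribute -> control type. The branch
# contents are illustrative, not quoted from the manual.
analysis_tree = {
    "Reliability and Integrity of Information": {      # level 1: assertion class
        "Accounting": {                                # level 2: action area
            "Information Integrity": {                 # level 3: control attribute
                "Preventive": ["authorisation of forms"],
                "Detective": ["batch control totals"],
                "Corrective": ["exception report follow-up"],
            },
        },
    },
}

def controls_for(tree, assertion_class):
    """List every control recorded beneath one assertion class."""
    found = []
    for action_area in tree.get(assertion_class, {}).values():
        for attribute in action_area.values():
            for controls in attribute.values():
                found.extend(controls)
    return found
```

Every finding in the review hangs off one or more such branches, so a walk like `controls_for` recovers the evidence supporting a "pass" or "fail" conclusion on each assertion class.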
The figure on the following page represents the structure of the RIAM control model as detailed thus far.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="1" align="left">
<tr>
<td>
<div class="center">
[[Image:ALSBA.png]]
</div>
</td>
</tr>
</table>
==Building The Model - System Documentation==
===Introduction===
To gain an understanding of any system the auditor needs to examine the system to identify and record all relationships and interactions within it between people, machines and procedures. To do this the auditor must gather information and, from that information, document the system. The extent of documentation required will be determined by the nature and complexity of the system being examined, together with the degree of reliance intended to be placed upon the system.
The documentation process can be seen as one of filling in the leaves of the analysis tree outlined in sections 3 to 5. Documentation is not a function separate from the systems analysis but part of the analytic process.
A system model in RIAM is therefore considerably more than just a flow chart of steps in a process. It incorporates issues of management influence such as resources, training, ergonomics, economics, history and, of course, transaction flows. It is quite likely that we will have available Desired Control Models for parts of the model, such as a transaction flow.
In RIAM we view the process of analysing and documenting the control system as one of model building. The detailed structure into which the documentation is written as we evaluate the system, categorises and models that system, supporting analysis by either comparison to predefined "ideal" models (we call these Desired Control Models) or where such is unavailable, by logical testing of the assertions against identified threats.
In analysing the model we have built of the "real system" we use Desired Control Models as a short-cut to assertion testing. The Desired Control Model represents a description of the system as required to satisfy selected assertions.
For example: after documenting the legislative requirements for purchasing, we might construct a desired control model for the document transaction flow that satisfies the "Compliance with Legislation" assertion class. Having established this desired control model we can then compare our "real system" model noting those steps present in the desired model yet not in the "real system" model. Evidently if there are no such steps identified, the "real system" model passes on that assertion.
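The comparison in the purchasing example is, in effect, a set difference between the two models. A minimal sketch, with hypothetical step names of our own:

```python
# Minimal sketch of comparing a "real system" model against a desired control
# model: any step present in the desired model but absent from the real
# system is a control gap; no gaps means the real system passes on that
# assertion. Step names are hypothetical.
def compare_models(desired_steps, real_steps):
    present = set(real_steps)
    missing = [step for step in desired_steps if step not in present]
    return {"passes": not missing, "missing_steps": missing}

desired = ["raise requisition", "obtain quotes", "approve order", "match invoice"]
real = ["raise requisition", "approve order", "match invoice"]
compare_models(desired, real)
# -> {'passes': False, 'missing_steps': ['obtain quotes']}
```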
In other cases the desired control model will not be definable economically. In these cases our model may be analysed by structured means, such as threat analysis or one of a range of unstructured means, such as experience!
===The Steps to Building the System Model===
<ol>
<li> The first step in documenting the system is to collect relevant background information relating to the review and the section(s) associated with the review prior to building a brief description of the system/section under review. Core background information should be found in the audit file:
<table align=center border=1>
<tr><th>CLASS OF INFORMATION</th><th> SCT</th></tr>
<tr><td>Organisation Objectives</td><td></td></tr>
<tr><td>Organisation Operating Policies</td><td></td></tr>
<tr><td>Organisation Financial Policies</td><td></td></tr>
<tr><td>Performance Measures</td><td></td></tr>
<tr><td>Industry/Other Performance Criteria</td><td></td></tr>
<tr><td>Legislative Requirements</td><td></td></tr>
<tr><td>Record of Entrance Interview</td><td></td></tr>
<tr><td>Matters Held Over From Last Audit</td><td></td></tr>
<tr><td>Matters Specifically Requested by Audit Committee</td><td></td></tr>
<tr><td>Matters Specifically Requested by Client</td><td></td></tr>
<tr><td>Engagement Brief</td><td></td></tr>
<tr><td>Organisation Structure of the Target Sections</td><td></td></tr>
<tr><td>Contacts</td><td></td></tr>
<tr><td>Important Contracts and Agreements</td><td></td></tr>
<tr><td>Other Background Information</td><td></td></tr>
</table>
<br>
<br>
<li> The second step is to form a brief description of the key static elements of the system under review. These elements provide a quick overview or description of the system under review. In most cases they are simple lists with references to the parts of the Audit File containing examples or more information:
<table align=center border=1>
<tr><th>ITEM</th><th>TITLE</th><th>REF</th></tr>
<tr><td>1</td><td>Purpose and Objectives (ie the Process) of the System</td><td></td></tr>
<tr><td>2</td><td>Organisation Structure</td><td></td></tr>
<tr><td>3</td><td>Documents (Control Forms, etc)</td><td></td></tr>
<tr><td>4</td><td>Records - Manual</td><td></td></tr>
<tr><td>5</td><td>Records - Computer</td><td></td></tr>
<tr><td>6</td><td>Inputs to the System</td><td></td></tr>
<tr><td>7</td><td>Outputs from the System - Primary System Deliverables</td><td></td></tr>
<tr><td>8</td><td>Outputs from the System - Reports</td><td></td></tr>
<tr><td>9</td><td>Industry and Other Performance Criteria</td><td></td></tr>
<tr><td>10</td><td>Matters held over from previous audit</td><td></td></tr>
</table>
<br>
<br>
<li> The third step is to identify or construct any desired control models definable from the data so far collected. In some cases the desired control model will be available from the BAG, the Internal Audit Technical Library, other reviews, the Introduction To Internal Audit course, or the appendices of this manual.
* Desired Control Models are discussed in sections 6.6.1 and 6.8
<br>
<br>
<li> The fourth step is to document the system in a structured manner that supports the concurrent analysis of the system. The following phases must be addressed during systems analysis:
<table align=center border=1>
<tr><th>PHASE</th><th>ACTION</th><th>REF</th></tr>
<tr><td>1</td><td>Conclusion (Opinion)</td><td></td></tr>
<tr><td>2</td><td>Objectives (Purpose) of the Control System</td><td></td></tr>
<tr><td>3</td><td>Framework of Analysis (Assertions to be supported)</td><td></td></tr>
<tr><td>4</td><td>Assertion Conclusion Matrix (Assertions mapped to S&W)</td><td></td></tr>
<tr><td>5</td><td>Strength and Weaknesses Schedule (S&W)</td><td></td></tr>
<tr><td>6</td><td>Key Controls (Mapped to Assertions)</td><td></td></tr>
<tr><td>7</td><td>Overview of the Control System (Principal Flows)</td><td></td></tr>
<tr><td>8</td><td>Control System Flowcharts/Documentation</td><td></td></tr>
<tr><td>9</td><td>Files & Records in the System</td><td></td></tr>
<tr><td>10</td><td>Cycles in the System</td><td></td></tr>
<tr><td>11</td><td>Transactions and Value</td><td></td></tr>
<tr><td>12</td><td>Documents in the System</td><td></td></tr>
<tr><td>13</td><td>Segregation of Duties Chart</td><td></td></tr>
<tr><td>14</td><td>Other</td><td></td></tr>
</table>
<br>
<br>
To operate effectively, a sound system of internal control should include the following characteristics:
* A simple and flexible plan of organisation which provides clear lines of authority and responsibility.
* A separation of duties between operating, custodial and accounting functions.
* Separation of duties and delegation of authority within each section.
* Duties and responsibilities of all functions should be clearly laid down.
* Clear instructions on the method of authorising and recording transactions providing adequate accounting control over assets, income and expenses.
* Competent executives and department heads to carry out the laid down procedures efficiently and effectively. Such persons should have capabilities commensurate with their responsibilities.
* Some means to be provided for a continuous internal appraisal of the effectiveness with which internal control procedures are being carried out.
<br>
<br>
</ol>
==Systems Documentation Techniques==
Within our assertion-structured systems model are many subsystems. These are documented throughout the documentation "tree". Each subsystem should be documented in the way that best suits our analytic needs. For transaction flows this might be some type of data flow; for a delegations analysis it might be an organisation chart; and for a risk analysis it might be a Fitzgerald Matrix, etc.
There are a number of techniques available to the auditor for use in documenting systems of internal control, such as:
* Narration
* Process and Document Flows
* Annotated Data Flows
* Organisation Charts
* Segregation of Duties Chart
* Assertion Matrix
* Lancaster Modelling
* Algorithm Pseudo-programming
* Simulation
Irrespective of which method is chosen, documentation should include:
* the origin of every document and record in the system
* all processing that takes place on the document
* the disposition of every document and record in the system
* a description of internal controls operating within the system
==Analysing the Model & The Desired Control Model==
At the top level the Desired Control Model is simply all the selected assertions in the 5 assertion classes being sustained! Below this level, other Desired Control Models map proof conditions for each assertion.
We generally think of the desired control model in terms of the detailed control features we would wish to see present in the control model in order for these assertions to be supported.
Many Desired Control Models may exist simultaneously for the one system. Each model may address only certain of the requirements dictated by the audit assertions.
The formulation of these models starts at the planning stage when we establish the detailed assertion list for the particular review, and is enhanced throughout the review as our detail of the structure and operation of the system grows.
In simple transaction systems, such as purchasing and payroll systems, the control principles and design requirements can often be established with a high degree of accuracy and detail at the start of the review because experience and legislation lays down in considerable detail the most appropriate method of operation of the system. Many reviews, however, are not so clearly laid down and the auditor must develop a desired model specific to the system.
In building such a model attention should be paid to the 10 management action areas rather than simply the flow of documents or information through the system. At all times the control model should be related back to the assertions which are the purpose of the review, to ensure that only relevant controls are being included.
In Section 6.6.1 we discussed how Desired Control Models can be used to analyse the control system when they are formulated in accordance with our assertions. Targeting the desired model directly at specific assertions (rather than generally at all assertions) is more likely to assure us of completeness of analysis of a given assertion. We would therefore expect multiple desired control models to apply.
==Analysing the Model & Information Flow==
Most systems involve a sequence of processes between which information flows. At the highest level (as depicted in the second step of section 6.6) the system consists of:
* Input
* Process
* Output
* Storage
We refer to these four components as "Control Points". Each component is separately controllable.
Each control point is recursively defined. Like a Babushka doll, the system is constructed from sub-systems made up of input/process/output/storage components, which are in turn made from sub-systems of input/process/output/storage components. Each control point is a sub-system.
We analyse each control point for controls exhibiting one or more of the seven control attributes identified above.
At the information flow documentation stage, we wish to extract these control points from the system and focus our controls analysis on these points. The flowcharting, or flow documentation "language" adopted should facilitate this analysis.
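The recursive control-point structure described above can be sketched as a simple data type. This is an illustrative sketch only; the class and field names here are invented, not part of RIAM.

```python
from dataclasses import dataclass, field


@dataclass
class ControlPoint:
    """A separately controllable component: input, process, output or storage.

    Each control point may itself be a sub-system built from further
    control points, giving the recursive Babushka-doll structure.
    """
    name: str
    kind: str                                       # "input" | "process" | "output" | "storage"
    controls: list = field(default_factory=list)    # controls observed at this point
    subsystem: list = field(default_factory=list)   # nested ControlPoints


# A payroll process that is itself a sub-system of four control points
payroll = ControlPoint("Payroll run", "process", subsystem=[
    ControlPoint("Timesheets", "input", controls=["supervisor sign-off"]),
    ControlPoint("Gross-to-net calculation", "process"),
    ControlPoint("Payslips", "output"),
    ControlPoint("Payroll master file", "storage", controls=["access restricted"]),
])
```

Documenting the system this way makes it natural to analyse each control point in turn for controls exhibiting the seven control attributes.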
==Analysing the Model & Assertion Testing==
We shall look at the assertion basis to analysing control models in the absence of a Desired Control Model in sections 7 and following. In particular the method of applying Assertion Testing is outlined in Section 7.7.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
* [[RIAM:Overview: Control Implementation Services (CIS)| Back to CIS]]
4d9d590d4e7581d0f43f5763ad934684717d69d4
RIAM:VLA:ASSERTIONS
0
355
547
2019-09-10T09:33:21Z
Bishopj
1
Created page with "=PHASE 2: ASSERTIONS= ==Introduction== Statement of Auditing standards states that the auditor should gain an understanding of the accounting system and related internal co..."
wikitext
text/x-wiki
=PHASE 2: ASSERTIONS=
==Introduction==
The Statement of Auditing Standards states that the auditor should gain an understanding of the accounting system and related internal controls, and should study and evaluate the operation of those internal controls upon which he/she wishes to rely in determining the nature, timing and extent of other audit procedures.
[[Image:IAControlVAssertions.png]]
The existence of these controls implies the existence of assertions.
An Assertion is something we (as auditors) wish to be able to state about a system in order to give it the "Big Tick": systems are designed and operating correctly.
[[Image:IAGoalAchievement.png]]
Be it an external or internal audit, assertions are fundamental to the review of any type of organisation or area, and form the basis of our reporting. Assertions are intended to provide a framework to help the auditor accumulate sufficient appropriate audit evidence to form a conclusion.
* ''Assertions direct the audit view and represent the scope of our opinion formation.''
In an external or financial statements audit the key opinion focus is on the financial statements as a whole. That is:
* ''whether the financial statements present fairly the state of affairs of the entity, and the assertions are geared to establish the truth and fairness of the statement.''
In an internal audit the auditor is reporting based on the assertions identified in relation to the scope of the review conducted.
In an internal audit, no one set of detailed assertions can be identified as being standard. They will normally vary according to:
* the type of entity being reviewed,
* specific management concerns (if any),
* the scope of the review,
* the purpose of the review,
* the nature of the review being conducted, and
* legislative requirements.
A standard set of core assertions can be established. These are identified later in this section.
Auditors undertake a combination of "compliance" and "substantive" procedures to obtain sufficient appropriate audit evidence to either support or suppress the assertions established.
Compliance procedures are tests which are designed to obtain reasonable assurance that the internal controls on which audit reliance is being placed are operating effectively.
Substantive procedures are designed to obtain evidence about the completeness, accuracy and validity of the data produced by the client's accounting system, and therefore to obtain reasonable assurance as to the accuracy and reliability of accounting records.
==External audit and related assertions==
As mentioned earlier, the global assertions in relation to external audits are:
* financial statements present fairly the state of affairs of the entity, and
* all relevant legislative requirements have been complied with.
The principal overall objective of an external audit is to add credibility to statutory accounts by the expression of an independent opinion thereon. They are therefore predominantly financial assertions.
In an external audit the auditor is concerned with the following general assertions areas:
* completeness,
* accuracy,
* existence,
* valuation, and
* presentation and disclosure.
===Completeness===
All account balances and transactions that should be included in the accounts are included. The completeness assertion deals with matters opposite to those of the existence assertion. The completeness assertion is concerned with the possibility of omitting items from the financial statements that should have been included, whereas the existence assertion is concerned with inclusion of amounts that should have been excluded.
===Accuracy===
Recorded transactions and account balances are mathematically accurate, are based on correct amounts, have been classified in the proper accounts, and have been accurately summarised and posted to the general ledger.
===Existence===
Assets and liabilities recorded on the balance sheet existed at balance date and revenue and expenses included on the income statement actually occurred during the accounting period.
===Valuation===
Appropriate accounting measurement and recognition principles are properly selected and applied to record transactions at appropriate amounts.
===Presentation and disclosure===
Account balances and classes of transactions are properly classified and described; appropriate disclosures are made.
The tables on the following pages present the general financial assertions and type of evidence implied therefrom. These are generally applicable to non-government and government environments.
<table>
<tr >
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA: Revenues
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Performance occurred
</td></tr>
<tr><td align=center >
Authorised
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA: Receipts from customers
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Received
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA: Trade accounts receivable
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Enforceable claims
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Production costing
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Production occurred
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA: Cost of goods sold
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Shipment occurred
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
</tr>
<tr>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Inventories
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Exist
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Purchases of goods and services
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Goods received
</td></tr>
<tr><td align=center >
Services rendered
</td></tr>
<tr><td align=center >
Authorised
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Payments for goods and services
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Payment occurred
</td></tr>
<tr><td align=center >
Authorised
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Accounts payable
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Obligation exists
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Employee costs
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Employees exist
</td></tr>
<tr><td align=center >
Eligible
</td></tr>
<tr><td align=center >
Authorised
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
</tr>
<tr>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Employee related liabilities
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Obligation exists
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Acquisitions and disposals
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Bona fide
</td></tr>
<tr><td align=center >
Authorised
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Depreciation
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Bona fide
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Property and equipment
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Exist
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Cash balances
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Exist
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
</tr>
<tr>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Investments
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Exist
</td></tr>
<tr><td align=center >
Authorised
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Investment income (loss)
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Bona fide
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Borrowings
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Obligation exists
</td></tr>
<tr><td align=center >
Covenants
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center >
</td>
<td align=center >
</td>
</tr>
</table>
This area is generally applicable only to non-government and government business entities with rights to raise debt.
<table border=1 >
<tr><th align=center >
AREA:Interest
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Bona fide
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
This area is generally applicable only to non-government and government business entities with rights to raise debt.
<table>
<tr>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Deferred costs and intangibles
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Bona fide
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Prepaid expenses
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Bona fide
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Other assets
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Bona fide
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Accrued and other liabilities
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Obligation exists
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
<td align=center valign=top >
<table border=1 >
<tr><th align=center >
AREA:Deferred revenue
</th></tr>
<tr><th align=center >
Assertion
</th></tr>
<tr><td align=center >
Received
</td></tr>
<tr><td align=center >
Proper amount
</td></tr>
<tr><td align=center >
Valued
</td></tr>
<tr><td align=center >
Recorded
</td></tr>
<tr><td align=center >
Accumulated
</td></tr>
<tr><td align=center >
Proper period
</td></tr>
</table>
</td>
</tr>
</table>
Note for Government Internal Auditors: Most Australian and International Government financial reporting guidelines do not currently allow the recognition of future government appropriations in the accounts of government funded organisations.
==Non-Financial Internal Audit and Related Assertions==
Internal audit varies widely in the nature and type of review being conducted (refer to the reasons stated above). As such, no one set of assertions can be identified that will act as a norm. Assertions, in the case of internal audits, are developed for each individual review.
Most internal audits, however, share some common ground on which standard assertions can be agreed. Common internal audit assertions include:
* efficiency,
* effectiveness, and
* economy.
Efficiency encompasses the use of financial, human and physical resources such that output is maximised for any given set of resource inputs, or input is minimised for any given quantity and quality of output.
Efficiency in relation to internal audit could include the time taken to perform a particular review and produce the internal audit report for consideration by the internal audit committee.
Effectiveness encompasses the achievement of the objectives, including other intended effects, of programs, operations or activities.
Effectiveness in relation to internal audit may include the appropriateness of the audit findings in a review, the significance of the assertions affected with respect to the particular audit finding, the risk with which the organisation may be faced and how effectively the recommendations eliminate the risks associated with the findings.
Economy encompasses the acquisition of the appropriate quality and quantity of financial, human and physical resources at the appropriate times and at the lowest costs.
Economy in relation to internal audit may include having the appropriate skills and competence to perform the required review. The skill and competence will be reflected in the standard of the work performed and the quality of the report.
==Deterministic and Non-deterministic Assertions==
In applying assertions to an internal audit review it is useful to distinguish between deterministic and non-deterministic assertions.
A deterministic assertion is defined as one where the criteria for evaluating the assertion are well understood and the risk with which the organisation may be faced can be reliably determined.
A non-deterministic assertion is one where the criteria for evaluating the assertion are not well understood, defined or agreed, and the scale of risk with which the organisation may be faced cannot generally be reliably determined.
The concept of deterministic and non-deterministic assertions can best be explained by an example.
Suppose we are conducting a purchasing review for an organisation and two of the assertions include:
* ensure value for money is being obtained; and
* executive approval is obtained for all purchases greater than $30,000.
In this example the value for money assertion can be classified as a non-deterministic assertion while obtaining executive approval is a deterministic assertion.
* Obtaining value for money assertion explained.
Obtaining value for money is the most important assertion in a government organisation, as Departmental officers are held accountable for the use of public monies. Evaluating the assertion, that is, determining whether value for money has been obtained, is somewhat difficult. There is no definite measure for value for money (it cannot be measured in real terms).
Value for money is more of a subjective assertion, and in determining whether value for money has been obtained, the auditor's professional judgement will have to be exercised (or a client survey conducted). There may be certain ground rules established to act as a guide in obtaining value for money. These may include obtaining a minimum of three quotes and giving all interested parties an opportunity to put in a quote.
In situations where a department has standing (period) contracts with particular suppliers, there is no guarantee that value for money is being obtained on every purchase. New suppliers may have entered the market, subsequent to the standing contract, offering more competitive prices.
Where an auditor is faced with a non-deterministic assertion, it is wise to break the assertion down further, or to identify standards which will make it a deterministic assertion. For example, someone says to you "please walk quickly". Quickly is a non-deterministic assertion; if, however, the term quickly is defined or given some measure (for example, 1 km per hour), then the non-deterministic assertion becomes deterministic.
* Obtaining executive approval assertion explained.
Obtaining executive approval is a deterministic assertion. Detailed tests of all purchases over $30,000 would reveal, with a high degree of certainty, what percentage of purchases greater than $30,000 have executive approval and what percentage do not.
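Because the criteria are exact, such a test can be computed directly. The sketch below is illustrative only: the purchase records and field layout are invented to show the shape of the test.

```python
# Hypothetical purchase records: (amount, has_executive_approval)
purchases = [
    (45_000, True),
    (31_500, False),
    (12_000, False),   # below threshold, so not in scope for this assertion
    (90_000, True),
]

THRESHOLD = 30_000  # executive approval required above this amount

# Detailed test: what percentage of in-scope purchases were approved?
in_scope = [p for p in purchases if p[0] > THRESHOLD]
approved = [p for p in in_scope if p[1]]

pct_approved = 100 * len(approved) / len(in_scope)
print(f"{pct_approved:.0f}% of purchases over ${THRESHOLD:,} "
      f"have executive approval")
```

The result is a precise compliance percentage, which is exactly what makes the assertion deterministic.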
==Effect of audit findings on assertions and impact on the audit report==
When the auditor commences the internal audit review, audit findings will be identified which should be reported to management. All audit findings should be listed in the strengths and weaknesses table.
Each finding should be related to the assertions identified. All assertions affected due to each finding should be weighed amongst each other and a thorough discussion should be provided in the "Implication and risk" section of the RIAM audit report findings.
Where possible, each finding should be argued from both sides, that is, the advantages/benefits and the disadvantages/risks with which the organisation may be faced by the existing approach identified in each finding.
The implication and risk section of the RIAM audit report should:
* identify the assertions affected;
* justify why the assertion is affected;
* identify the advantages (if any) the organisation may derive from the practice resulting in the finding;
* identify the disadvantages/risks with which the organisation is faced;
* weigh the advantages and disadvantages of each finding currently practiced by the organisation; and
* form an opinion on each finding which ultimately leads to the overall conclusion.
The entrance interview and scoping phases of the internal audit review attempt to interpret the assertions appropriately for the area under review, and add any assertions necessary to appropriately reflect management's specific data research needs.
==A Checklist of Internal Audit Standard Assertions:==
You will recall the 5 opinion goals detailed earlier which summarise the assertion classes we adopt under RIAM:
* Compliance with the relevant policies, plans, legislation and directions etc.;
* Accomplishment of established goals and objectives for plans and procedures;
* Reliability and integrity of data;
* Economical and efficient use of resources; and
* Safeguarding assets.
A table summary of these assertions covering a wide range of standard organisational and functional areas is provided in [[RIAM:VLA:ASSERTIONS]].
In a review of a Grants Management Control system, we might pose one or more appropriate focus question(s) and then define the criteria under which a "yes" or "no" answer is established, thus answering the question.
We might express this in the scope section of the review thus:
"The purpose of the review is to examine the Grants System and answer the focus question:
''Are grant management controls for Government grants under programme XXX operating effectively and efficiently, and are grants being awarded for the intended purpose? ''
For the purposes of this review, the question will be considered proven if audit can support the assertions that:
{| border=1
|-
|a. || Grant expenditure is bona fide (ie that acquittals are for actual grants and for services appropriate to grant activity);
|-
|b. ||Grant data reported/processed is:<br>
* Attributed to the proper period,
* Accurately calculated,
* Correctly and appropriately accumulated,
* Accurately recorded,
* Correctly disclosed,
* Properly authorised with respect to transactions (ie grantee approved costs and the Institute is satisfied that the amount is for an appropriate expense),
* Providing benefits to which grantees are eligible,
|-
| c. ||
* The relevant legislation is observed;
* Payments are in accordance with legislation, and
* Approval for grants are in accordance with the legislation (ie properly vetted by the Advisory Committee and approval is given by the Grants Board); and
|-
| d. || The assets of the Department are appropriately protected and applied (ie having an appropriate process of grant approval that assures projects are of an appropriate standard, and that institute resources are used efficiently).
|}
"
==Using Assertions (in brief)==
The first step is to formulate the assertions for the review, and have them agreed along with the scope.
The second step is to document the control systems and identify the controls in terms of the assertions:
* Each control of interest in the system will go to support one or more of the assertions on which we wish to form an opinion.
* Each weakness of interest likewise.
* We only consider control strengths and weaknesses relevant to our assertions and establish a list of these controls in a control strengths and weaknesses schedule.
* Our opinion of controls as they impact our assertions decisions is summarised on the Assertions Matrix which maps the strengths and weaknesses to the assertions.
* Use testing to confirm the assertion assessments.
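The Assertions Matrix described in the steps above can be sketched as a simple table mapping findings to the assertions they affect. The findings and markings below are invented for illustration; they are not taken from any actual review.

```python
# The five RIAM assertion classes (columns of the matrix)
assertions = [
    "Compliance",
    "Goal accomplishment",
    "Data integrity",
    "Economy & efficiency",
    "Safeguarding assets",
]

# Each finding (strength or weakness) maps to the assertions it
# supports ("+") or suppresses ("-").
findings = {
    "Purchase orders independently authorised": {
        "Compliance": "+", "Safeguarding assets": "+",
    },
    "No periodic review of standing contracts": {
        "Economy & efficiency": "-",
    },
}

# Build the full matrix: finding -> {assertion: "+", "-" or ""}
matrix = {
    finding: {a: effects.get(a, "") for a in assertions}
    for finding, effects in findings.items()
}
```

Reading down a column then shows, at a glance, the net weight of strengths and weaknesses bearing on each assertion.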
The assertion testing may be performed in a number of ways. Two approaches are:
# Desired Control Model comparison; and
# Threat testing.
Desired Control Model testing is outlined in Section 6.
Threat Testing is outlined in 7.8.
The third step is to form an opinion as to the overall support or suppression of an assertion, and report both at the summary level, and for each finding in terms of the assertions affected.
==Threat Testing==
Threat testing is an approach to assertion testing used in the absence of a desired Control Model.
Each assertion is examined in turn. For each assertion a list of causes for failure of an assertion is prepared based on experience, statistical sampling, management advice, consultant advice, and checklists, etc.
Each cause is described in terms of events rather than the "absence" of a control, eg.
''"Purchase made for personal use"''
rather than
''"Purchase Orders do not have to be authorised"''
The latter is describing the absence of a control.
These causes are called threats. To each threat a probability of occurrence may be assigned (perhaps based on historic samples).
Each threat is then applied to the control system model to investigate the probability of the system preventing the threat (ie. mitigating the risk). This probability is expressed as a probability of system failure.
We can then multiply the risk of the threat by the risk of system failure and get an overall probability of the assertion not being sustained in operation. The sum of all such threat related probabilities is the total risk of assertion failures.
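The arithmetic above can be illustrated with a minimal sketch. The threats and probabilities here are invented, and the simple sum assumes threats are independent with small individual probabilities.

```python
# Each threat: (probability of occurrence,
#               probability the control system fails to prevent it)
threats = {
    "Purchase made for personal use": (0.05, 0.10),
    "Duplicate payment to supplier":  (0.02, 0.25),
}

# Risk per threat = P(threat occurs) * P(system fails to prevent it)
assertion_failure_risk = {
    name: p_threat * p_system_failure
    for name, (p_threat, p_system_failure) in threats.items()
}

# Total risk of the assertion not being sustained in operation
total_risk = sum(assertion_failure_risk.values())
print(f"Total risk of assertion failure: {total_risk:.4f}")  # 0.0100
```

Here each threat contributes 0.005, giving a total assertion-failure risk of 0.01.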
Risk analysis is further discussed in Section 9.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
* [[RIAM:Overview: Control Implementation Services (CIS)| Back to CIS]]
ce74d1f0a2c3c9565e9fe1fc88a27ce78a9955b5
RIAM:VLA:ANALYTIC REVIEW PROCEDURES IN INTERNAL AUDIT
0
356
548
2019-09-10T09:34:50Z
Bishopj
1
Created page with "=PHASE 1 to 3: ANALYTIC REVIEW PROCEDURES= ==Definition== Analytic Review Procedures (ARP) are used to analyse significant ratios and trends including the resulting investi..."
wikitext
text/x-wiki
=PHASE 1 to 3: ANALYTIC REVIEW PROCEDURES=
==Definition==
Analytic Review Procedures (ARP) are used to analyse significant ratios and trends, including the resulting investigations of unusual fluctuations and items. It is the study and evaluation of relationships among measurable financial and performance information. ARP are used on the assumption that relationships exist and that variances in these relationships can both provide audit evidence and point to the need to collect further evidence. It is a form of testing, and a source of audit evidence, which is generally rapid to assemble but relies on the integrity of the underlying data.
Non-financial application of ARP revolves around the analysis of management performance indicators such as demand, output, efficiency, effectiveness and environmental indicators.
Examples of non-financial applications of ARP include: a change in demand with no change in staffing level, an increase in the percentage of complaints, and client surveys indicating timeliness problems.
==Objectives of Analytic Review Procedures==
The objectives of ARP include:
* identification of potential risk areas that will require focussed audit attention;
* providing evidence in relation to established audit assertions to vary the nature, timing and extent of audit tests; and
* performing reasonableness tests to satisfy audit assertions.
==Considerations before using Analytic Review Procedures==
# The objectives ARP intend to achieve and the degree of assurance that can be obtained from it.
# The type of organisation under consideration and the relevance of ARP.
# Available industry ratios and their comparability to that of the organisation.
# Volatility within the organisation and industry and its effects on ARP.
# The relevance and reliability of in-house information and data in utilising ARP.
# Availability of performance measurement data and the precision of the performance indicators in measuring identifiable variables.
==When Can ARP Be Used==
ARP can be used at three stages. These are:
# at the planning stage;
# at the systems evaluation and data analysis stage; and
# at the reporting stage.
===ARP at the planning stage===
Assists:
* by increasing the understanding of the organisation's operations;
* by highlighting potential risk areas which may require increased attention; and
* with planning the nature, extent and timing of audit procedures to verify the established assertions.
===ARP at the systems evaluation and data analysis stage===
Assists:
* with confirming the hypotheses reached at the planning stage;
* by identifying changes, if any, from the planning stage encouraging us to revise the nature, extent and timing of audit procedures if necessary; and
* verifying the validity of the established assertions.
===ARP at the reporting stage===
Assists:
* justifying or explaining conclusions reached at the systems evaluation and data analysis stage; and
* confirming conclusions reached.
===Nature of ARP===
This varies largely depending on individual audit circumstances. ARP includes comparison of financial information with:
* comparable information for prior periods, for example, a reduction in stock turnover but an increase in stock costs, suggesting less efficient buying quantities;
* anticipated results, for example comparing actual results with budgeted results and investigating variances; and
* similar industry information, for example, comparing floor plan interest between car dealers of different sizes, and average staff to floor space ratios.
ARP also includes the study of relationships:
* among elements of financial information, for example gross profit percentages;
* between financial information, for example, wages to number of employees;
* between various performance indicators such as demand to output indicators; and
* between operating periods.
==ARP Techniques==
Various techniques are available for use by auditors ranging from simple techniques to more complex analyses both for financial systems and non-financial systems. These include:
{|
! AR TECHNIQUE !! NATURE OF PROCEDURE
|-
| '''''Financial Techniques''''' ||
|-
| Delta analysis || Evaluative
|-
| Common form statements || Evaluative
|-
| Reasonability Analysis || Evaluative
|-
| Trend analysis || Evaluative
|-
| Ratio Analysis || Evaluative
|-
| '''''Other'''''
|-
| Time Series Analysis || Predictive
|-
| Time Series Modelling || Predictive
|-
| Regression Analysis || Predictive
|-
| Financial Modelling || Predictive
|-
| '''''Non-financial Techniques/Performance Indicators'''''
|-
| Key Result Area Appraisal || Evaluative
|-
| Efficiency Indicators || Evaluative
|-
| Effectiveness Indicators || Evaluative
|-
| Demand Indicators || Evaluative
|-
| Output Indicators || Evaluative
|-
| Time Indicators || Evaluative
|-
| Staff Utilisation and Productivity || Evaluative
|-
| Client satisfaction || Evaluative
|}
The above list is not exhaustive but suffices for the scope of this course.
We will now briefly explain some of the above AR techniques.
In the foregoing table we classified the techniques as either evaluative or predictive. The distinction is not particularly important, but it serves to highlight that ARP can focus on both describing the present and anticipating the future.
==Financial Systems ARPs==
===Delta Analysis===
This is comparison of current year item to a norm, for example:
* a current year item to that of a prior year
* a current year item to budgeted figures.
Delta analysis measures the difference between the two figures as a percentage of the independent variable. In the examples above the independent variable would be the prior year results and the budget respectively.
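As a minimal illustration (the figures below are hypothetical, not from the manual), delta analysis reduces to a single calculation:

```python
def delta_pct(current: float, base: float) -> float:
    """Difference between two figures expressed as a percentage of the
    independent variable (the norm, e.g. prior year result or budget)."""
    return 100 * (current - base) / base

# Hypothetical: current year stock costs against the prior year figure.
print(delta_pct(110_000, 100_000))  # -> 10.0
```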
===Common Form Statements (CFS)===
CFS are statements that express balance sheet components as a percentage of some figure such as total assets and profit and loss components as a percentage of total revenues, assuming the entities have comparable operating structures.
This facilitates analysis by comparison of entities, such as departments, companies or even divisions of different sizes.
CFS are particularly powerful where the key figures in an entity's financial statements are dependent on external factors such as sales. Here we wish to preserve a profit margin, and therefore expect expenses to vary in step with sales.
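A sketch of the idea, using a hypothetical profit and loss statement restated with every component as a percentage of total revenue:

```python
# Hypothetical P&L restated as a common form statement: each component
# is expressed as a percentage of total revenue, so entities (or
# divisions) of different sizes become directly comparable.
pnl = {"Revenue": 200_000, "Cost of sales": 120_000, "Overheads": 50_000}
cfs = {item: 100 * amount / pnl["Revenue"] for item, amount in pnl.items()}
print(cfs)  # -> {'Revenue': 100.0, 'Cost of sales': 60.0, 'Overheads': 25.0}
```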
===Reasonability Analysis===
This compares reasonableness of account balances in relation to some other account balance or some non-financial base.
Some examples include;
* interest expense to borrowings
* depreciation charge to fixed assets
* wages to employee number
* dividend income to investments
===Trend Analysis===
This is similar to common form statements in that all numbers are expressed as a percentage of a base. Each number in the trend statement is expressed as a percentage of its own level in a selected base year. Percentage analysis focuses on trend changes rather than the absolute magnitude of dollar changes. Such an analysis may also be applied to physical units, production hours, as well as dollar units.
Trend analysis should be evaluated by the auditor using knowledge of the client's business as to whether past trends are expected to continue into the current period, or whether changes are expected.
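The base-year indexing described above can be sketched as follows (sales figures are hypothetical):

```python
# Hypothetical sales over three years, base year first.  Each figure is
# expressed as a percentage of its own level in the selected base year,
# so the focus is on trend changes rather than absolute dollar changes.
sales = [400, 440, 520]
trend = [100 * v / sales[0] for v in sales]
print(trend)  # -> [100.0, 110.0, 130.0]
```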
===Ratio Analysis===
Ratio analysis is an expression of the relationship between relevant financial information items. Ratios may be compared with benchmarks established using internal or external data.
Some of the more prominent ratios include stock turnover, debtors turnover, gross profit percentage (for revenue collection activities).
==Non-Financial Techniques/Performance Indicators==
The purposes of these reviews include:
* whether objectives of the programs are being achieved;
* whether the functions of the programs are conducted effectively and efficiently;
* the overall soundness of procedures and systems;
* the overall soundness of management practices and procedures;
* the adequacy of internal controls;
* whether the staffing structure is appropriate for the program objectives;
* cost effectiveness of procedures;
* efficiency and adequacy of EDP/information technology facilities; and
* whether the overall policies and guidelines of DOF and the Auditor-General for the program are being followed.
Types of Non-Financial Techniques/Performance Indicators include:
* Key Result Area Appraisal
* Effectiveness Indicators
* Output Indicators
* Client Surveys
* Demand Indicators
* Efficiency Indicators
* Environment Indicators
==Relative Costs - ARP, Compliance & Substantive Tests==
The internal audit function is no different to any other area of the department in that it has to make best use of the resources made available to it. For this reason, in devising and carrying out the test program it is important to be aware of the costs associated with various types of tests and the need to minimise the more costly approaches.
Analytical review is generally the least expensive of the audit tests available as it involves no sampling and uses already available management information.
Compliance tests, which involve testing the key control points in the controls analysis for compliance with the intended system (ie are the forms authorised as required, etc) are generally the next least expensive because they provide for the "proving" of figures and balances. If the system design is accepted during systems analysis, the compliance test establishes reliance on the operation of the system that produces the figures rather than requiring the proving of the individual figure or balance. In addition, compliance testing allows for an assessment of the degree of statutory and managerial compliance with required procedures and practices which is much more difficult under other testing methods.
Substantive tests of transactions are the most expensive tests because recalculations and tracings to source documentation and statistically significant financially based samples are required. When combined tests of transactions are done, the compliance part of the tests are usually less costly. (In some literature ARP are treated as a form of substantive test.)
==Other Quantitative Techniques (Operations Research)==
Techniques related to ARP, that are quantitative in nature and of potential use to Internal Auditors include:
* Forecasting
* Expected Value
* Bayseian Analysis (Decision Trees or Payoff Tables)
* Regression Analysis and Curve Fitting
* Linear Programming
* Project Scheduling and Network Models
* PERT Network Model
* Simulation
* Queuing Theory
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
80ca69d2f33f4410529845a9002899059b200d88
RIAM:VLA:AUDIT RISK ASSESSMENT & SENSITIVITY ANALYSIS
0
357
549
2019-09-10T11:26:24Z
Bishopj
1
Created page with "=PHASE 1 to 3: RISK ASSESSMENT & SENSITIVITY ANALYSIS= ==Introduction== In auditing, preparation of the testing program relates to the two areas of audit risk that arises b..."
wikitext
text/x-wiki
=PHASE 1 to 3: RISK ASSESSMENT & SENSITIVITY ANALYSIS=
==Introduction==
In auditing, preparation of the testing program relates to the two areas of audit risk that arise because a 100% check cannot be undertaken. These risks are:
* material errors can occur because a control objective has not been achieved
* material errors can occur and remain undetected
For the first area of audit risk it will be necessary to test those controls upon which reliance has been placed by means of compliance testing and to analyse those controls which have not been met by risk analysis.
The second part of audit risk will be minimised by the use of substantive testing.
==Risk Assessment==
The Risk Assessment evaluates the system design focussing on the designed-in controls and the degree to which each assertion is sustainable. The manual identifies two sources of risk:
1. Where the control is absent
2. Where the control is present but not adequate.
Where the design appears to support the assertion, a control point is identified.
The collection and interaction of these control points creates the control system.
In RIAM this risk assessment step is called Systems Evaluation or Analysis, to distinguish it from the higher level Risk assessment (of auditable areas) performed during planning and the lower level risk analysis captured by Audit Risk.
==Audit Risk, Inherent, Control and Detection Risks.==
[[Image:IAInherent_Control_Detection_Risk_Filter.png]]
The US Statement of Auditing Standard (SAS) 47 (AU 312) defines audit risk as the risk that the "auditor may unknowingly fail to modify his/her opinion on financial statements that are materially misstated." A later US pronouncement SAS 55 (AU 319) states "The risk of material misstatement in financial statement assertions consist of inherent risk, control risk, and detection risk." Because the scope of internal auditing is greater than that of external auditing, the overall audit risk extends not only to financial statements but also to unwitting failure to uncover material errors or weaknesses in the section/department audited.
[[Image:IA_Inherent_Control_Risk_Matrix.png]]
Therefore, the definitions below, taken from SAS 55, are applicable to both external and internal auditing:
* Inherent risk is the susceptibility of an assertion to a material misstatement assuming there are no related internal control structure policies and procedures.
* Control risk is the risk that a material misstatement that could occur in an assertion will not be prevented or detected on a timely basis by the entity's internal control structure policies or procedures.
* Detection risk is the risk that the auditor will not detect a material misstatement that exists in an assertion.
==Calculating and Ranking Risk - Using Weights and Questionnaires==
The measurement of risk may follow any one of a number of methods. A good planning strategy is largely independent of the method of risk assessment adopted. The ranking criteria adopted is significant to the ordering/prioritising of tasks and should be the result of discussions with management.
The primary restriction to risk analysis reflected in the proposed approach is the assumption that risks may be separated into:
1. Inherent Risks
2. Control Risks
3. Detection Risks
==Using Audit (or Assertion) Risk to determine acceptable detection risk and in turn effect sample sizes.==
Recall the Audit Risk formula:
<math>AR = IR \times CR \times DR</math>
Given the Audit Risk, we can determine the detection risk:
<math>DR = \frac{AR}{IR \times CR}</math>
Assessing the control risk at something less than 100% requires us to:
* identify the policies and procedures that are likely to be relevant to the particular assertion being examined, that are likely to prevent or detect misstatements; and
* test controls to evaluate effectiveness.
This testing may be done with a preliminary compliance test to establish the expected error levels.
The lower the level of control risk the more assurance the audit evidence must provide that policies and procedures appropriate to the particular assertion are effective.
The assessment of overall system audit risk is used in at least two ways:
* To set the materiality at which a misstatement in financial data is considered significant; and
* To set the detection risk using the above formula.
The audit risk can therefore be set to a level reflecting "the risk of misstatement (or the risk of incorrect acceptance)". This would ordinarily be something less than 5%. If we have estimated the inherent risk and control risk, we can determine the acceptable level of detection risk:
<math>DR = \frac{AR}{IR \times CR}</math>
<math>DR = \frac{5\%}{50\% \times 30\%} = 33\frac{1}{3}\%</math>
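The worked figures can be checked with a short script (the risk values are those of the example above):

```python
# Acceptable detection risk from the audit risk model:
# audit risk 5%, inherent risk 50%, control risk 30%.
AR, IR, CR = 0.05, 0.50, 0.30
DR = AR / (IR * CR)
print(round(100 * DR, 1))  # -> 33.3 (i.e. 33 1/3 %)
```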
Theoretically, the acceptable level of detection risk goes to establishing the required confidence level in sample sizes. The lower the acceptable level of detection risk, the greater the assurance that must be provided by substantive tests. The level of assurance required influences the extent of substantive tests and the sample sizes.
==Sensitivity Analysis==
Sensitivity Analysis is used to test the behaviour of a particular model to changing conditions. It is concerned with how the model solution changes as a result of changes in the problem parameters. Model parameters are generally not known with certainty because there is usually some degree of uncertainty in the real world. Therefore it is often advantageous to know how changes in the parameters change the optimal solution.
In formulating and solving linear programming problems, certain initial assumptions are made that all values of the coefficients are derived from the analysis of data and that they represent average values or best estimate values. Accordingly it is important to analyse the sensitivity of the solution to variations in those coefficients or in the estimates of the coefficients.
If a given solution is not sensitive to changes in the parameters, then the solution is considered more reliable than that in a highly sensitive situation. Given an optimum solution that is relatively sensitive, special attention should be given to forecasting future parameter values. On the other hand, an optimum solution with little sensitivity to change does not merit the effort and resources necessary to estimate the values of the parameters more accurately.
Given that many decision problems utilise estimated parameter values in formulating a model, sensitivity analysis becomes an integral part of decision analysis.
In the earlier example of planning using risk ranking techniques, we observed that subjectivity was present both in the scores chosen and the relative weightings of the variables. In that model we used rules for awarding scores to minimise subjectivity and provide a more rigorously verifiable result. We did not, however, have a method to establish the weights for the scored variables.
The weights are exactly like the coefficients of the linear programming problem mentioned above. Sensitivity analysis therefore provides us with a way to establish the overall "risk" of the planning model in terms of the degree of sensitivity.
We can measure the sensitivity by determining by how much each weight would have to vary (holding scores constant) before the priority ranking changed significantly. The analysis will highlight those weightings which are particularly significant to the final result. The greater the consistency of scores that a particular section receives across all the variables, the less sensitive will be its score to changes in the weights of the variables.
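A minimal sketch of this weight-sensitivity check (the auditable areas, scores and weights below are entirely hypothetical):

```python
# Hypothetical risk-ranking model: three auditable areas scored on three
# variables, combined with weights.  We perturb each weight in turn
# (holding scores constant) and see whether the priority ranking changes.
scores = {"Payroll": [4, 3, 5], "Purchasing": [3, 4, 3], "Assets": [2, 2, 2]}
weights = [0.5, 0.3, 0.2]

def ranking(w):
    totals = {area: sum(wi * si for wi, si in zip(w, s))
              for area, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

base = ranking(weights)
changes = 0
for i in range(len(weights)):
    for delta in (-0.1, 0.1):
        w = list(weights)
        w[i] += delta
        if ranking(w) != base:
            changes += 1
print(base, changes)
```

With these consistent scores the ranking survives every ±0.1 perturbation (`changes` is 0), illustrating the point that consistent scores make a section's priority insensitive to the weights.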
==Compliance Testing==
The performance of (and compliance with) the control points is tested by compliance testing - that is, testing compliance of the system operation with the system's controls. Generally, we do not test all control points. Rather, we test only those controls that are defined as "Key Controls".
The breaching of a key control will cause a violation of an audit assertion for the whole system - that is why they are key controls! For example, failing to authorise a purchase order before processing might cause a violation of the "All payments are for authorised transactions and services" assertion.
Where tests involve manual steps (ie not solely on the computer using CAATS), generally compliance tests use attribute sampling to determine the sample sizes.
==Substantive Testing==
Substantive testing focuses on balances, with the purpose of detecting and measuring error values and error rates. The substantive procedures might include:
* Sampling invoices and checking the arithmetic, tracing the invoice to the purchase order, payment and into the ledger - comparing the balances at each point.
To distinguish this from compliance testing, the equivalent compliance test might simply verify that the invoice is initialled as having been checked, verified and posted.
Substantive testing includes Analytical Review which is covered in detail in the next section.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
45e08147c71534db3fb726a770bc1e1a4f55fd36
RIAM:VLA:AUDIT SAMPLING AND AUDIT TESTING
0
358
550
2019-09-10T11:27:48Z
Bishopj
1
Created page with "(TO BE FORMATTED) PHASE 3: AUDIT SAMPLING AND AUDIT TESTING 10.1 Introduction The testing phase follows the evaluation phase where the focus was on the control framework w..."
wikitext
text/x-wiki
=PHASE 3: AUDIT SAMPLING AND AUDIT TESTING=
==Introduction==
The testing phase follows the evaluation phase where the focus was on the control framework with the aim of gauging some assurance about the system operation and achievement of objectives by analysing risk. But with documentation providing only limited verification of its operation, the conclusions provided to this point are not final, hence the need for further testing - to supply the auditor with the necessary proof to support the audit opinion.
The approach taken to designing the tests will depend on the nature of the audit problem. There is, however, one guiding principle which needs to be borne in mind when framing the tests. Does the direction of the test achieve the desired audit objectives?
The objectives of each verification test must be clearly defined prior to developing a comprehensive audit program. These objectives and the nature of the policy, system or activity under review will determine the verification techniques required. A wide range of techniques can be selected for developing the audit program, including:
* transaction testing
* personal observation and enquiry
* report and data analysis
* independent or third party confirmation or interviews
* comparison and analysis of costs or data for similar activities
It is expected that auditors will, where circumstances warrant it, utilise statistical sampling techniques in the testing phase of their audits.
==Why Sample? - When To and When Not To Sample==
Sampling is the application of audit procedures to less than 100% of the population. This involves selecting a proportion of the population and using characteristics of that portion to draw inferences about the entire population.
There are two basic approaches:
* Statistical
* Judgemental (or Non-Statistical)
Consider the objectives to be achieved by using sampling:
* compliance testing
* substantive testing
Factors in deciding when to use or not to use sampling include:
* objectives
* data or information involved
* access to computer facilities
* cost/benefit
* other methods of testing to achieve the desired results
==Types of Samples, Their Features, Use and Benefits==
Attributes Sampling:
* used for compliance testing
* estimates the rate of deviations in the population
* gives a statistical result from which inferences about the population can be drawn
Variables Sampling:
* used for substantive testing
* estimates the $ amount of error in a population
* easier to use
Related approaches:
* Mean per Unit Sampling
* Probability-proportional-to-size Sampling
* Difference Estimation Sampling
* Stop-go Sampling
==Key Terms in Sampling==
Sampling - the process of applying audit procedures to less than 100% of the population.
Population - the data set from which the sample is taken to assist in reaching a conclusion. The individual items that make up a population are known as sampling units.
Sampling risk - the risk that the auditor's conclusion based on a sample is different to that which would be reached if the whole population was used.
Non-sampling risk - the risk not specifically caused by sampling, ie the risk that incorrect procedures are being used.
Confidence level or reliability - The percentage of times that one would expect the sample to adequately represent the population. Thus, a confidence level of 90% should result in samples that adequately represent the population 90% of the time. Confidence level is related to audit risk because the auditor is accepting a risk of 10% (100%-90%) that the sample will not represent the population. In sampling for variables (substantive testing), the primary concern is the risk of incorrect acceptance, or the risk that the sample supports the conclusion that the assertion tested is not materially misstated when it actually is materially misstated. In sampling for attributes (tests of controls), the primary concern is the risk of assessing control risk too low, or the risk that the assessed level of control risk based on the sample is less than the true operating effectiveness of the internal control structure policy or procedure. These wrong conclusions are also called type II or beta errors.
Precision of confidence interval - An interval around the sample statistic (for example, the mean) within which one expects the true value of the population to fall. Precision is based upon tolerable misstatement determined by materiality considerations. In sampling for attributes (tests of controls), precision is determined by subtracting the expected deviation rate from the tolerable rate. In sampling for variables (substantive testing), precision is determined by considering tolerable misstatement in conjunction with the confidence level (an effectiveness issue) as well as the risk of incorrect rejection. The risk of incorrect rejection is the risk that the sample indicates that the assertion tested is materially misstated when in fact it is not misstated (termed a type I or alpha error). It relates to efficiency issues because the auditor will likely continue auditing until the balance is finally supported. A table is typically consulted to determine the appropriate precision for various risk levels. A rule of thumb often used is to set precision at 50% of tolerable misstatement.
Alpha (Type I) error is the rejection of a correct hypothesis. The risk is incorrect rejection of an assertion and the risk of assessing control risk too high both relate to alpha error. These risks are aspects of sampling risk that involve efficiency issues.
Beta (Type II) error is the failure to reject an incorrect hypothesis. The risk of incorrect acceptance of an assertion and the risk of assessing control risk too low both relate to beta error. These risks are aspects of sampling risk that involve effectiveness issues.
Sampling without replacement means not returning a sample item to the population to prevent its being selected more than once. Audit sampling is customarily done without replacement.
Sampling with replacement means returning a sample item to the population so that it has a chance to be chosen more than once.
Standard deviation is a measure of the degree of compactness of the values in a population. This measure is used by the auditor to help determine appropriate sample sizes. The first formula given below is for the population standard deviation (σ). It is the square root of the quotient of the sum of the squared deviations from the mean (μ), divided by the number of items in the population (N). The sample standard deviation is found using the second formula given below, where the sample standard deviation is s, the mean of the sample is <math>\bar{x}</math>, and the sample size is n.
<math>\sigma = \sqrt{\frac{\sum (x_i - \mu)^2}{N}}</math>
<math>s = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n - 1}}</math>
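Both formulas are implemented directly in Python's standard library, which makes for a quick check (the sample values are hypothetical): `statistics.pstdev` divides by N (population formula) and `statistics.stdev` divides by n - 1 (sample formula).

```python
import statistics

values = [2, 4, 4, 4, 5, 5, 7, 9]          # hypothetical sample
print(statistics.pstdev(values))            # population formula -> 2.0
print(round(statistics.stdev(values), 3))   # sample formula -> 2.138
```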
Hypothesis testing involves a predetermined rule for evaluating the auditee's assertion. Strictly speaking, the auditor either rejects the hypothesis or is unable to do so. In auditing literature, however, a hypothesis that cannot be rejected is often said to be "accepted". This usage is followed even though a sample is never a sufficient basis for concluding that the hypothesis is in fact true. In hypothesis testing, the auditor determines the acceptable risk of incorrect rejection and the acceptable range (the nonrejection region) about the auditee's value. The auditor "accepts" (is unable to reject) this value if the sample value is within the auditor's prespecified materiality limits about the auditee's assertion. The auditor rejects the auditee's figure if the sample precision interval is sufficiently outside the precision limits. The sample tests the null hypothesis that there is a null or zero difference between the true value and the assertion. The null hypothesis will consist of an equality (Ho = a given value) if a two-tailed test is involved (relatively large or small values will be rejected). If a one-tailed test is involved (extreme values on one side can be ignored), the null hypothesis will be an inequality (Ho ≤ or ≥ a given value).
Judgment (nonstatistical) sampling uses the auditor's subjective judgment to determine the sample size (number of items examined) and sample selection (which items to examine). This subjectivity is not always a weakness. The auditor, based on other audit work, may be able to test the most material items and to emphasise the types subject to high control risk. The auditor's working relationship with managers means that in many audits (particularly those of a non-financial nature) small scale judgement sampling may be sufficient. A judgementally selected sample can not be projected onto the full 'audit' population from which the sample is drawn, but may be sufficient when the purpose of the testing does not require mathematically rigorous proofs, extrapolation to the full population, accurate estimates of error rates or where management has waived the need for scientific testing.
Probability (random) sampling provides an objective method of determining sample size and selecting the items to be examined. Unlike judgment sampling, it also provides a means of quantitatively assessing precision (how closely the sample represents the population) and reliability (confidence level, the percentage of times the sample will reflect the population).
Tolerable rate is the maximum rate of deviations from a prescribed internal control structure policy or procedure that the auditor is willing to accept without changing his/her assessment of control risk for the assertions related to the policy or procedure.
==Judgemental and Statistical Sampling==
* Both types of sampling have in common the reliance on judgement for planning, executing the plan and evaluating the results.
* Both methods are subject to sampling and non-sampling risk.
* The critical difference between the two is that the laws of probability are used to control sampling risk in statistical sampling.
==Judgemental v. Statistical Sampling==
===Statistical Sampling===
{|
! ADVANTAGES !! DISADVANTAGES
|-
| allows sample size to be set at a minimum || requires the use of a random sample, which may be costly and time consuming
|-
| objective method of determining sample size, sampling risk and evaluating the sample || may require additional training of staff
|-
| allows more control over sampling risk || many audit areas are not extensive enough to warrant its full use
|-
| easy to use if access to computers is available ||
|-
| allows for specific levels of reliability (confidence) and degree of precision (materiality) ||
|}
===Judgemental Sampling===
{|
! ADVANTAGES !! DISADVANTAGES
|-
| allows the auditor to use his judgement, ie on high risk items || cannot quantitatively measure sampling risk
|-
| may be more cost effective than statistical sampling || cannot draw statistical inferences from the sample results
|-
| || presents the risk of either under-auditing or over-auditing
|-
| || may be inappropriate for inexperienced staff
|}
==Calculating Sample Sizes==
===Dollar Unit Sampling (DUS) - Assumes No Errors in Sample===
This may also be known as probability-proportional-to-size (PPS) sampling, cumulative monetary amount (CMA) sampling, and by many other names. DUS relies on an attribute sampling approach to express deviations in dollar amounts rather than as a deviation rate.
The formulae for determining a sample size in DUS are:
Equation 1:
<math>n = \frac{BV \times RF}{TE}</math>
Equation 2:
<math>n = \frac{RF}{TE/BV}</math>
Equation 3:
<math>I = \frac{BV}{n}</math>
where:
* n = sample size
* BV = book value of items tested
* RF = reliability factor
* TE = tolerable error
* I = skip interval
DUS reliability factors:
{|
! Reliability Required !! RF
|-
| 99% || 4.605
|-
| 95% || 2.996
|-
| 90% || 2.300
|}
EXAMPLE:
Assume:
* BV = $5,000,000
* N = 2,000
* TE = $250,000
* CL = 95% (confidence level, so RF = 2.996)
(A)
<math>n = \frac{BV \times RF}{TE} = \frac{\$5M \times 3.0}{\$250{,}000} = 60</math> (rounding RF)
(B) Skip interval:
<math>I = \frac{BV}{n} = \frac{\$5M}{60} = \$83{,}333</math>
(C) If there are no errors in the sample, the auditor is 95% confident that overstatement does not exceed $250,000.
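The DUS figures above can be reproduced with a few lines (values as in the example):

```python
import math

# DUS sample size and skip interval: book value $5M, tolerable error
# $250,000, reliability factor 2.996 (95% confidence), rounding up.
BV, TE, RF = 5_000_000, 250_000, 2.996
n = math.ceil(BV * RF / TE)   # 59.92 rounded up to 60
interval = BV // n            # skip interval in dollars
print(n, interval)  # -> 60 83333
```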
Assume:
* Z = 1.96 (95% confidence)
* N = 1,000 (population size)
* A = ± 2% (precision)
* p = 5% (estimated error rate)
(1)
<math>n_{(e)} = \frac{1.96^2 \times 0.05 \times (1 - 0.05)}{0.02^2} = 456</math>
(2)
<math>n_{(f)} = \frac{n_{(e)}}{1 + (n_{(e)}/N)} = \frac{456}{1 + (456/1000)} = 313</math>
===Attribute Sampling===
* generally used for compliance testing
* used to estimate the rate of deviations from prescribed control procedures in a population
The formula has two parts. The first formula is:
<math>n_{(e)} = \frac{Z^2 p (1 - p)}{A^2}</math>
where:
* n<sub>(e)</sub> = first estimate of sample size
* Z = standard deviation factor
* p = occurrence rate
* A = desired precision
* N = population size
The second formula uses the first estimate of sample size and adjusts it to fit the population:
<math>n_{(f)} = \frac{n_{(e)}}{1 + (n_{(e)}/N)}</math>
where:
* n<sub>(f)</sub> = final sample size
* n<sub>(e)</sub> = first estimate of sample size
* N = population size
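The two-step attribute formula, applied to the earlier worked figures (Z = 1.96, p = 5%, A = 2%, N = 1,000):

```python
# Attribute sampling: first estimate assuming an infinite population,
# then the finite-population adjustment.
Z, p, A, N = 1.96, 0.05, 0.02, 1000
n_e = Z**2 * p * (1 - p) / A**2     # first estimate
n_f = n_e / (1 + n_e / N)           # adjusted for population size
print(round(n_e), round(n_f))  # -> 456 313
```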
===Variable Sampling===
Equation 1: Calculate the standard deviation of the test sample
<math>s = \sqrt{\frac{\sum x^2 - (\sum x)^2/n}{n - 1}}</math>
Equation 2: Estimate the required sample size, assuming an infinite population
<math>n_{(e)} = \left( \frac{Zs}{A} \right)^2</math>
Equation 3: Adjust the sample size for a finite population
<math>n_{(f)} = \frac{n_{(e)}}{1 + (n_{(e)}/N)}</math>
Equation 4: Calculate the sampling error (error in sample per unit sampled)
<math>A = \pm Z \frac{s}{\sqrt{n}} \sqrt{1 - \frac{n}{N}}</math>
Equation 5: Estimate the total population error
<math>A^P = \pm (A \times N)</math>
where:
* s = standard deviation of the sample
* Σ = sum of
* x = value of each sample item
* n = sample size
* N = population size
* Z = standard deviation factor
* A = precision (sampling error) in $
Generally this approach is used for substantive testing. It is used to estimate the total $ value of a population or the total $ amount of errors in a population.
(1) VS example - Sample Size
[Pop Size] N = 5000
[Pop Value] = $500,000
[Designed Confidence] C = 90%
[Desired Std. Dev. fact] Z = 1.645
Std. Dev. of Sample = $40 from 200 items selected at random
Desired Prec. = 4%
Desired Prec. of Pop. = $20,000
Desired Prec./Unit A = ± $4
Thus
n SUB {(e)}~=~({Z SUB {(s)}} OVER {A}) SUP 2~=~({1.645~*~40} OVER {4}) SUP 2~=~271
n SUB {(f)}~=~{n SUB {(e)}} OVER {1~+~(n SUB {(e)}/N)}~=~{271} OVER {(1~+~{271} OVER {5000})}~=~ UNDERLINE {257}
Sample size = 257 select another 57 items
(2) VS example - Precision of Estimates
n(e) = 257 N = 5,000
BV = $27,000 Z = 1.645 (90%)
AV = $23,130
S = $40
(2.1) Aug. Unit = AU = 23,130/257 = $90
Est.Inv.Val. = $90 * 5000 = $450,000
(2.2)
THEREFORE ~A~=~Z~{s OVER SQRT n}~( SQRT {1~-~n OVER N})
=~1.645~{40 OVER SQRT {257}}~(~ SQRT {1~-~{257} OVER {5000}})
=~$ UNDERLINE 4~~( +- )~ ~unit~ precision
(2.3) ± 4 * 5000 = ± $20,000 = Pop. Precis.
(2.4) Est Inv. Val ranges from $430,000 to $470,000
10.7.4 Discovery Sampling
This method is used when the auditor is examining a population for fraud or gross errors are expected
This method involves setting two parameters
critical rate of deviation ( max errors allowed / population )
probability
In this case it is necessary to use discovery sampling tables.
10.8 Effect of risk analysis on the extent of substantive testing
Risk of incorrect rejection:
is the risk that, the sample tells the auditor that the balance is materially misstated , when in fact it is not misstated
The risk will effect the efficiency of the audit , as it generally results in more testing having to be performed.
Risk of incorrect acceptance
is the risk that, the sample tells the auditor the balance is not misstated, when in fact the balance is misstated
This may have a serious effect on the audit as it may result in the wrong opinion being issued
Both of the above risks have an inverse effect on sample sizes, that is, a lower risk level will result in a higher sample size
10.9 Substantive Test Risk Matrix
Evidence indicates relevant account balance should be: Relevant Account balance is in fact:
Fairly Stated Not Fairly Stated
Accepted Correct
Decision Risk of incorrect Acceptance
Rejected Risk of incorrect Rejection Correct
Decision
10.10 The effect of stratification on sample sizes
Is the process of dividing populations into sub-populations. Thus allowing the auditor to direct his efforts towards items considered to contain the greater monetary error.
The principal advantage of stratified sampling is that it produces sub-populations that are individually more homogeneous, thus decreasing the sample size required to accomplish the audit objectives.
This serves to reduce the variability of the sampling units within each stratum
10.11 Interrelationships between concepts such as sample size, confidence, expected error rates and precision
Attribute sampling:
Increase in required confidence -> Increase in sample size
Increase in expected error rate -> Increase in sample size
Increase in precision -> Decrease in sample size.
Variable sampling:
Increase in required confidence -> Increase in sample size
Increase in standard deviation -> Increase in sample size
Increase in precision -> Decrease in sample size.
10.12 Methods of making selections
Random number sampling:
offers every item in the population an equal chance of selection
involves using random number tables or computer generated random numbers
is facilitated when items in the population are consecutively numbered
although the same number may be selected twice , practically this method uses no replacement sampling , therefore it may result in a larger sample. Thus it is considered to be a conservative approach.
Interval sampling:
simply means " selecting items at intervals "
used when random number sampling is inappropriate
simple to use
Stratified sampling
involves arranging the population to provide greater sampling efficiency
the population will be separated into two or more strata
samples are then taken from each strata
stratification ,allows for smaller sample sizes and controls distortion
Cluster sampling
used when documents or records are dispersed or scattered and other methods are to time consuming or costly
as its name suggests this method simply involves the selection of clusters instead of individual items.
SUMMARY
In dealing with audit sampling, the auditor should keep these ten commandments in mind:
1. Know the principles of scientific sampling, but use them only when they best fit the audit objectives.
2. Know the population, and base audit opinions only on the population sampled.
3. Let every item in the population have an equal chance of being selected.
4. Do not let personal bias affect the sample.
5. Do not permit patterns in the population to affect the randomness of the sample.
6. Do not draw conclusions about the entire population from the purposive or directed (judgement) sample, even though it does have its place.
7. Base estimates of maximum error rates on what is reasonable in the real world; try to determine at what point alarms would automatically go off.
8. Stratify wherever it would appear to reduce variability in the sample.
9. Do not set needlessly high reliability goals (confidence level and precision). Controls, supervision, feedback, self-correcting devices, and management awareness and surveillance should all be considered in seeking to reduce the extent of the audit tests.
10. Do not stop with statistical results; know why the variances occurred.
2be999f4ae546c42e21e239e10084de252518679
=PHASE 3: AUDIT SAMPLING AND AUDIT TESTING=
==Introduction==
The testing phase follows the evaluation phase where the focus was on the control framework with the aim of gauging some assurance about the system operation and achievement of objectives by analysing risk. But with documentation providing only limited verification of its operation, the conclusions provided to this point are not final, hence the need for further testing - to supply the auditor with the necessary proof to support the audit opinion.
The approach taken to designing the tests will depend on the nature of the audit problem. There is, however, one guiding principle which needs to be borne in mind when framing the tests. Does the direction of the test achieve the desired audit objectives?
The objectives of each verification test must be clearly defined prior to developing a comprehensive audit program. These objectives and the nature of the policy, system or activity under review will determine the verification techniques required. A wide range of techniques can be selected for developing the audit program, including:
* transaction testing
* personal observation and enquiry
* report and data analysis
* independent or third party confirmation or interviews
* comparison and analysis of costs or data for similar activities
It is expected that auditors will, where circumstances warrant it, utilise statistical sampling techniques in the testing phase of their audits.
==Why sample? When to sample and when not to==
Sampling is the application of audit procedures to less than 100% of the population.
It involves selecting a portion of the population and using the characteristics of that portion to draw inferences about the entire population.
There are two basic approaches:
* statistical
* judgemental (or non-statistical)
Consider the objectives to be achieved by sampling:
* compliance testing
* substantive testing
Factors in deciding when to use or not to use sampling include:
* the objectives
* the data or information involved
* access to computer facilities
* cost/benefit
* other methods of testing that could achieve the desired results
==Types of samples, their features, use and benefits==
Attribute sampling:
* used for compliance testing
* estimates the rate of deviations in the population
* gives a statistical result from which inferences about the population can be drawn
Variable sampling:
* used for substantive testing
* estimates the dollar amount of error in a population
* easier to use
Other sampling methods include:
* mean-per-unit sampling
* probability-proportional-to-size sampling
* difference estimation sampling
* stop-go sampling
==Key Terms in Sampling==
Sampling - the process of applying audit procedures to less than 100% of the population.
Population - the data set from which the sample is taken to assist in reaching a conclusion. The individual items that make up a population are known as Sampling Units.
Sampling Risk - the risk that the auditor's conclusion based on a sample differs from the conclusion that would be reached if the whole population were examined.
Non-Sampling Risk - the risk not specifically caused by sampling, i.e. the risk that incorrect procedures are being used.
Confidence level or reliability - The percentage of times that one would expect the sample to adequately represent the population. Thus, a confidence level of 90% should result in samples that adequately represent the population 90% of the time. Confidence level is related to audit risk because the auditor is accepting a risk of 10% (100% - 90%) that the sample will not represent the population. In sampling for variables (substantive testing), the primary concern is the risk of incorrect acceptance: the risk that the sample supports the conclusion that the assertion tested is not materially misstated when it actually is materially misstated. In sampling for attributes (tests of controls), the primary concern is the risk of assessing control risk too low: the risk that the assessed level of control risk based on the sample is less than the true operating effectiveness of the internal control structure policy or procedure. These wrong conclusions are also called Type II or beta errors.
Precision or confidence interval - An interval around the sample statistic (for example, the mean) within which one expects the true value of the population to fall. Precision is based upon tolerable misstatement determined by materiality considerations. In sampling for attributes (tests of controls), precision is determined by subtracting the expected deviation rate from the tolerable rate. In sampling for variables (substantive testing), precision is determined by considering tolerable misstatement in conjunction with the confidence level (an effectiveness issue) as well as the risk of incorrect rejection. The risk of incorrect rejection is the risk that the sample indicates that the assertion tested is materially misstated when in fact it is not misstated (termed a Type I or alpha error). It relates to efficiency issues because the auditor will likely continue auditing until the balance is finally supported. A table is typically consulted to determine the appropriate precision for various risk levels. A rule of thumb often used is to set precision at 50% of tolerable misstatement.
Alpha (Type I) error is the rejection of a correct hypothesis. The risk is incorrect rejection of an assertion and the risk of assessing control risk too high both relate to alpha error. These risks are aspects of sampling risk that involve efficiency issues.
Beta (Type II) error is the failure to reject an incorrect hypothesis. The risk of incorrect acceptance of an assertion and the risk of assessing control risk too low both relate to beta error. These risks are aspects of sampling risk that involve effectiveness issues.
Sampling without replacement means not returning a sample item to the population to prevent its being selected more than once. Audit sampling is customarily done without replacement.
Sampling with replacement means returning a sample item to the population so that it has a chance to be chosen more than once.
Standard deviation is a measure of the degree of compactness of the values in a population. This measure is used by the auditor to help determine appropriate sample sizes. The first formula below is the population standard deviation (σ): the square root of the sum of the squared deviations from the mean (μ), divided by the number of items in the population (N). The sample standard deviation (s) is found using the second formula, where x̄ is the sample mean and n is the sample size.
<math>\sigma = \sqrt{\frac{\sum (x_i - \mu)^2}{N}}</math>
<math>s = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n - 1}}</math>
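As a quick numerical check of the two formulas, here is an illustrative Python sketch; the function names and sample data are choices made for this example, not part of the manual:

```python
import math

def population_sd(values):
    """Population standard deviation: sqrt(sum((x - mu)^2) / N)."""
    n = len(values)
    mu = sum(values) / n
    return math.sqrt(sum((x - mu) ** 2 for x in values) / n)

def sample_sd(values):
    """Sample standard deviation: sqrt(sum((x - xbar)^2) / (n - 1))."""
    n = len(values)
    xbar = sum(values) / n
    return math.sqrt(sum((x - xbar) ** 2 for x in values) / (n - 1))

# Illustrative data only; note s > sigma because of the n - 1 divisor.
data = [10, 12, 9, 11, 13, 8, 12, 11]
print(population_sd(data))
print(sample_sd(data))
```

Dividing by n − 1 rather than n compensates for estimating the mean from the same sample, which is why the sample statistic is slightly larger.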
Hypothesis testing involves a predetermined rule for evaluating the auditee's assertion. Strictly speaking, the auditor either rejects the hypothesis or is unable to do so. In auditing literature, however, a hypothesis that cannot be rejected is often said to be "accepted". This usage is followed even though a sample is never a sufficient basis for concluding that the hypothesis is in fact true. In hypothesis testing, the auditor determines the acceptable risk of incorrect rejection and the acceptable range (the nonrejection region) about the auditee's value. The auditor "accepts" (is unable to reject) this value if the sample value is within the auditor's prespecified materiality limits about the auditee's assertion. The auditor rejects the auditee's figure if the sample precision interval is sufficiently outside the precision limits. The sample tests the null hypothesis that there is a null or zero difference between the true value and the assertion. The null hypothesis will consist of an equality (H<sub>0</sub> = a given value) if a two-tailed test is involved (relatively large or small values will be rejected). If a one-tailed test is involved (extreme values on one side can be ignored), the null hypothesis will be an inequality (H<sub>0</sub> ≤ or ≥ a given value).
Judgment (nonstatistical) sampling uses the auditor's subjective judgment to determine the sample size (number of items examined) and sample selection (which items to examine). This subjectivity is not always a weakness. The auditor, based on other audit work, may be able to test the most material items and to emphasise the types of items subject to high control risk. The auditor's working relationship with managers means that in many audits (particularly those of a non-financial nature) small-scale judgement sampling may be sufficient. A judgementally selected sample cannot be projected onto the full 'audit' population from which the sample is drawn, but may be sufficient when the purpose of the testing does not require mathematically rigorous proofs, extrapolation to the full population, or accurate estimates of error rates, or where management has waived the need for scientific testing.
Probability (random) sampling provides an objective method of determining sample size and selecting the items to be examined. Unlike judgment sampling, it also provides a means of quantitatively assessing precision (how closely the sample represents the population) and reliability (confidence level, the percentage of times the sample will reflect the population).
Tolerable rate is the maximum rate of deviations from a prescribed internal control structure policy or procedure that the auditor is willing to accept without changing his/her assessment of control risk for the assertions related to the policy or procedure.
==Judgmental and Statistical Sampling==
Both types of sampling have in common a reliance on judgement for planning, executing the plan and evaluating the results.
Both methods are subject to sampling and non-sampling risk.
The critical difference between the two is that in statistical sampling the laws of probability are used to control sampling risk.
==Judgmental v. Statistical Sampling==
'''STATISTICAL SAMPLING'''
{|
! ADVANTAGES !! DISADVANTAGES
|-
| allows sample size to be set at a minimum || requires the use of a random sample, which may be costly and time consuming
|-
| objective method of determining sample size, sampling risk and evaluating the sample || may require additional training of staff
|-
| allows more control over sampling risk || many audit areas are not extensive enough to warrant its full use
|-
| easy to use if access to computers is available ||
|-
| allows for specific levels of reliability (confidence) and degree of precision (materiality) ||
|}
'''JUDGEMENTAL SAMPLING'''
{|
! ADVANTAGES !! DISADVANTAGES
|-
| allows the auditor to use his judgement, e.g. on high-risk items || cannot quantitatively measure sampling risk
|-
| may be more cost effective than statistical sampling || cannot draw statistical inferences from the sample results
|-
| || presents the risk of either under-auditing or over-auditing
|-
| || may be inappropriate for inexperienced staff
|}
==Calculating Sample Sizes==
===Dollar Unit Sampling (DUS): Assumes No Errors in Sample===
This method may also be known as probability-proportional-to-size (PPS) sampling, cumulative monetary amount (CMA) sampling, and by many other names.
DUS relies on an attribute sampling approach to express deviations in dollar amounts rather than as a deviation rate.
The formulae for determining the sample size and skip interval in DUS are:
Equation 1:
<math>n = \frac{BV \times RF}{TE}</math>
Equation 2 (equivalent form):
<math>n = \frac{RF}{TE/BV}</math>
Equation 3:
<math>I = \frac{BV}{n}</math>
where
* n = sample size
* BV = book value of items tested
* RF = reliability factor
* TE = tolerable error
* I = skip interval
'''DUS Reliability Factors'''
{|
! Reliability !! Required RF
|-
| 99% || 4.605
|-
| 95% || 2.996
|-
| 90% || 2.300
|}
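These factors match, to rounding, the zero-error Poisson factor RF = −ln(1 − confidence) commonly used in DUS/PPS planning; this short sketch (the function name is mine, and the derivation is an observation, not stated in the manual) reproduces the table:

```python
import math

def reliability_factor(confidence):
    """Zero-error DUS reliability factor: RF = -ln(1 - confidence)."""
    return -math.log(1 - confidence)

for c in (0.99, 0.95, 0.90):
    print(f"{c:.0%}: {reliability_factor(c):.3f}")
# 99%: 4.605, 95%: 2.996, 90%: 2.303
# (the table's 2.300 appears to be a truncation of 2.303)
```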
EXAMPLE:
Assume:
* BV = $5,000,000
* N = 2,000
* TE = $250,000
* CL = 95% (confidence level, so RF = 2.996)
(A)
<math>n = \frac{BV \times RF}{TE} = \frac{\$5M \times 3.0}{\$250{,}000} = 60</math> (rounding RF to 3.0)
(B) Skip interval:
<math>I = \frac{BV}{n} = \frac{\$5M}{60} = \$83{,}333</math>
(C) If there are no errors in the sample, the auditor is 95% confident that the overstatement does not exceed $250,000.
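The DUS plan above can be reproduced in a few lines of Python; this is an illustrative sketch of Equations 1 and 3 (the function and variable names are mine):

```python
import math

def dus_plan(book_value, tolerable_error, rf):
    """Return (sample size, skip interval) per DUS Equations 1 and 3."""
    n = math.ceil(book_value * rf / tolerable_error)  # round sample size up
    interval = book_value / n
    return n, interval

# Worked example from the text: BV = $5,000,000, TE = $250,000, RF rounded to 3.0
n, interval = dus_plan(5_000_000, 250_000, 3.0)
print(n, round(interval))  # 60 83333
```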
EXAMPLE (attribute sampling):
Assume:
* Z = 1.96 (95% confidence)
* N = 1000 (population size)
* A = ±2% (precision)
* p = 5% (estimated error rate)
(1)
<math>n_{(e)} = \frac{1.96^2 \times 0.05 \times (1 - 0.05)}{0.02^2} = 456</math>
(2)
<math>n_{(f)} = \frac{n_{(e)}}{1 + (n_{(e)}/N)} = \frac{456}{1 + (456/1000)} = \underline{313}</math>
===Attribute Sampling===
* generally used for compliance testing
* used to estimate the rate of deviations from prescribed control procedures in a population
The formula has two parts. The first formula is:
<math>n_{(e)} = \frac{Z^2 \, p(1 - p)}{A^2}</math>
where
* n<sub>(e)</sub> = first estimate of sample size
* Z = standard deviation factor
* p = occurrence rate
* A = desired precision
* N = population size
The second formula uses the first estimate of sample size and adjusts it to fit the population:
<math>n_{(f)} = \frac{n_{(e)}}{1 + (n_{(e)}/N)}</math>
where
* n<sub>(f)</sub> = final sample size
* n<sub>(e)</sub> = first estimate of sample size
* N = population size
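The two formulae can be applied directly; this sketch (function name mine) reproduces the worked figures used earlier in the chapter (Z = 1.96, p = 5%, A = ±2%, N = 1000):

```python
def attribute_sample_size(z, p, a, population):
    """First estimate n(e), then the finite-population adjustment n(f)."""
    n_e = (z ** 2) * p * (1 - p) / (a ** 2)
    n_f = n_e / (1 + n_e / population)
    return round(n_e), round(n_f)

print(attribute_sample_size(1.96, 0.05, 0.02, 1000))  # (456, 313)
```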
===Variable Sampling===
Equation 1: Calculate the standard deviation of the test sample
<math>s = \sqrt{\frac{\sum x^2 - (\sum x)^2 / n}{n - 1}}</math>
Equation 2: Estimate the required sample, assuming an infinite population
<math>n_{(e)} = \left( \frac{Zs}{A} \right)^2</math>
Equation 3: Adjust the sample size for the finite population
<math>n_{(f)} = \frac{n_{(e)}}{1 + (n_{(e)}/N)}</math>
Equation 4: Calculate the sampling error (error in sample per unit sampled)
<math>A = \pm Z \frac{s}{\sqrt{n}} \sqrt{1 - \frac{n}{N}}</math>
Equation 5: Estimate the total population error
<math>A_P = \pm (A \times N)</math>
where
* s = standard deviation of the sample
* Σ = sum of
* x = value of each sample item
* n = sample size
* N = population size
* Z = standard deviation factor
* A = precision (sampling error) in $
Generally this approach is used for substantive testing.
It is used to estimate the total dollar value of a population, or the total dollar amount of errors in a population.
(1) VS example - sample size
* [Pop. size] N = 5000
* [Pop. value] = $500,000
* [Desired confidence] C = 90%
* [Desired std. dev. factor] Z = 1.645
* Std. dev. of sample s = $40, from 200 items selected at random
* Desired precision = 4%
* Desired precision of population = $20,000
* Desired precision per unit A = ±$4
Thus
<math>n_{(e)} = \left( \frac{Zs}{A} \right)^2 = \left( \frac{1.645 \times 40}{4} \right)^2 = 271</math>
<math>n_{(f)} = \frac{n_{(e)}}{1 + (n_{(e)}/N)} = \frac{271}{1 + (271/5000)} = \underline{257}</math>
Sample size = 257, so select another 57 items beyond the 200 already drawn.
(2) VS example - precision of estimates
* n<sub>(f)</sub> = 257, N = 5,000
* BV = $27,000, Z = 1.645 (90%)
* AV = $23,130 (total audited value of the sample)
* s = $40
(2.1) Average unit value AU = 23,130 / 257 = $90; estimated inventory value = $90 × 5000 = $450,000
(2.2)
<math>A = Z \frac{s}{\sqrt{n}} \sqrt{1 - \frac{n}{N}} = 1.645 \times \frac{40}{\sqrt{257}} \times \sqrt{1 - \frac{257}{5000}} = \pm \$4 \text{ unit precision}</math>
(2.3) ±$4 × 5000 = ±$20,000 = population precision
(2.4) Estimated inventory value ranges from $430,000 to $470,000
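Equations 2 to 4 can be checked numerically against the example figures (Z = 1.645, s = $40, A = ±$4, N = 5000); this is an illustrative sketch with names of my choosing:

```python
import math

def vs_sample_size(z, s, a, population):
    """Equations 2 and 3: infinite-population estimate, then adjustment."""
    n_e = (z * s / a) ** 2
    return round(n_e / (1 + n_e / population))

def vs_unit_precision(z, s, n, population):
    """Equation 4: achieved precision per unit sampled."""
    return z * (s / math.sqrt(n)) * math.sqrt(1 - n / population)

n = vs_sample_size(1.645, 40, 4, 5000)
a = vs_unit_precision(1.645, 40, n, 5000)
print(n)         # 257
print(round(a))  # 4, i.e. about +/- $4 per unit, +/- $20,000 over N = 5000
```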
===Discovery Sampling===
This method is used when the auditor is examining a population for fraud, or where gross errors are expected.
It involves setting two parameters:
* the critical rate of deviation (maximum errors allowed / population)
* the probability
In this case it is necessary to use discovery sampling tables.
==Effect of risk analysis on the extent of substantive testing==
Risk of incorrect rejection:
* the risk that the sample tells the auditor that the balance is materially misstated when in fact it is not misstated
* this risk affects the efficiency of the audit, as it generally results in more testing having to be performed
Risk of incorrect acceptance:
* the risk that the sample tells the auditor that the balance is not misstated when in fact the balance is misstated
* this may have a serious effect on the audit, as it may result in the wrong opinion being issued
Both of the above risks have an inverse effect on sample sizes; that is, a lower acceptable risk level will result in a higher sample size.
==Substantive Test Risk Matrix==
The rows show the decision indicated by the evidence; the columns show what the relevant account balance in fact is:
{|
! !! Fairly Stated !! Not Fairly Stated
|-
| Accepted || Correct decision || Risk of incorrect acceptance
|-
| Rejected || Risk of incorrect rejection || Correct decision
|}
==The effect of stratification on sample sizes==
Stratification is the process of dividing a population into sub-populations, allowing the auditor to direct his efforts towards the items considered to contain the greater monetary error.
The principal advantage of stratified sampling is that it produces sub-populations that are individually more homogeneous, reducing the variability of the sampling units within each stratum and thus decreasing the sample size required to accomplish the audit objectives.
==Interrelationships between concepts such as sample size, confidence, expected error rates and precision==
Attribute sampling:
Increase in required confidence -> Increase in sample size
Increase in expected error rate -> Increase in sample size
Increase in precision -> Decrease in sample size.
Variable sampling:
Increase in required confidence -> Increase in sample size
Increase in standard deviation -> Increase in sample size
Increase in precision -> Decrease in sample size.
==Methods of making selections==
Random number sampling:
* offers every item in the population an equal chance of selection
* involves using random number tables or computer-generated random numbers
* is facilitated when items in the population are consecutively numbered
* although the same number may be selected twice, in practice this method is applied without replacement, which may result in a larger sample; it is thus considered a conservative approach
Interval sampling:
* simply means "selecting items at intervals"
* used when random number sampling is inappropriate
* simple to use
Stratified sampling:
* involves arranging the population to provide greater sampling efficiency
* the population is separated into two or more strata
* samples are then taken from each stratum
* stratification allows for smaller sample sizes and controls distortion
Cluster sampling:
* used when documents or records are dispersed or scattered and other methods are too time consuming or costly
* as its name suggests, this method simply involves the selection of clusters instead of individual items
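"Selecting items at intervals" is commonly implemented as systematic selection with a random start; the sketch below is a minimal illustration (the function name and the invoice data are hypothetical, not from the manual):

```python
import random

def interval_sample(items, sample_size):
    """Systematic (interval) selection: random start, then every k-th item."""
    k = len(items) // sample_size        # skip interval
    start = random.randrange(k)          # random start within the first interval
    return items[start::k][:sample_size]

invoices = list(range(1, 1001))          # e.g. invoice numbers 1..1000
picked = interval_sample(invoices, 50)
print(len(picked), picked[1] - picked[0])  # 50 20
```

The random start keeps every item eligible for selection, but note the method's weakness mentioned above: any pattern in the population that coincides with the interval will bias the sample.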
==SUMMARY==
In dealing with audit sampling, the auditor should keep these ten commandments in mind:
1. Know the principles of scientific sampling, but use them only when they best fit the audit objectives.
2. Know the population, and base audit opinions only on the population sampled.
3. Let every item in the population have an equal chance of being selected.
4. Do not let personal bias affect the sample.
5. Do not permit patterns in the population to affect the randomness of the sample.
6. Do not draw conclusions about the entire population from the purposive or directed (judgement) sample, even though it does have its place.
7. Base estimates of maximum error rates on what is reasonable in the real world; try to determine at what point alarms would automatically go off.
8. Stratify wherever it would appear to reduce variability in the sample.
9. Do not set needlessly high reliability goals (confidence level and precision). Controls, supervision, feedback, self-correcting devices, and management awareness and surveillance should all be considered in seeking to reduce the extent of the audit tests.
10. Do not stop with statistical results; know why the variances occurred.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
RIAM:VLA:AUDIT REPORTING PROCEDURES
=PHASE 4: REPORTING PROCEDURES=
==Introduction==
Internal audit reports communicate audit findings to management in order to assist it in monitoring the efficiency, economy and effectiveness of the organisation's operations, to improve the control framework and to ensure compliance with established policies, plans and procedures. The report should supply the manager with the information necessary to take or to initiate corrective action on any deficiencies reported. Effective monitoring of the control process depends on the flow of timely, accurate, concise and relevant information to managers, set out so that matters requiring attention can be easily seen and acted upon.
A further aspect of reporting is the updating of the Strategic Audit Program in accordance with Section 4.5 of the manual, that is the preparation of the SAP Update work paper.
==RIAM Reporting Process==
===GENERAL GUIDE - THE RIAM REPORTING PROCESS===
{|
! STAGE !! ACTION
|-
| Discussion Paper || Issued after field work to address findings: circulated to program management before exit interview
|-
| Exit Interview || To review audit findings with operational staff, to correct errors of fact and to provide an opportunity to suggest rephrasing
|-
| Draft Report || Draft report prepared after exit interview. Management comments may be included in draft report. Draft report circulated to program management for final comment
|-
| Final Report || Locality audits to be signed by SAMs or by CO Assistant Directors, depending on location. Final report to be provided to the auditee in advance of wider circulation within the department. SAMs to circulate to the State Director, other SAMs, and to the Director, Internal Audit. CO Assistant Directors to provide to SAMs and to the Director, Internal Audit. The AS, RMCB, will provide to the Secretary and to CAAC.
|}
National program reviews should be provided to the auditee in advance of wider circulation within the organisation. The report will be co-signed by the audit manager, whether a SAM or CO Assistant Director, and the Director, Internal Audit. The AS, RMCB, will distribute the report to the Secretary, CAAC, FASs and State Directors.
===Program Reviews - State Based Activity===
###REFER PROGRAMME REVIEWS###
===Locality Reviews===
{|
| Audit Field Work || Field work completed
|-
| Discussion Paper || Address findings from field work
|-
| Exit Interview || Discuss findings with operational staff
|-
| Exit Interview Record || Include management comments
|-
| Draft Report || Distributed to State Director and relevant Area/Branch Manager<br>
Obtain management comments
|-
| Final Report || Including management comments. To be signed by the State Audit Manager and Project Officers. Distributed to other State Audit Managers, Assistant Secretary - RMCB, each State Director and relevant CO Division Head.<br>
Prepare and submit to Director Internal Audit SAP Update work paper
|}
===Outlet Audits===
{|
| Audit Field Work || Field work completed
|-
| Exit Interview || Discuss findings with operational staff
|-
| Exit Interview Record || Include management comments. Provide Outlet Manager with copy before leaving. Copy to Regional Manager
|-
| Report || Compile findings from all Units reviewed within the Area. Signed by State Audit Manager and Project Officer. Distributed to Area Director, State Director, other State Audit Managers and Assistant Secretary RMCB. Prepare and submit to Director Internal Audit SAP Update work paper
|}
==Standard Report Structure (See Attachment)==
Ultimately the product produced and of greatest significance to management is the report. Reporting should be standardised to ensure consistency of structure, coverage, presentation, language and quality.
Reports should have the following structure:
<ol>
<li> '''Title page'''. This should include names of auditors and date of report<br>
<li> '''Table of contents''' or index.<br>
<li> '''Executive Summary'''<br> A one-page executive summary with the report title printed at the top of the page. This executive summary should be written to be easily understood by busy people who may not have any knowledge of the subject matter of the audit report. It should present the focus questions and answers and, where a "no" or qualified opinion is offered, summarise the reason. Finally, it should summarise the general audit opinion, giving brief mention to positive and negative findings.
<li> '''Executive Briefing'''<br>
Provides a summary of the purpose, objectives, assertions, approach, scope, boundary, the overall opinion, key findings and issues arising, and summary of agreed actions.
<br>
<li> '''Objectives and Approach'''<br>
Addresses the "How" and "Why" of the review, and defines the assertions on which the conclusions and findings are based.
<br>
<li> '''Scope and Boundary'''<br>
Clearly defines the matters covered by the review, and most importantly the matters excluded from the review.
<br>
<li> '''Brief Description of the System Reviewed'''<br>
Covers the background to the audit: a description of the program or activity audited, including the purpose of the Section/Systems, the people and organisation structure, the principal activities of the Section/Systems, documents and records (both manual and computer), and the reports produced from and to the Section/Systems. Inclusion of this description facilitates understanding of the issues and assists other readers to judge whether the report is applicable to their area of responsibility; identification of the Division, Branch and Section audited; reference to external and other internal audits of the same area in the last two years, with brief mention of their major findings.
<br>
<li> '''Checklist of Findings, Recommendations and Action Plans'''<br>
Presents in Landscape form a summary of the findings and recommendations in section 6 under the headings: "Findings" and "Recommendations". Tables include boxes for Action Plans to be referenced or detailed. This section assists in monitoring and following up responses to audit recommendations by the Audit Committee.
<br>
<li> '''Detailed Findings and Recommendations'''<br>
Positive and negative findings should be recorded so that the report is balanced. Negative findings (those which suppress an Assertion) should be reported in much greater detail than positive findings (those which sustain an Assertion).
<br>
<br>
The findings and recommendations have a standard structure:<br>
* Observation
** The observed facts, relevant legislation, directions and industry relevant information.
<br>
* Implications and Risks
** Assertions suppressed or supported.
** Principal risks and exposures.
** Arguments in favour of, or reasons for, the breach and audit's comment.
** Summation of audit's conclusion as to risk or exposure.
<br>
* For convenience, it is advisable to have the implication and risk on the same page as the relevant finding.
<br>
* Recommendations
** Numbered, clear, specific and relevant recommendations for action.
** Where alternatives are identified either by audit or the client they are presented and evaluated.
<br>
* Management Comment
** Management's response to the issues raised, action taken/to be taken, and the officer to whom it is assigned. After discussion and exit interviews, all (or at least practically all) of your recommendations should be accepted by management. If not, you have not done your job correctly!
<br>
<br>
Appendices are included as appropriate to:
* Document systems
* Checklist Findings and Recommendations with an Action Plan, or Action Plan blank form
* Report data anomalies detected during testing
* Explain complex concepts or definitions
* Provide general discussion of management related issues or management theory which may assist management in decision making
</ol>
==Level of Detail and Alternative Structures==
The basic principle is that the method of reporting should be tailored to the situation and the target audience. The standard structure of section 3 should be varied where circumstances, or the needs of the report's audience, dictate. The RIAM Introduction to Internal Audit demonstrates different approaches to reporting. The intention here is not to restrict the method of presentation but to provide a default standard to be used except where approved by Internal Audit management. Other standards may also be adopted from time to time and will be incorporated in the manual. ###INSERT REFERENCE### contains a set of worked example reports.
The first and most relevant variation in the standard applies to the level of detail presented to various levels of management. An "Observation, Implication & Risks, Recommendation, and Management Comment" format as standardised above needs careful use when reporting to senior management. The format is ideally suited to low level or line managers as it focusses their attention on administrative detail. Senior managers, however, have an interest in strategic issues and reports that show an analysis of problems in terms of their risk and root causes.
The second variation to the standard also relates to the level of detail and the level of management to whom the report is targeted. A report usually has multiple management audiences, and the standard above reflects that fact: readers from senior management through line management will each find a section that presents the information at the level of detail they require, from the executive summary through the detailed findings and recommendations. Where the audience is not so widely distributed, a structure with more specific targeting may be appropriate.
==Action Plans - The basis for Follow Up==
Among the suggested appendices is one that presents either an action plan or a blank form upon which management can write an action plan. The Action Plan either as a blank or completed form has the following features:
* It provides a checklist of all observations/findings & recommendations in the detailed part of the report. (Watch the detail here - there shouldn't be any!);
* Matches the findings and recommendations to management's response (action);
** Presents the detail for the management actions in the form:
{| border=1
!Finding No !! Finding !! Recommendation !! Proposed Action !! By Whom !! By When !! Complete
|}
* This provides a rounding to the reports that allows management to pick up on key events and dates for their own review of the progress on implementing changes arising from audits; and
* Facilitates systems excellence by encouraging the planning and implementation of appropriate corrective action while the issues are still fresh in the minds of management by providing them with the key planning document.
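As an illustration only (the field names simply mirror the table headings above; nothing here is prescribed by the standard), an Action Plan row can be held as a plain record, which makes follow-up monitoring of outstanding items straightforward:

```python
from dataclasses import dataclass

@dataclass
class ActionPlanItem:
    """One row of the Action Plan table; fields mirror the column headings."""
    finding_no: int
    finding: str
    recommendation: str
    proposed_action: str
    by_whom: str
    by_when: str          # target date, e.g. "2024-06-30" (format is an assumption)
    complete: bool = False

def outstanding(items):
    """Return the items management has not yet signed off as complete."""
    return [i for i in items if not i.complete]

# Invented example data for illustration only.
plan = [
    ActionPlanItem(1, "No delegations register", "Maintain a register",
                   "Register created", "J. Smith", "2024-03-31", True),
    ActionPlanItem(2, "Unreviewed reconciliations", "Institute monthly review",
                   "Schedule monthly sign-off", "A. Jones", "2024-06-30"),
]
print([i.finding_no for i in outstanding(plan)])  # → [2]
```

A sketch like this is one way an audit team might track the "By When" and "Complete" columns between the report issue date and the follow-up review.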
==Standards for Reports==
General standards for reports are that they should be:
* Accurate
* Clear
* Concise
* Courteous
* Simple in style
* Timely
A detailed discussion of the rules and standards for writing reports is included in [[Report Writing]].
Specifically, the differences between good and bad reports include the following:
{| align=center
! !!Good Reports !! Bad Reports
|-
|1.||Easy to read||Hard to read
|-
|2.||Gives background to audit & refers to other audit reports|| No background
|-
|3.||Conclusions justified||Unjustified conclusions
|-
|4.||Technical details in appendix (including method)
||Technical details in report
|-
|5.||Identifies major & minor findings||Does not identify most important findings
|-
|6.||Has views for each major finding||Does not identify views for each major finding
|-
|7.||Identifies who is responsible for each action||Does not identify action
|-
|8.||Good timing||Bad timing
|-
|9.||Recognises multiple audiences||Does not recognise multiple audiences
|-
|10.||Has one page executive summary||Does not have executive summary
|-
|11.||Demonstrates controlled change process||Does not demonstrate controlled change process
|}
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
1e0b747590ab6666cf7394c43d95d83edeab8420
RIAM:VLA:IA REVIEW AND QUALITY ASSURANCE
0
360
553
2019-09-10T11:39:29Z
Bishopj
1
Created page with "=PHASE 1 to 4: REVIEW AND QUALITY ASSURANCE= ==Introduction== Quality Assurance, is the review and control necessary throughout all stages of an internal audit. The final r..."
wikitext
text/x-wiki
=PHASE 1 to 4: REVIEW AND QUALITY ASSURANCE=
==Introduction==
Quality Assurance is the review and control necessary throughout all stages of an internal audit. The final review should ensure that:
* the direction of the audit has proved appropriate
* the audit approach has been satisfactory
* the audit has been correctly documented so as to support the audit findings and recommendations
Quality Assurance is a review process which, if correctly applied, will provide assurance to both audit and client management that defined auditing practices and standards have been complied with. Quality Assurance also enables the early identification of non-compliance with prescribed procedures, and allows audit management to determine the appropriateness of such procedures. Another feature of the Quality Assurance process is the identification of training needs for field auditors, including project team managers.
Quality Assurance is not the sole prerogative of the audit manager; rather, it is a process which involves all participants, from the field auditor and project team leader, through the audit manager, to the Corporate Audit Committee and, to a lesser extent, the clients.
In this article we focus on quality assurance as a review mechanism, but it should be remembered that under total quality management principles, we conduct quality assurance throughout the project. Its components in this form include:
* Specification of the methods;
* Training and refreshing skills of staff;
* Recruitment of appropriate staff;
* Feedback and suggestions from staff and clients as to process improvements;
* Establishment of a permanent quality assurance/improvement review committee focussed on process improvement;
* Monitoring of audit process performance measures;
* Adoption of processes that control and manage throughput and production point queues (like in-trays!!);
* Automatic escalation of alerts on system or process failure; and
* On going and active direction by the management team.
The portion of quality assurance described here is specifically focussed on review stage quality assurance which is specifically addressed and required by the auditing standards.
==Responsibility for Review and Quality Assurance==
Under the Annual Audit Plans, there are three levels of audit activity. These are:
* Program Reviews, which have a national or state-wide focus
* Locality Audits, which are oriented to activities in a single location
* Outlet Audits, which are audits of individual outlet offices
Review and Quality Assurance should be applied to all audits, by the appropriate officers, irrespective of the type of audit being undertaken. Naturally, the field auditor should at all times be aware of the importance of ensuring that work papers and audit documentation are of a high and presentable standard. This includes the need to ensure the next level of audit management takes an active role and interest in reviewing workpapers and progress accordingly.
At the Program Review level, the quality assurance should be undertaken by the Director, Internal Audit, the work having already been reviewed by the field auditor's project manager.
For Locality Audits the quality assurance would be performed by the State Audit Manager/Director Internal Audit.
Outlet Audits would be reviewed by the field auditor with input provided by other field auditors and the State Audit Manager.
The focus of all review and quality assurance action is to ensure specified audit objectives have been met, the scope and objectives are measured against the final product, working papers are adequate and can support findings and that there is evidence of a review process with outcomes being addressed satisfactorily by the field auditor.
==Quality Assurance through the Audit Flow==
Any audit is a planned and controlled constructive analysis of the strengths and weaknesses of an organisation's systems to gauge the extent to which they assist the organisation to achieve its objectives with adequate cost effective and efficient use of resources.
Quality Assurance is the process whereby the objectives are visited during and at the completion of an audit.
==Quality Assurance in the Audit Phases==
All audits can be considered to be undertaken in five major phases during which Quality Assurance should be applied. This will ensure early identification of any weaknesses and that the final product/report is not unnecessarily delayed.
Major phases of an Audit:
* Planning
* Documentation
* Analysis
* Verification/Testing
* Reporting
The Quality Assurance function should apply at the beginning, during and completion of each phase. A simple, efficient and effective check during each audit phase by the field auditor and audit leader/manager will greatly assist in ensuring the audit has been completed properly and that the product is a quality product readily accepted by client management.
Incorporation of the Quality Assurance process during each audit phase not only assists in identifying whether the field auditor is on track but in itself is a control mechanism.
Additionally, Quality Assurance highlights the areas of strength or weakness in a field auditor's work, so that corrective action can be initiated.
==Quality Assurance during Planning Phase==
The planning phase is critical to a successful audit outcome in that wasted time and application of resources may be avoided if in the initial stage all requirements are adequately planned and catered for.
The field auditor should identify the system objectives, specify the audit objectives and scope, have a clear picture of the system boundaries, formulate a brief system overview, allocate resources, itemise broadly the time required for each major phase of the audit and determine topical issues relevant to the system being audited. Also any major organisational policy relating to the system should be identified.
Each phase of the audit should have time and resources allocated. This will allow both the field auditor and the audit manager or project team leader to monitor the pace of the audit.
The audit manager should receive a verbal report of progress to date. This will allow the audit manager or project team leader to be aware of any issues, potential problems and guide the field auditor in the correct path.
Evidence of such activities should be included in the working papers under the planning phase.
During the planning phase, a review of previous audit reports and the status of recommendations from such reports would be performed by the field auditor.
Additionally, the basis of the entry interview will be prepared and checked with the audit manager. This should include aspects in relation to audit methodology.
==Quality Assurance during Documentation Phase==
The Quality Assurance applied during the Documentation phase essentially should centre on the outcome of the Entry Interview. Also included are User procedural guides and system narratives, legislation that is applicable, program notes and budget reports/extracts, previous DRT or ANAO audit reports and follow up action by users. The DRT Risk Assessment of DRT Programs 1990/91 and reports from a program evaluation which may include the actual system being audited should be included in the work papers. All these documents will assist the auditor in gaining an accurate overview of the system and its environment.
A walk-through of the system and notation of manual and compliance requirements should be performed during this phase. The walk-through gives the field auditor an opportunity to gain an understanding of the system and subsequently document it. It also lays the foundation for the preparation of a control model.
Evidence of walk-throughs and verification of the walk-through flowchart or narrative description by the users must be sighted by the audit manager or project leader as part of the Quality Assurance process.
The emphasis of the documentation phase is to ensure there is a sufficient basis for the field auditor to lay the foundation of the audit in order that the essential knowledge about the system and its environment are in the working papers. This allows the reviewer performing the Quality Assurance, to ascertain certain aspects of the system and to form an overview of the system itself. The documentation sets the background.
A brief index of documentation should be included in the working papers with areas of interest or relevant to the system being highlighted. This will provide the audit manager with an efficient reference to subject matter regarding the system.
==Quality Assurance during Evaluation Phase==
The Evaluation phase essentially is where the field auditor develops a control model. The control model can also take the form of a risk assessment matrix approach.
Data for the control model should be drawn from the walk-through of the system, outcome of the entry interview, reference to the documentation already collected and from standard control models.
The control model is simply the identification of controls and respective control objectives that an auditor would ideally like to see in place regarding a system. These are subsequently matched against the actual controls in place in the system. The subsequent verification process will identify the adequacy of control and identify weaknesses, strengths and potential exposures.
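The matching of ideal controls against actual controls described above is essentially a set comparison. As a minimal sketch (the control names are invented examples, not part of any standard model), the gaps and the compensating controls fall out as set differences:

```python
# Hypothetical ideal control model for a payments-style system (names invented).
ideal_controls = {
    "segregation_of_duties",
    "independent_reconciliation",
    "authorisation_limits",
    "audit_trail",
}

# Controls actually observed during the walk-through (also invented).
actual_controls = {
    "authorisation_limits",
    "audit_trail",
    "exception_reporting",   # a compensating control not in the ideal model
}

# Ideal controls missing in practice: potential weaknesses and exposures.
weaknesses = ideal_controls - actual_controls
# Extra controls in place: candidates for compensating-control analysis.
compensating = actual_controls - ideal_controls

print(sorted(weaknesses))    # → ['independent_reconciliation', 'segregation_of_duties']
print(sorted(compensating))  # → ['exception_reporting']
```

The second set matters because, as noted later in this section, a system that departs from a standard control model may still be well controlled through compensating features.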
Quality Assurance during this audit phase is critical as it is the basis for the field auditor to evaluate the system. The field auditor should have the draft control model reviewed by the project leader and subsequently the audit manager before embarking on the evaluation phase.
This will ensure the audit objectives and scope are on the right track and identify the field auditor's level of understanding in respect of defining control models.
The audit manager should examine the control model for its appropriateness in a rigorous fashion. Only when the control model is regarded as being satisfactory should the field auditor proceed to the actual evaluation phase.
A brief written statement on the final control model by the audit manager or project leader, to the effect that a review has been performed, should also be mandatory.
==Quality Assurance during Verification/Testing Phase==
The evaluation of the control model is carried out by analysing the results of applying the control model in the actual environment in which the system resides.
The comparison of ideal controls against actual controls in the system and its environment will provide the field auditor with sufficient data to form an opinion on the adequacy of controls.
It should be noted that even if the system does not completely relate to the control model it may in fact be well controlled and contain compensating control features. This often happens when a standard control model is used as such models are aimed at systems which have common features. The system being audited may not fit this scenario in terms of commonality.
The analysis and evaluation of the control model should be done with the assistance of specialist members of the audit staff. This Quality Assurance process will ensure all features relevant to the analysis of the adequacy, efficiency and effectiveness of controls in the system are appropriately covered.
Quality Assurance after completion of the evaluation of the control model is extremely important. It allows for an independent and objective view of the results to be formed by the project leader or audit manager. It is not a time consuming process and will identify areas where the field auditor has placed too much emphasis on a particular control feature or not enough. Quality Assurance at this stage also provides the opportunity to identify areas of the system which may need to be revised or where clarification of a control feature is required.
Essentially the Quality Assurance during this phase ensures the appropriateness of testing which follows and that the evaluation of the control model has been performed properly and with due regard to audit standards. Again, it also provides an indication of the level of experience and understanding of the field auditor. Conversely it is an education for the project or audit manager. These higher beings are not always correct.
The testing phase is also critical in that it ensures the field auditor can substantiate and validate the direction of the testing. This would be set out in the rationale statement, which is the auditor's opinion on the system of internal control and includes the details of, and justification for, the testing which will be carried out to confirm that opinion.
Quality Assurance must include an examination of the logic of the rationale and subsequent testing, and also be directed to ensuring test papers are appropriately indexed to each test and provide a comprehensive and meaningful basis for review.
Evaluation of test results from a Quality Assurance perspective provides a sound basis for ensuring the exit interview and draft report are based on accurate and tested findings.
==Quality Assurance during Reporting Phase==
Quality Assurance should be applied to the draft report prepared by the field auditor. This will allow for all findings to be revisited by the audit manager and any issues which may emerge as being potentially critical to acceptance of the report being identified and addressed properly in the report.
The audit manager should review the draft report with the objective of ensuring all findings can be supported and substantiated, that they are accurate and complete. The report must address real concerns and issues and not petty mundane matters. Recommendations are to be sound, cost efficient and of practical benefit to the users.
The above Quality Assurance process can be performed quickly in light of the previous Quality Assurance process during the other stages of the audit.
The review of the draft report is an opportunity for the audit manager to discuss with the field auditor any aspects in relation to perceived training needs as a result of the audit.
All amended copies of the draft should be retained as a management trail and to provide the audit manager with a basis for determining when and why amendments were made.
==Final Quality Assurance of an Audit==
The final Quality Assurance is ideally a complete review of the workpapers by the audit manager at the draft reporting stage. A review sheet at the front of the workpapers should be made available for this purpose.
The review sheet allows for the audit manager's comments to be written and, subsequently, the field auditor's response.
The Quality Assurance process at this stage also determines whether the field auditor has used prescribed forms and gives an indication of whether audit standards, procedures and methodologies are being correctly applied by audit staff.
==Process of Quality Assurance==
===NATIONAL AUDITS (Generally)===
The purpose of this document is to outline the proposed procedures to ensure a high quality professional standard is achieved and maintained throughout planning, fieldwork and reporting of the audit.
The proposed option for review and quality assurance of the audit work performed is as follows:
====First Review====
All State Audit Managers (SAMs) will be responsible for the review of all work papers produced within their respective State, except work they produce themselves. SAMs will complete Parts 1a, 1b and 1c of the QA Standard Working Papers when reviewing the work of their staff.
State audit managers will form a peer review group that will conduct random peer reviews of the audit papers of other SAMs according to a schedule to be issued annually. In these tasks, the SAMs will complete Part 1d.
The primary objective of this first review is to ensure adequate audit coverage was provided in the State of review to allow any other professional auditor to express a similar audit opinion based on audit evidence as presented.
====Second Review====
Where the National Audit consists of multiple national components (such as a Financial Statements Audit), a National Manager will be appointed for each component. Where the National Audit consists of only one component, proceed directly to the third review.
After the first review, all work papers for each audit component, when completed in a State or Central Office, should be copied and sent (in an overnight bag if required, otherwise through an online audit reporting system) to the relevant Audit Manager for that audit component item. The Audit Manager will perform the second review, Part 2 (of QA Standard Working Papers).
The objective of the second review is for the Audit Manager to determine whether sufficient audit coverage has been performed from a national perspective to facilitate the expression of a national audit opinion. This review should be performed prior to an audit opinion being expressed on the State returns.
====Third Review====
On the completion of the component audit reviews by the Audit Managers and finalisation of draft audit reports, each Audit Manager should forward work papers and draft reports to the National Co-ordinator. The National Co-ordinator will review all work on completion, complete the quality assurance checklists, and complete Part 3 - Third Review (of the QA Standard Working Papers).
The objective of the third review is to ensure sufficient audit evidence exists to support the content of the component audit report prepared by each Audit Manager. The National Co-ordinator is also to prepare and complete a quality assurance checklist which will verify that all review processes are complete, and all audit milestones attained and verified. The primary responsibility of the National Co-ordinator is ensuring the quality of audit work is of a sufficient standard to withstand any reasonable scrutiny.
====Quality Assurance====
=====National Level=====
The Director Internal Audit will have overall responsibility for all audit opinions expressed. To ensure reasonableness of the audit opinion expressed, a quality assurance review will be performed by the Director Internal Audit. The objective of this review is to ensure no major deficiencies exist in audit coverage, and the national audit opinion is sufficiently supported. The Director IA will complete Part 4 - QA (of the QA Standard Working Papers).
=====Global Level=====
For Global Audits, National Directors of Internal Audit will meet with the International Director on a quarterly or bi-monthly basis (as appropriate) and coordinate global opinions based on the national audit opinions. Issues will be graded according to the level of global (as opposed to national) risk, and matters specific to individual countries, but not of global implication, will be separated from the global issues into national sections of the global report, with a summary and index prepared to allow for rapid review by the global governance committee.
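The separation of globally significant issues from country-specific ones, with an index for rapid review, can be sketched as a simple grouping operation. This is purely illustrative; the issue records and field names below are invented, not part of the QA Standard Working Papers:

```python
# Invented example issues from a hypothetical global audit.
issues = [
    {"id": "G1", "country": "AU", "global_risk": True,  "title": "Payments controls"},
    {"id": "N1", "country": "AU", "global_risk": False, "title": "Local delegations"},
    {"id": "N2", "country": "UK", "global_risk": False, "title": "Branch rosters"},
]

# Global section: issues graded as carrying global risk.
global_section = [i for i in issues if i["global_risk"]]

# National sections: remaining issues grouped by country.
national_sections = {}
for issue in issues:
    if not issue["global_risk"]:
        national_sections.setdefault(issue["country"], []).append(issue)

# Index for the governance committee: issue ids by section.
index = {"global": [i["id"] for i in global_section]}
index.update({c: [i["id"] for i in v] for c, v in national_sections.items()})
print(index)  # → {'global': ['G1'], 'AU': ['N1'], 'UK': ['N2']}
```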
===PROGRAMME AUDITS===
Quality assurance should be the responsibility of the Audit Manager. All working papers should be reviewed by him/her during, and at the completion of, fieldwork prior to the preparation of the national report. All State reports should also be reviewed by him/her prior to their submission/discussion to ensure consistency, particularly of issues that may become part of the national report.
The Audit Manager's papers should be subject to quality assurance review by his/her State Audit Manager where that is appropriate. The papers of Audit Managers who are themselves a State Audit Manager should be reviewed by another State Audit Manager according to the annual QA peer review schedule.
In all States/Territories where the audit is conducted the relevant State Audit Manager has responsibility for the day-to-day supervision and review of the audit, and should participate in entry and exit discussions and the preparation of the State report.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
* [[RIAM:Overview: Tactical Quality Assurance Strategy (TQAS)]]
cf7065adf44a5990277fe21983cf0744e9d8988c
RIAM:SPECIALREVIEWS:Follow Up Audits
0
361
554
2019-09-10T11:45:36Z
Bishopj
1
Created page with "(TO BE FORNATTED) =FOLLOW-UP REVIEWS= ==Introduction== Follow Up audits are a major component of the audit process. This activity constitutes a post review of a previous..."
wikitext
text/x-wiki
(TO BE FORMATTED)
=FOLLOW-UP REVIEWS=
==Introduction==
Follow Up audits are a major component of the audit process. This activity constitutes a post review of a previous audit and respective recommendations. A Follow Up Audit is not a Post Implementation Review which is a user responsibility and part of normal System Development Methodology practices.
The timing of Follow Up Audits should be within a 6 to 12 month period from issue of the original final report. User Management should be given the opportunity to assess the resources required and respective planning to address audit findings. Internal Audit at the Central Office level will monitor the status of uptake of recommendations in the Annual Audit Plan status reports. This procedure will provide a useful guide to the audit staff responsible for Follow Up Audits in that they will be quickly able to determine the status of recommendations before the Follow Up Audit is initiated.
A major outcome of Follow Up Audits is that Internal Audit can provide accurate advice to senior management and the Executive, including Corporate Audit and Accounting Committee, on progress taken by user management in implementing audit recommendations. The Follow Up Audit activity provides senior management with an independent assurance that identified problems and exposures are being addressed.
Data for Performance Indicators is also drawn from Follow Up Audits as the final report will provide accurate and meaningful measurement of the response and success of Internal Audit's function.
==Focus and Objectives==
The focus of Follow Up audits is to revisit the audit area with the objectives of:
* assessing whether recommendations from the original audit have been implemented
* determining the status of each recommendation
* ascertaining the adequacy of new or changed controls regarding recommendations
* assessing the efficiency and effectiveness of recommendations addressed by users
* reporting on the above
Other activities which may be included in Follow Up audits are establishing the adequacy of compliance with new procedures and control features which are outcomes of audit recommendations.
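Monitoring the status of uptake of recommendations ahead of a Follow Up Audit can be sketched as a simple tally. The status values below are assumptions for illustration, not categories prescribed by this manual:

```python
from collections import Counter

# Assumed status vocabulary (not prescribed by the manual).
STATUSES = ("implemented", "in_progress", "not_implemented", "no_longer_applicable")

# Invented example data: status of each recommendation from the original report.
recommendations = [
    {"no": 1, "status": "implemented"},
    {"no": 2, "status": "in_progress"},
    {"no": 3, "status": "not_implemented"},
    {"no": 4, "status": "implemented"},
]

# Tally per status for the Annual Audit Plan status report.
summary = Counter(r["status"] for r in recommendations)
print(summary["implemented"], summary["not_implemented"])  # → 2 1
```

A tally like this lets the auditor see, before the entry interview, which recommendations need the most attention during the follow-up.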
==Review of Recommendations==
The auditor must objectively consider why any recommendations have not been addressed. This includes determining the impact on the operations and environment where the activity takes place. There may be valid reasons for user management not implementing some recommendations. Audit should assess whether user management's lack of action can be substantiated.
Reasons for failure to implement recommendations may include:
* lack of user resources and expertise
* other competing priorities
* legislative change or change in Departmental policies, as a result of which the recommendation is no longer valid or viable
* dependence on other areas to formulate policies/guidelines which the users rely on in terms of operating procedures (e.g. accounting policies or SMFIs)
* major organisational change
* change in user management
Audit's responsibility in regard to the above is to document the reasons for non-implementation of recommendations and any adverse impact or potential exposures.
==Procedures==
There are a number of fundamental steps in the Follow Up audit process to ensure that future activity proceeds from the correct knowledge base. The auditor should review the original report and any existing ANAO report.
Having reviewed the final report and working papers, the auditor should assess whether there have been any major changes in regard to legislation, organisation or Departmental policies and procedures that directly affect the audit area and the likelihood of recommendations being addressed. These should be documented.
User management should be formally advised of the Follow Up Audit and date of entry interview. The entry interview should focus on the objectives in that the status of recommendations is to be ascertained and whether user management has adequately addressed audit recommendations from a control perspective. Outcomes from the above should be touched on at this point and user management given the opportunity to respond.
==Reporting==
Reports for Follow Up Audits will be along the same lines as stated in the Interviews and Reporting Section of the Conduct Of Audits.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
f94c1661b6e8b793c72585508322fbd13fb02be3
RIAM:SPECIALREVIEWS:Payment Systems Implementation Audits
0
362
555
2019-09-10T11:48:59Z
Bishopj
1
Created page with "(TO BE FORMATTED) =Payment Systems Post Implamentation Reviews= ==Introduction== Most jurisdictions and corporations have some form of policy requiring a post implementati..."
wikitext
text/x-wiki
(TO BE FORMATTED)
=Payment Systems Post Implementation Reviews=
==Introduction==
Most jurisdictions and corporations have some form of policy requiring a post implementation audit for payment systems. For example the Australian Federal Government used to have what was known as a Reg45A review. Finance Regulation 45A(3) stated that no money could be spent except for:
"Payments in respect of which the secretary or an authorised officer has indicated in writing, or in such other manner as approved in writing by the secretary,
(b)(ii) That in the preparation of that data, system controls and accounting procedures approved by the minister have been employed;
"
The Audit Act Sect. 34(2) (referenced by Regulation 45A) imposed a legislative obligation to certify that both the appropriate delegates are authorising the payments and that they were made in accordance with the relevant Minister's written approved procedures.
The key assertions for a Reg.45A(3)(b)(ii) review were therefore derived from two sources:
1. The need to comply with the Department's purchasing guidelines applicable to any purchase. In particular, this is the need to certify that the computer system for which payment is being made has been delivered in good working order and is both the requested system and of a satisfactory standard. Certifying that goods have been delivered in good working order is a requirement of the purchasing system;
2. The need to certify that a system (including both computer based and manual components) to be used for payments will support the certification needs of future payments once it is being relied upon by certifying officers claiming adherence to the Minister's prescribed purchasing/payments procedures. This source implies assertions similar to those of a purchasing and creditors system review.
This dual focus of a post implementation review is important to note:
# We are confirming that the controls over system specification, implementation and acceptance are in place and actually operated for the system that is the subject of the review; AND
# We are confirming that the system as implemented, with both its in-computer and around-the-computer controls, will allow the executive and board to attest to the accuracy of reported payment information.
==The Assertions==
The assertions are broadly:
1. That the computer system and the implemented control system (including the automated and manual environment) support audit assertions that:
:a. Payment and expense data are bona fide, relating to transactions that actually exist;
:b. Transaction and payment data reported/processed are:
:* Attributed to the proper period,
:* Accurately calculated,
:* Correctly accumulated,
:* Accurately recorded,
:* Correctly and completely disclosed,
:* Properly authorised with respect to transactions,
:* Providing benefits to which The Department and suppliers are eligible,
:* Complete;
:c. Payments are made to the correct recipient (from Finance Reg. 68);
:d. Payments are supported by a claim that identifies the head of expenditure to which the payment is chargeable (from Finance Directions, Sect 8);
:e. The relevant legislation is observed;
:f. The assets of The Department are appropriately protected and applied;
:g. The system is implemented in accordance with the requirements specification and is fit for the purpose intended;
:h. The system and application security is sufficient to sustain the assertions in (b) and minimise the risk of system loss.
2. The implementation is sustainable regarding maintenance and operations for the anticipated life of the system.
==Regulation 45A(3)(B)(ii) Controls==
In order to conform with the requirements of Regulation 45A(3)(b)(ii), controls must be designed into payment systems or be present to ensure that:
;Access Security
# Environmental controls such as physical security, continuity assurance (including data recovery, etc.) and logical security are operating appropriately.
# Application access is restricted to authorised users and the functions within the application are properly segregated.
;Auditability
# All stages of the approved control system have been correctly and completely carried out, and their performance is witnessed by audit trails.
# The output has been reconciled, or is reconcilable, to the input and/or source records and documents.
;Accountability
# Payment is authorised.
# Individual transactions can be matched to individual users.
;Data Integrity
# The payee is the correct and entitled recipient (Reg 68).
# Payment is for the correct amount.
;Process Integrity
# Duplicate payments are not made.
# Rejected data is properly and completely corrected.
;Continuity
# Environmental controls such as continuity assurance (including data recovery, etc.) are operating appropriately.
;Effectiveness
# The system supports payments in a timely manner, ensuring bills are paid by the due dates.
In satisfying these requirements the controls may be either:
* built into the application system itself (ie. computerised);
* present in the manual procedures under which the application or computer system is operated; or
* part of the computerised environment of the application (ie. the systems or environment level of the computer).
The appropriate controls will vary depending on the particular requirements of the application system.
The control objectives above focus primarily on the certification requirements for future payments (the second source in 12.1) rather than the issues surrounding whether a system is in good working order. This latter is examined in the following discussion.
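For teams building working papers around this kind of review, the control objectives listed above lend themselves to a simple structured checklist. The sketch below is illustrative only: the category and objective wording is taken from the list above, but the `CONTROL_OBJECTIVES` structure and the `coverage` helper are hypothetical, not part of any BPC product or prescribed RIAM artefact.

```python
# Illustrative only: the Regulation 45A(3)(b)(ii) control objectives above,
# captured as a checklist structure for tracking test coverage. The data
# structure and helper are hypothetical, not a BPC or RIAM artefact.
CONTROL_OBJECTIVES = {
    "Access Security": [
        "Environmental controls (physical/logical security, continuity) operate appropriately",
        "Application access restricted to authorised users; functions properly segregated",
    ],
    "Auditability": [
        "All stages of the approved control system carried out and witnessed by audit trails",
        "Output reconciled, or reconcilable, to input and/or source records",
    ],
    "Accountability": [
        "Payment is authorised",
        "Individual transactions can be matched to individual users",
    ],
    "Data Integrity": [
        "Payee is the correct and entitled recipient (Reg 68)",
        "Payment is for the correct amount",
    ],
    "Process Integrity": [
        "Duplicate payments are not made",
        "Rejected data is properly and completely corrected",
    ],
    "Continuity": [
        "Continuity assurance controls (including data recovery) operate appropriately",
    ],
    "Effectiveness": [
        "Payments are made in a timely manner, by due dates",
    ],
}

def coverage(findings: dict) -> dict:
    """Report, per category, how many objectives have documented test evidence.

    `findings` maps a category name to the set of objective indices tested.
    """
    return {
        cat: f"{len(findings.get(cat, set()))}/{len(items)} tested"
        for cat, items in CONTROL_OBJECTIVES.items()
    }
```

A reviewer could then record tested objectives per category and see at a glance, for example, that Accountability is `2/2 tested` while Continuity remains `0/1 tested`.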
Audit should then proceed to verify whether user management has implemented the recommendations it has advised audit are being taken up. The adequacy and control features need to be substantiated, tested and documented.
An assessment of the adverse impact or potential exposures for failure to implement a recommendation should be undertaken and subsequently documented.
The working papers should be updated to reflect any significant change in operating procedures and policies. Included, of course, is the effect of implementation of recommendations and respective controls.
The last step is to draft a report on findings and arrange for an exit interview to discuss the draft report with user management. Responses to the draft should be noted and incorporated in the final report for presentation to user management.
Before release of the final report, there should be a quality assurance of the working papers to ensure they have been updated accordingly and the findings in the report can be substantiated.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
d6c6af0e8eac8a69670241d2fff77ddfd1f98a5b
RIAM:SPECIALREVIEWS:New Programme Reviews (Government)
0
363
556
2019-09-10T11:58:21Z
Bishopj
1
Created page with "(TO BE FORMATTED) =NEW PROGRAM REVIEWS= ==Introduction== In addition to the requirements specified at Section 5.3.5 in respect of new program/system reviews and the check-..."
wikitext
text/x-wiki
(TO BE FORMATTED)
=NEW PROGRAM REVIEWS=
==Introduction==
In addition to the requirements specified at Section 5.3.5 in respect of new program/system reviews and the check-list of matters to be considered, this section of the Manual details the steps to be undertaken in the conduct of such assignments.
It is essential that all development and significant enhancement projects are reviewed to ensure that they meet the Department's requirements in a cost effective manner and that the systems produced are efficient and reliable. Such reviews are generally of two types:
* development reviews that consider feasibility studies, cost benefit analysis, design, user involvement and project control
* post implementation reviews that consider the success or otherwise of the project, user satisfaction and performance of the system.
==Assurance Objectives==
These reviews will endeavour to provide management with an assurance that the development has been successful in terms of contributing to Department objectives. Specific matters to be addressed are:
1. Program guidelines and administrative arrangements have been developed incorporating the above check-list aspects.
2. Where shortcomings are identified with program guidelines a summary of findings is to be returned to program management for discussion, and where applicable guidelines are to be amended.
3. Final version of the program guidelines and administrative arrangements are approved by Internal Audit.
4. A certificate is issued by Internal Audit stating compliance with the CAAC requirements has been satisfied.
5. Program management are to forward the final program guidelines with the certificate from Internal Audit to the Secretary or his delegate for approval.
As the foregoing indicates, Internal Audit's examination is extensive and not limited solely to the review of in situ systems and programs. Internal Auditors are expected to provide contributory and meaningful assistance, particularly in this type of review.
If the need exists, Internal Audit coverage of the process should be extended as far as necessary, particularly in relation to user requirements and the provision of "user advocate" services.
==Working Paper Example==
An example of the form for working papers may be found [[WP Examples:New Programme Review WP|here]]
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
9f7868136ecfe9b791571f578e91b2062977ea87
RIAM:SPECIALREVIEWS:Programme Reviews (Government)
0
364
557
2019-09-10T12:03:48Z
Bishopj
1
Created page with "(TO BE FORMATTED) =PROGRAM REVIEWS= ==GUIDELINES FOR THE CONDUCT OF PROGRAM REVIEWS== ===Inclusion of Program Reviews in Annual Audit Plans=== Inclusion of a Program Revi..."
wikitext
text/x-wiki
(TO BE FORMATTED)
=PROGRAM REVIEWS=
==GUIDELINES FOR THE CONDUCT OF PROGRAM REVIEWS==
===Inclusion of Program Reviews in Annual Audit Plans===
Inclusion of a Program Review in an annual plan should be done only after taking account of planned evaluations, recent or pending ANAO audits, and recent or planned system and program changes and associated post implementation reviews. Any of these activities may influence the timing of the audit within a particular calendar year, and may cause the audit to be postponed to a subsequent year.
===Allocation of Program Reviews to States/Territories===
Responsibility for the conduct of a Program Review should be allocated to a particular State/Territory or Central Office as part of the Annual Audit Plan planning process. In allocating this responsibility, consideration should be given to the materiality of the particular program in that administration, balanced against the need to ensure appropriate work allocation across all audit units.
===Appointment of Audit Manager===
Each Program Review should be managed by a designated Audit Manager (or other such designation, eg Audit Leader). Appointment should be at the SOG B or C classification, depending on the materiality of the program and the development needs of the staff considered.
===Responsibilities of the Audit Manager===
The Audit Manager should be responsible for:
(1) Consulting and liaising with Program and Divisional management in Central Office;
(2) Preparing a National Field Plan for approval by the Director, Internal Audit, Canberra;
(3) Preparing advice of the audit for issue to appropriate client managers;
(4) Conducting the audit in his/her State/Territory, and in C.O, as required;
(5) Providing audit programs for application in all other participating States/Territories;
(6) Performing quality assurance checks on the work done in the other participating States/Territories; and
(7) Conducting national exit discussions in C.O. and preparing the National Report.
===National Field Plan===
The Audit Manager should be responsible for the preparation of a National Field Plan prior to the commencement of the audit. Its preparation requires research of background information relating to the program being audited and discussion with program managers in C.O. and his/her State/Territory.
The plan should contain the following:
Assignment Cover Sheet
To indicate audit title, AAP Task Number, table of contents, date of preparation, name and location of preparer, and name & signature of officer approving the plan.
Statement of Scope and Boundaries
To specify the audit coverage. The audit boundaries should be identified for audits which are restricted in scope and don't address all aspects of a program's operation.
In general terms, the audit should cover all processing and accounting arrangements, all publicity and marketing arrangements, all management information systems, all evaluation strategies (including performance indicators), and aspects of resource allocation to the program's administration.
Statement of Audit Objectives
There should be common core objectives for all Program Reviews, as follows -
To provide an opinion on:
(1) Compliance with Commonwealth legislation and regulation, and Departmental policy and procedure
(2) The efficiency and effectiveness of controls in achieving legislative, government and department objectives;
(3) The extent to which human, financial and physical resources are economically and efficiently managed;
(4) The reliability of management data used for decision making;
(5) The extent to which accountability relationships are reasonably served through appropriate information flows and evaluations of performance;
and recommend:
(6) Options for improvement to current arrangements.
Statement of Audit Method
A summary of the audit method to be used should be provided. System-based methods involving the steps of modelling, documentation, analysis, testing, consulting and reporting should be followed.
Assertion Model
An Assertion Model should be prepared following research of the relevant legislation and regulations, and discussions with program and line managers concerning their objectives. It should contain statements of system objectives, management assertions, and desirable controls for each of the systems or sub-systems subject to audit. System objectives and assertions should be agreed by the appropriate management.
Other planning or modelling techniques that may be identified for inclusion as appropriate include:
* Lists of exposures and an indication of their perceived risk
* Criteria for the evaluation of effectiveness or efficiency
Audit Budget
The budgeted audit time for the conduct of the audit should be specified.
Audit Timetable
The planned timetable of the audit, showing the dates of the audit in the various States/Territories, and the completion date of the audit.
Planned Consultation and Reporting arrangements
Details of the managers with whom entry interviews are planned/have been conducted, and with whom regular liaison during the course of the audit and exit interviews at the conclusion of the audit will be held should be specified.
Other Considerations
In the course of preparing the National Field Plan the Audit Manager should establish with the relevant program and line managers areas of particular concern to them. Reference should be made in the plan to these requests/areas of interest, particularly if they would not normally be covered by the audit.
Similarly, other review activity conducted by the program or evaluation areas of the department, or by line management, should be identified during this planning phase. Some integration of review activity may be desirable.
===Advice of the Audit===
Formal advices should be prepared for issue under the signature of the Director, Internal Audit, to all Program and line managers who have a responsibility for the administration of the program subject to audit. At a minimum these should be all Division Heads and relevant Branch Heads in C.O., and all State/Territory Directors. The advice should detail the scope and objectives of the audit, the participating States/Territories, its planned duration, and provide the name of the Audit Manager for contact purposes.
===Conduct of the Audit===
The principle governing the conduct of Program Review audits should be that they constitute a single national audit where, to ensure national coverage, fieldwork is done in a number of representative States, and in a number of representative Area/Regional administrations within these States (as appropriate).
The sampling of administrative units (both States and offices within States) for the purpose of audit coverage is largely an efficiency consideration. The soundest statistical approach would be to randomly draw transaction samples from the national population. However, this approach is logistically difficult and provides no assurance that an opinion on individual Areas or States can be formed (to satisfy the interest/needs of these levels of management). Stratification of the national population of offices should provide for efficient sample selection and scrutiny. A sample of 30 (from a population of, say, 200 offices) should minimise the extent to which this stratification becomes statistically significant.
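The stratified office-selection approach described above can be sketched as follows. This is an illustrative sketch only, not a prescribed RIAM procedure: the proportional-allocation rule, the `stratified_office_sample` function and the state names in the usage example are all assumptions for illustration; the text itself specifies only a population of roughly 200 offices and a sample of 30.

```python
import random

def stratified_office_sample(offices_by_state, total_sample, seed=1):
    """Allocate a fixed total sample across strata (States/Territories) in
    proportion to each stratum's share of the office population, then draw
    randomly within each stratum.

    Illustrative only: proportional allocation with a minimum of one office
    per participating stratum is an assumption, not a RIAM rule.
    """
    rng = random.Random(seed)  # fixed seed so the selection is reproducible in working papers
    population = sum(len(offices) for offices in offices_by_state.values())
    sample = {}
    for state, offices in offices_by_state.items():
        n = max(1, round(total_sample * len(offices) / population))
        sample[state] = rng.sample(offices, min(n, len(offices)))
    return sample

# Hypothetical example: 200 offices spread over four States, sample of 30.
offices = {
    "NSW": [f"NSW-{i}" for i in range(80)],
    "VIC": [f"VIC-{i}" for i in range(60)],
    "QLD": [f"QLD-{i}" for i in range(40)],
    "TAS": [f"TAS-{i}" for i in range(20)],
}
selection = stratified_office_sample(offices, total_sample=30)
```

With these (invented) figures the proportional allocation yields 12, 9, 6 and 3 offices respectively, so every participating State receives coverage while the national total stays at 30.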
To achieve a focus on a national product it is essential that the audit remains under the control of the Audit Manager. It is also essential that all fieldwork be completed within as short a timeframe as possible to ensure that common administrative and processing arrangements are audited and results are current when presented to national management.
The audit should be conducted as follows:
(1) Fully completed in the home State/Territory of the Audit Manager - including preparation of the State report and the conduct of exit discussions;
and then, shortly after
(2) Completed in the other participating States/Territories utilising a comprehensive audit program prepared by the Audit Manager following the completion of the audit in his/her home State/Territory - including preparation of the State report and the conduct of exit discussions.
===Audit Programs===
Conduct of the audit in the Audit Manager's home State/Territory should be in accordance with the IA section's preferred methodology. Full system documentation, analysis, testing and system/program evaluation should be completed.
Conduct of the audit in the other participating States/Territories should then be in accordance with an audit program prepared by the Audit Manager. The major component of the fieldwork will be the (re)performance of the tests done by the Audit Manager to provide further evidence to support/deny hypotheses and assertions. To this end the following should apply:
(1) The participating States/Territories be requested to confirm the system and program descriptions prepared by the Audit Manager. All major differences should be accounted for, particularly if they represent differences in program delivery strategies and/or internal control systems;
(2) Where no major differences are detected, testing should be conducted as directed by the Audit Manager; and
(3) Where major differences are detected, testing programs should be redesigned to take account of the different control systems, approved by the Audit Manager, and then conducted.
===Quality Assurance===
Quality assurance should be the responsibility of the Audit Manager. All working papers should be reviewed by him/her at the completion of fieldwork, prior to the preparation of the national report. All State reports should also be reviewed by him/her prior to their submission/discussion to ensure consistency, particularly of issues that may become part of the national report.
The Audit Manager's papers should be subject to quality assurance review by his/her State Audit Manager where that is appropriate. The papers of Audit Managers who are themselves a State Audit Manager, or a SOG B in C.O., should be reviewed by another SOG B as a form of QA peer review.
In all States/Territories where the audit is conducted the relevant State Audit Manager has responsibility for the day-to-day supervision and review of the audit, and should participate in entry and exit discussions and the preparation of the State report.
===National Reports===
A draft national report should be prepared by the Audit Manager following the completion of audit fieldwork and State reporting.
In addition to providing a balanced view of the performance of the program audited, the national report should distinguish between national and local issues. National issues are those aspects of the program's administration that require action to be taken at a national (C.O.) level; e.g. ADP system enhancements, redesign of evaluation strategies, procedural amendment to ensure national consistency of approach, etc. Local issues are largely operational in nature and can be (or have been) addressed by the appropriate line managers following discussions with the auditors; e.g. non-compliance with established procedure, provision of training, alteration of workflows, etc. Their inclusion in a national report is largely for information purposes.
Care should also be taken to consult as widely as possible on the issues to be raised in the report, particularly if recommendations for major changes are proposed. In some instances it may be desirable to seek the views of State/Area/Regional managers and staff on the options being considered, or to have them propose the solutions.
Exit discussions in C.O. should be on the basis of the draft national report. The Audit Manager should attend, with other audit representation as appropriate.
Distribution of the final national report should be to all managers who were originally advised of the conduct of the audit. Management responses should be included against all audit recommendations. For States/Territories where the audit was not conducted an invitation should be made to them to take account of the issues raised in the report, particularly the local operational issues, and review their performance and take corrective action as required.
===State Reports===
All participating States/Territories should prepare local State reports. These should draw on the analysis of the national program review and be supported by local test results. In preparing these reports, purely local findings should be highlighted, as action will be possible at a local level to resolve the problems identified. National findings and the planned release of a national report should be flagged.
Full consultation on these reports should be completed before their circulation outside the State in question (other than circulation within the audit team). This is to ensure appropriate consideration is given to findings and recommendations, and to involve local management in the development of solutions as much as possible.
==Backlinks==
* [[RIAM:VLA:The Four Phases of the RALSBA| Back To The Four Phases of RALSBA]]
* [[RIAM:Conduct of the Very Large Audit| Back To Conduct of the Very Large Audit (Main)]]
* [[Internal Audit Method| Back To The RIAM (Main)]]
30c7c04e8aac34a5ac490f793fb4008cf69b7d50
Internal Audit Method
0
338
558
529
2019-09-10T12:08:48Z
Bishopj
1
wikitext
text/x-wiki
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2019 - Moral Rights Retained.
This article and all pages referenced from here may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com]. The principal author of all pages in the following list is [[Jonathan Bishop]]. The author acknowledges the contributions, improvements, help and comments of the innumerable critics and users of the method over many years; however, errors and omissions are the responsibility of the principal author alone.
==The Method In Detail==
*[[RIAM:Overview of the Method]]
** [[RIAM:Overview: Rational Internal Audit Method - Introduction|Rational Internal Audit Method - Introduction]]
** [[RIAM:Overview: Overview of the Scope of Work|Overview of the Scope of Work]]
** [[RIAM:Overview: The Five Arms of RIAM - At a Glance|The Five Arms of RIAM - At a Glance]]
** [[RIAM:Overview: The Client Service Plan (CSP)|The Client Service Plan (CSP)]]
** [[RIAM:Overview: Risk Based Planning (RBP)|Risk Based Planning (RBP)]]
** [[RIAM:Overview: Control Implementation Services (CIS)|Control Implementation Services (CIS)]]
** [[RIAM:Overview: The Assertion Linked Systems Based Audit (ALSBA)|The Assertion Linked Systems Based Audit (ALSBA)]]
** [[RIAM:Overview: Tactical Quality Assurance Strategy (TQAS)|Tactical Quality Assurance Strategy (TQAS)]]
*[[RIAM:Risk Based Audit Planning]]
*[[RIAM:Control Theory & Analysis]]
*[[RIAM:Conduct of the Very Large Audit|RIAM:Conduct of the Very Large Audit Project]]
**[[RIAM:VLA:The Four Phases of the RALSBA|The Four Phases of RIAM Control Systems Analysis in the very large audit project]]
***[[RIAM:VLA:AUDIT INTERVIEWING|PHASE 1 to 4: INTERVIEWING]]
***[[RIAM:VLA:FAMILIARISATION, SCOPE & PLANNING|PHASE 1: FAMILIARISATION, SCOPE AND PLANNING]]
***[[RIAM:VLA:STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS|PHASE 1: STANDARDS FOR, AND TYPES OF, AUDIT EVIDENCE AND WORKING PAPERS]]
***[[RIAM:VLA:DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL|PHASE 2: DOCUMENTING & ANALYSING THE SYSTEM OF INTERNAL CONTROL]]
***[[RIAM:VLA:ASSERTIONS|PHASE 2: ASSERTIONS]]
***[[RIAM:VLA:ANALYTIC REVIEW PROCEDURES IN INTERNAL AUDIT|PHASE 1 to 3: ANALYTIC REVIEW PROCEDURES]]
***[[RIAM:VLA:AUDIT RISK ASSESSMENT & SENSITIVITY ANALYSIS|PHASE 1 to 3: RISK ASSESSMENT & SENSITIVITY ANALYSIS]]
***[[RIAM:VLA:AUDIT SAMPLING AND AUDIT TESTING|PHASE 3: AUDIT SAMPLING AND AUDIT TESTING]]
***[[RIAM:VLA:AUDIT REPORTING PROCEDURES|PHASE 4: AUDIT REPORTING PROCEDURES]]
***[[RIAM:VLA:IA REVIEW AND QUALITY ASSURANCE|PHASE 1 to 4: REVIEW AND QUALITY ASSURANCE]]
**[[RIAM:SPECIALREVIEWS:Follow Up Audits|Special Reviews - Follow Up Audits]]
**[[RIAM:SPECIALREVIEWS:Payment Systems Implementation Audits|Special Reviews - Payment Systems Implementation Audits]]
**[[RIAM:SPECIALREVIEWS:New Programme Reviews (Government)|Special Reviews - New Programme Reviews (Government)]]
**[[RIAM:SPECIALREVIEWS:Programme Reviews (Government)|Special Reviews - Programme Reviews (Government)]]
**[[RIAM:SKILLS:CONDUCT OF EXIT INTERVIEWS|CONDUCT OF EXIT INTERVIEWS]]
==Backlinks==
[[Internal Audit]]
d1f8527889d9aa5b3768eb6e0e46b32daacef35a
BPC RiskManager Software Suite
0
3
559
498
2019-09-10T13:00:13Z
Bishopj
1
wikitext
text/x-wiki
=BPC RiskManager Software Suite - Risk, Compliance and Certification=
The BPC RiskManager Software suite is an enterprise-grade risk management and governance software suite supplied worldwide, and developed and supported by Bishop Phillips Consulting. Originally developed between 1995 and 1997, the system is now in its 6th major version. Version 6 was first released in 2006, the Enrima Edition (the current release) followed in 2008, and the latest version was released in 2011, with updates and new capabilities added regularly since then. Clients are encouraged to participate actively in setting the development direction.
The Enrima edition of BPC RiskManager is a single-user and multi-user risk management, compliance management, financial statements certification, insurance, survey, and incidents & hazards system in one application. You can manage multiple organisations and simultaneously view governance issues as risks, compliance obligations (legislation, processes and procedures) and compliance topics. It manages email-based reminders for a wide variety of user expectations internally.
BPC RiskManager is available in two product streams (both of which can be configured as single-user desktop or massively multi-user networked solutions). The two product streams are:
{|width=100%
|-
|
* BPC RiskManager V5 (Express)
|[[image:BPCRiskManagerExpressV5.jpg]]
|-
|
* BPC RiskManager V6 (Enrima Edition)
|[[image:BPC_RiskManager_V6261_Main_Screen.jpg|600]]
|}
=Client Base=
BPC RiskManager clients are headquartered in Australia, Canada, the United Kingdom and the United States of America. Global clients, of course, have offices in many other countries. [http://www.bishopphillips.com Bishop Phillips Consulting] has local offices in both Australia and North America.
The system is used extensively in the education sector, with a very substantial presence in universities in both Australia and Canada and in commercial education providers and colleges in the USA. Other significant client groups include insurance providers (both primary insurers and reinsurers), central government agencies (such as federal and state/province departments and local government), and utilities (postal, electrical and water).
BPC RiskManager implements and substantially extends the Risk Management Standards "AS/NZS 4360:2004 Risk Management" and "ISO 31000", and complies with "ISO/IEC Guide 73 - Risk Management - Vocabulary".
BPC RiskManager is not restricted to a single interpretation of the risk standards. As a consequence of its long market history, it implements a large number of divergent risk management methodologies. Any combination of one to three assessment groups, each containing ratings for likelihood, consequence and control, is possible. For example, some clients use a methodology built on risk budgets, with three rating groups (Inherent, Residual and Target) where inherent ratings shift with external factors, the target shifts with the corporate risk appetite (i.e. a risk budget), and the residual floats according to assessment ratings.
Any number of self assessments in each group can be maintained together with a separate family of assessments and remediations created by audit/expert that coexist with management's risk assessments.
Whether your preferred risk methodology uses quantification (quantitative risk analysis) or qualification (qualitative risk analysis), BPC RiskManager directly supports the approach on a per-assessment basis. Terminology (including field names, purposes and screen captions) is fully customisable, so the system can directly implement the corporate risk methodology.
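The three-rating-group "risk budget" methodology described above can be sketched in code. This is a minimal illustration under stated assumptions: the `Rating` and `RiskAssessment` classes, the 1-5 scales and the likelihood-times-consequence scoring are hypothetical conventions chosen for the example, not BPC RiskManager's actual data model or scoring method.

```python
from dataclasses import dataclass

# Illustrative sketch of a three-rating-group (Inherent/Residual/Target)
# risk-budget methodology, as described in the text. Class names, scales
# and scoring below are assumptions, not BPC RiskManager's data model.

@dataclass
class Rating:
    likelihood: int   # e.g. 1 (rare) .. 5 (almost certain)
    consequence: int  # e.g. 1 (insignificant) .. 5 (catastrophic)
    control: int      # control effectiveness, e.g. 1 (weak) .. 5 (strong)

    def score(self) -> int:
        # One common qualitative convention: likelihood x consequence.
        return self.likelihood * self.consequence

@dataclass
class RiskAssessment:
    name: str
    inherent: Rating   # shifts with external factors
    residual: Rating   # floats according to assessment ratings
    target: Rating     # shifts with corporate risk appetite (the "budget")

    def within_budget(self) -> bool:
        """True when residual risk does not exceed the target (risk budget)."""
        return self.residual.score() <= self.target.score()
```

Under this convention a risk whose residual score has been driven below its target score sits within the risk budget, while one above it flags a breach of appetite for management attention.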
=Get a Fully Functional Evaluation Copy of BPC RiskManager for FREE=
You can get a free no-obligation fully functional copy of BPC RiskManager (Enrima Edition) simply by completing the request form here:
[http://www.bishopphillips.com/australia/BPCServiceEnquiry.php I want to evaluate BPC RiskManager without obligation for free, please.]
It will work for 60 days, and if you need more time you can contact us and request a longer evaluation. There are no limitations in the evaluation version, and we will even give you free support while you get it running. It is fully self-installing and will open your first risk database when the installer finishes.
If it isn't right for you, you can just uninstall after the 60 days with no further obligation to us.
=Knowledge Base=
*[[BPC RiskManager V6 Enterprise (Enrima Edition)]]
** [[BPC RiskManager V6 Enterprise (Enrima Edition)| BPC RiskManager Features]]
** [[BPC RiskManager V6.2 Network Architecture]]
** [[RM625ENT Installation Instructions|BPC RiskManager V6.2.5 Installation Instructions]]
** [[BPC RiskManager Frequently Asked Questions|BPC RiskManager - Frequently Asked Questions]]
** [[BPC RiskManager Quick Help With Common Tasks]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
** [[BPC RiskManager V6 on 64 bit Windows]]
*[[BPC SurveyManager - Overview]]
** [[BPC Surveymanager - Key Features]]
** [[BPC SurveyManager - Introduction]]
** [[BPC SurveyManager - Creating Surveys - Layout and Markup Tags]]
** [[BPC SurveyManager - Creating Surveys - The Page Script]]
** [[BPC SurveyManager - Questions and Input Controls]]
** [[BPC SurveyManager - Creating Surveys - Properties]]
** [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
** [[BPC SurveyManager - The Built In Reports]]
** [[BPC SurveyManager - Advanced Database Configuration Settings]]
** [[BPC SurveyManager - Client Overview]]
** [[BPC SurveyManager - Tutorials - Survey Layouts]]
** [[BPC RiskManager and BPC SurveyManager Importer Masks]]
<noinclude>
[[Category:Featured Article]]
[[Category:Risk Management]]
[[Category:Risk Management - Software]]
[[Category:BPC RiskManager]]
[[Category:BPC RiskManager User Manual]]
{{BackLinks}}
</noinclude>
c218bc39fe64793df89b816257d805ac8dba86c3
RM625ENT Installation Instructions
0
365
560
2019-09-10T13:02:28Z
Bishopj
1
Created page with "=Introduction= BPC Risk Manager (TM) is a governance productivity tool used to help organisations capture and organise risks, compliance obligations, incidents, hazards, insu..."
wikitext
text/x-wiki
=Introduction=
BPC Risk Manager (TM) is a governance productivity tool used to help organisations capture and organise risks, compliance obligations, incidents, hazards, insurance, process workflow and corporate objectives, and to monitor actions, controls and performance for risk management across an enterprise. Now in Version 6 (Enrima), BPC RiskManager has been the flagship product of Bishop Phillips Consulting Pty Ltd since 1996 and is sold and supported world-wide through our offices.
BPC Risk Manager Enterprise is a multi-user intranet/internet product. It requires a database server and can optionally use a web server for delivery of some components. It may be installed either in a single-user configuration (with all components on one PC or laptop) or in an enterprise/multi-user configuration on a corporate or government network. It will easily support an organisation with 70,000+ employees, or an organisation with 2 employees.
This document is a very detailed guide to installing Risk Manager Enterprise – Enrima ADO Edition. Please read the accompanying document [RiskManagerEnterprise_ SystemsRequirements] to ensure your environment satisfies the minimum requirements for this product. Please contact Bishop Phillips before proceeding if these requirements are not met.
IN THESE INSTRUCTIONS…“[RMInstallDir]” refers to the directory in which the BPC RiskManager system was installed. This will generally be something like:
“c:\program files\BishopPhillips\BPCRiskManager0625\”
Beginning with version 6.2.5 there is a new fully automated installer which looks after practically everything for you. You will still need to install: the Windows operating system with IE 6/7/8 (most likely already installed); IIS 5/6/7 as appropriate, if using any of the IIS-dependent parts (eg. ActiveX plugin, BPC HTTPSrvr or BPC SurveyManager); a database engine (SQL 2000 / SQL 2005 / SQL 2008 / SQL 2005 Express / SQL 2008 Express); and MS Office (2000/XP/2003/2007), if you intend to use office integration support. The instructions in this wiki will tell you how to do all this (except installing Windows itself). The automated installer will do everything else for you:
<ul>
<li>Install the application server
<li>Install the client
<li>Create 1 or more databases
<li>Upgrade databases from earlier versions
<li>Enable importation of any supplied preconfigured databases
<li>Register all components
<li>Build any web sites requested
<li>Launch the client and start the rapid configuration wizard
</ul>
If you are using the enterprise version (as opposed to the single user version), the installer has multiple modes to support installation of different parts on different computers. For those who really want to do everything manually, these instructions supply a very detailed step-by-step guide to all of the installation tasks.
=Terminology=
In these instructions we will assume the following terminology:
*Database Server Computer – this is the computer on which the database server software is installed. It may or may not be the same computer as the Application Server Computer.
*Application Server Computer – this is the computer on which you will be installing and running BPC RiskManager DataServer (i.e. the BPC RiskManager Application Server – known as the RM DataServer). It may or may not be the same as the Database Server Computer.
*Web Server Computer – this is the computer on which you will be running the web server components (i.e. MS IIS). Except in rare scenarios it will be the same as the Application Server computer.
*Client computer – these are the computers on which you will be running the BPC RiskManager client software (i.e. this is what most people would understand to be BPC RiskManager – but of course, it is only the front end of it.)
=BPC RiskManager Installation Guide=
==Before You Install - Deciding the configuration==
*[[BPC RiskManager V6.2 Network Architecture|Review the BPC RiskManager network architecture]]
*[[Before You Install - FAQ]]
==Order of Installation==
# [[Install your MS Database Server|Install your MS Database Server (MSDE 2000, SQL Server 2000, SQL Express 2005 or SQL Server 2005) on the Database Server Computer.]]
# Backup all Risk databases (if performing an upgrade)
# [[Uninstall prior BPC RiskManager version|Uninstall prior version of Risk Manager Enterprise (if performing an upgrade)]]
# [[Install MS IIS|(OPTIONAL) Install MS IIS 5 (IIS 6+ is preferred) on the Web Server Computer]]
# [[Install MS SMTP Services|(OPTIONAL) Install MS SMTP Services]] or other SMTP services software on the EMail Server Computer or get the SMTP server connection details for later use.
# [[Configure MS SMTP Services|(OPTIONAL) Configure the SMTP services software on the EMail Server Computer]].
# [[Install RiskManager From CD or Downloaded Installer|Install Risk Manager from CD or Downloaded install set, on the Application Server Computer]].
# [[Install Database Connectivity Tools|(OPTIONAL) Install Database Connectivity Tools on the Application Server Computer]]
# [[Install or Upgrade the Risk Database]] (on the Database Server Computer)
# [[Install Socket Server as a Service And HTTPSrvr as an ISAPI library]]
# [[Configure the BPC RiskManager & BPC SurveyManager Application Server]]
==Adding an Extra Database or Restoring A Database==
BPC RiskManager can access as many databases as you like. Adding a database is largely a matter of attaching a database or restoring a database backup. The only real issue to which you should pay attention is the access/login id used by the application server or BPC SurveyManager library to access the database.
You can find instructions for setting up a new database here:
* [[Instaling BPC RiskManager Database on SQL Server 2005 or SQL Express|Installing BPC RiskManager Database on SQL Server 2005 or SQL Express]]
* [[Instaling BPC RiskManager Database on SQL Server 2000|Installing BPC RiskManager Database on SQL Server 2000]]
You will probably also need to establish the local configuration options that let the risk manager application server and survey manager library access the database, and connect the database to your network environment (such as mail servers, web sites, etc). You can find instructions for these aspects here:
* [[BPC RiskManager - Database Configuration]]
* [[BPC RiskManager - Mail Server Connection Properties]]
* [[BPC RiskManager - Distribution of Client Components]]
* [[BPC RiskManager - Install The SurveyManager]]
==Migrating RiskManager from Test or Dev to Production==
We were recently requested to supply a short form guide to the process of migrating a test site installation into production. This is that guide:
* [[Steps For Migrating RiskManager V6.x from Test To Production]]
=Backlinks=
[[BPC RiskManager Software Suite]]
72a7388d9a94f83bbea34de02c0703c1e3d59a88
Before You Install - FAQ
0
366
561
2019-09-10T13:03:38Z
Bishopj
1
Created page with "=Before You Install (for the first time)…= There are a number of ways to set up BPC RiskManager. Please take a little time to answer the questions in this section so tha..."
wikitext
text/x-wiki
=Before You Install (for the first time)…=
There are a number of ways to set up BPC RiskManager. Please take a little time to answer the questions in this section so that your configuration is straight-forward.
The system has such a wide range of set-up structures that covering them all in one document is a bit of a challenge. At times this document may seem curiously 'small business' and at other times it may seem like it is pitched at a Fortune 500 with extensive IT resources. The truth is that it is pitched at both of these types of organizations, because BPC RiskManager can meet business requirements at both ends of the spectrum. We apologize in advance and beg you to skim the sections that are clearly not intended for you as an audience.
A BPC RiskManager installation consists of the following key software elements, suited to the associated uses:
==Which Versions of Windows Will I Be Using?==
The RiskManager application suite is known to operate successfully on the following versions of the Microsoft Windows operating systems:
*Windows 98se (installation instructions not included in this document - support for W98 is now deprecated)
*Windows XP sp1, sp2, sp3 (install instructions included)
*Windows Vista sp1 (install instructions included)
*Windows 7 (install instructions as for Vista, with additional notes for SQL 2008 Express if applicable)
*Windows 2000 sp3, sp4 (follow the instructions for Windows XP)
*Windows 2003 sp1, sp2 (install instructions included)
*Windows 2008 R2 (follow the instructions for Windows 2003 with some Vista notes)
*Windows 2008 for 64bit computers [operating in 32bit compatible mode]
Note: The automated installer will handle all of these versions for you automatically. The installer will also upgrade databases from earlier versions. If you are using the single-user or enterprise version of BPC Risk Manager (Enrima) on a single computer (optionally with a separate database server), you can essentially just run the auto installer in full/complete mode and let it do its thing. It will do everything for you (except one manual step if you are using the SurveyManager components, which you can do after the installer completes).
==Can I use a 64bit operating system on the server?==
Yes, absolutely. RiskManager runs just fine in a 64-bit environment and talks quite happily with 64-bit SQL Server (including the current SQL Server 2012). The installer shipped with RiskManager will correctly set up your RiskManager system on 64-bit OS's (W2008/W7 and above) with no manual intervention required (except for the optional IIS components). Refer to the notes contained in the document [[BPC RiskManager V6 on 64 bit Windows]] to see what you need to do to enable BPC RiskManager on 64-bit Windows.
==How Will I Be Using The System?==
If in doubt, choose option 3 below: you can then do everything, still leave some things for later and adopt them progressively as you desire, and will not have to install forgotten bits.
These three models of operation are independent of the size of the risk team. A single risk manager shop can be operating under option 3, and a 20 person shop could be operating under option 1.
===OPT 1: Core Risk Management System:===
Suitable for risk management functions primarily focused around the periodic administration of a risk register with centralized risk administration responsibility:
*Database (requires SQL Server)
*BPC RiskManager Dataserver (application server)
*BPC RiskManager Client (either Webbrowser ActiveX plugin or Native Windows programme)
===OPT 2: Extended Enterprise Risk Management System===
Suitable for risk management functions primarily focused around the continuous administration of a risk register with centralized risk administrative coordination but distributed responsibility and distributed mitigation strategy responsibility, where email driven reminders and advices are desired, but incident tracking is handled by a defined group of core users:
*Database (requires SQL Server)
*BPC RiskManager Dataserver (application server)
*Email Server (requires an SMTP email server – such as MS Exchange)
*BPC RiskManager Client (either Webbrowser ActiveX plugin or Native Windows programme)
===OPT 3: Integrated Enterprise Risk Management System===
Suitable for risk management functions supporting continuous administration of a risk registers, tracking of compliance, continuous identification of risks with either centralized or distributed risk administrative coordination and distributed risk responsibility and distributed mitigation strategy responsibility, where email driven reminders and advices are desired, and incident and compliance tracking is potentially handled by a wide group of users who will not necessarily be users of the core application, but responsible for completing web page based forms, compliance surveys, and control self assessment forms, etc:
*Database (requires SQL Server)
*BPC RiskManager Dataserver (application server)
*Email Server (requires an SMTP email server – such as MS Exchange)
*BPC SurveyManager (requires MS IIS)
*BPC RiskManager Client (either Webbrowser ActiveX plugin or Native Windows programme)
==Software Implication of the Options==
In each case all components can be on a single computer, or each component can be on its own computer. The key point to note is that for each corresponding option you will need the following software parts (corresponding to the components listed in the options above):
*Database : MS SQL Server 2000, 2005, 2008 or 2012 or SQL Express 2005 / 2008 / 2012 (not supplied).
*Application Server: BPC RiskManager Dataserver – (supplied)
*Email: Any SMTP compatible email server (eg MS Exchange – not supplied)
*Survey /Web Forms Engine:
**HTML Survey Engine: BPC SurveyManager (supplied)
**MS IIS 5/6/7+ Web server (not supplied but included with all windows systems)
*Client software: BPC RiskManager Client (either Webbrowser ActiveX plugin or Native Windows programme - supplied)
*Web browser : IE 5/6/7/8/9+ (not supplied but included with windows)
==What External Applications Do I want to use with BPC RiskManager?==
Various third-party or add-on applications, facilities, variations and extensions will interact with BPC RiskManager to make a complete Enterprise Risk Management environment.
Some of these include:
'''1. REMETRICS / Advanced Risk Modelling'''.
In addition, you may be a Benfield Remetrics (or equivalent advanced statistical risk modelling) user, in which case you can integrate the client with the risk management software for statistical modelling of risk.
'''2. Intranet Web Systems & Document Management'''.
You might otherwise be an option 1 user, but have a large amount of supporting material in web pages or document management systems (or just MS Word documents on a central server). In this case you would want to make use of the extensive in-system external links and include the web server to store and deliver the documents to users (although it does not have to be IIS in this case).
'''3. MS Office'''.
The client system integrates with MS Office 2000-2007+ via document export and import, Outlook (as an alternative email solution) and MS Word spell-check dictionaries. If these are on your desktop your user experience will be enhanced.
'''4. Project Team Websites (eg Team services, etc)'''
There are a number of places where web sites will display if linked in the application. In some cases there are logical places for entire websites to be linked (such as in the tree navigator for risks) in which case the team web site would display as a panel of the associated risk group.
'''5. Other 3rd Party Integration'''
While the risk management suite supplies components for registering and managing insurance, incidents and general compliance, some industries have particularly advanced needs in these areas. In those cases the preferred configuration may be to interface or reference these external systems from and to BPC RiskManager, or you might have your own dedicated audit management or workflow management systems that you prefer to those built into the system.
==What software and access rights will I need to have available for the install?==
Depending on your answers to the previous question, you will need the following components with administration rights to all:
1. Microsoft Windows operating system software installed on all your computers (and probably the Microsoft Windows Installation disk to install components you have not yet installed).
2. SQL Server Database software (2000 / 2005 / 2008 / 2012 / SQL Express 2005 / SQL Express 2008 / SQL Express 2012 )
3. Microsoft Windows IIS 5+ (IIS 6+ if using the HTTP/HTTPS communications connection to the server)
4. BPC RiskManager installation software
5. SMTP Server software (either the SMTP server built in to the Windows Operating system, or Microsoft Exchange or similar SMTP email software).
6. Microsoft Office 2000+ installed on the client PC’s
7. Other Third party applications for integration.
==How am I Going to Connect Users to My Application Server?==
If your client and application server will be on the same machine (such as a single user install) this issue does not matter to you and you can skip this section.
The simplest method, suitable for most scenarios, is the raw TCP/IP socket link. It allows the client application to talk to the application server over port 211. This is definitely best for local area networks and trusted wide area networks. It has the fastest communication speed and you do not have to do anything to set it up, as it is enabled automatically by the install process.
If you wish to cross untrusted space, the socket traffic can easily be routed through a VPN tunnel (such as the PPTP support built in to your Windows computer).
Alternatively, you might want or need to route traffic as HTTP or HTTPS (ie. encrypted SSL), in which case you can use the HTTP/HTTPS connection component on the server. This will put all traffic across port 80 (HTTP) or 443 (HTTPS), encrypting it in the latter case. The primary advantage of this is that your client only needs the same connectivity as your browser: if the browser can get to your application server, then so can the BPC RiskManager client. The disadvantages are that the HTTP/HTTPS protocol imposes a small additional overhead, so communications will be slightly slower, AND you MUST have IIS 6 on the application server computer. While technically the HTTP/HTTPS connection works with reverse proxy servers, our experience has been that these are generally extremely slow and place an excessive burden on communications. It should be remembered that a fully configured RiskManager installation may have significant amounts of structured transactional data to deliver to the client, which need to be available to the user rapidly, and few reverse proxy servers are especially good at handling this. Further, some of the more popular RPS's are extremely poor (read: buggy) at handling keep-alive pings. We do not recommend the use of reverse proxy servers. We will support you, and help where we can, but will not take responsibility for solving performance problems arising because of a particular reverse proxy technology.
We strongly recommend using the TCP/IP socket transport if possible.
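As a quick pre-flight check before pointing clients at the application server, you can verify that the socket transport's port 211 is reachable from a client machine. The sketch below is illustrative Python (not part of the product); the host name is a placeholder for your own Application Server Computer.

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "riskserver.example.local" is a placeholder, not a real server name.
    if can_reach("riskserver.example.local", 211):
        print("Port 211 is reachable - the socket transport should work.")
    else:
        print("Port 211 is blocked - check firewalls, or consider the HTTP/HTTPS transport.")
```

If the check fails from a client machine but succeeds on the server itself, a firewall between the two is the usual culprit.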
==What is My System Load going to be like?==
You can easily run the entire system including the database, on a single laptop – even in multi-user mode, but it is more likely that you will use one of the following:
1. Low Traffic (single user or 1 to 3 person team): Database + Application server + Web Server + Client on a single desktop, with remaining team members having just the client on their computers. (Note: the system can actually use your desktop Outlook programme as an email "server", so in this case you can configure it to use Outlook for email transport if you wish.)
2. Low to Medium traffic (eg 15 to 30 users): Database + Application server + SurveyEngine + IIS server on one machine and each user on their own computer with a copy of the client.
3. Medium to High traffic ( 20 to 200 RiskManager users) ( 1000 – 20,000 Survey users). Database on a dedicated server, Application Server + SurveyEngine + IIS on a shared dedicated server and each user on their own computer with a copy of the client.
4. High to Heavy traffic (200 to 400 RiskManager users) (15,000 to 40,000+ Survey users) Database on dedicated server, Application Server on Dedicated server, SurveyManager + IIS on dedicated server and each user on their own computer with a copy of the client.
5. Extreme traffic (Unlimited) - Database on dedicated server, Application Server on Multiple Dedicated servers, SurveyManager + IIS on multiple dedicated servers and each user on their own computer with a copy of the client.
As a guide 90% of clients will use either model 1 or model 3 above – and you can always change it later.
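The five sizing models above can be summarised as a rough rule of thumb. The sketch below is illustrative only: the thresholds are taken loosely from the ranges in this section (which overlap at the edges) and are not a product specification.

```python
def deployment_model(risk_users: int) -> int:
    """Pick one of the five deployment models above from an approximate
    RiskManager user count. Thresholds are indicative, not authoritative."""
    if risk_users <= 3:
        return 1   # everything on one desktop
    if risk_users <= 30:
        return 2   # one shared server, clients on each desktop
    if risk_users <= 200:
        return 3   # dedicated database server
    if risk_users <= 400:
        return 4   # dedicated database, application and web servers
    return 5       # multiple dedicated servers
```

Since most sites land on model 1 or 3, and the configuration can be changed later, there is little penalty in starting small.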
==What Terminology Will I Be Using?==
Practically all the labels and strings, risk models, management strategies, etc. in the RiskManager system can be customized by you in configuration screens, or you can start off with the defaults and change them later. Clearly, if you have some idea first you will find it easier to manage user expectations. You do not need to decide this before installing, however.
==What Databases Will I be Using?==
The system will allow you to use as many databases simultaneously as you like. Normally you will run at least three databases:
*Production
*Training
*Test
We suggest you start with at least a production and a training database as you can always add a test database when you are looking at an upgrade.
In larger multi-user sites there will often be production and training databases in the production environment, plus test and training databases in the test environment. Some sites separate the compliance and risk management functions into separate databases, although this is no longer our recommended architecture.
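One simple way to keep the suggested databases straight is a small configuration map from environment name to database. This is an illustrative sketch only; the server instance and database names are placeholders, not values shipped with the product.

```python
# Placeholder environment-to-database map (names are examples only).
DATABASES = {
    "production": {"server": "DBSERVER", "database": "RiskManager_Prod"},
    "training":   {"server": "DBSERVER", "database": "RiskManager_Train"},
    "test":       {"server": "DBSERVER", "database": "RiskManager_Test"},
}

def database_for(environment: str) -> str:
    """Return the database name configured for an environment."""
    return DATABASES[environment]["database"]
```

Keeping the three environments in one explicit map makes it harder to point a training client at production by accident.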
==What if I don’t Know All these Answers Yet?==
It's OK – you can easily change your mind later.
=Backlinks=
*[[BPC RiskManager Software Suite]]
*[[RM625ENT Installation Instructions]]
594bff93b3d69e14176561b04ddfa5cc4234c283
Install your MS Database Server
0
367
562
2019-09-10T13:05:10Z
Bishopj
1
Created page with "=BPC RiskManager Installation Step 1 - Install your MS Database Server= BPC RiskManager V6 (ENrima Edition) supports MSDE 2000, SQL Server 2000, SQL Express 2005 or SQL Serve..."
wikitext
text/x-wiki
=BPC RiskManager Installation Step 1 - Install your MS Database Server=
BPC RiskManager V6 (Enrima Edition) supports MSDE 2000, SQL Server 2000, SQL Express 2005 or SQL Server 2005. BPC RiskManager Express V5 supports Interbase, Oracle 10G+, MSDE 2000, SQL Server 2000, SQL Express 2005 or SQL Server 2005. You will need to install one of these database engines on the Database Server Computer.
The database server '''MUST be installed in SQL Server security (or mixed security) mode'''. That is, the application server and mailmanager components will attempt to access the database server in SQL authentication mode, not Windows integrated security mode. In SQL Express and SQL Server 2005, Windows integrated security mode is the default setting, so you must override this to use mixed mode/SQL security mode (as well as Windows integrated security). If you don't know what this means, just watch the SQL Server install wizard as it steps through the configuration steps – you will see the page with the Windows integrated security option early in the install process.
When you choose mixed mode authentication, you will be asked to provide a password for the systems administration account (username = sa). In SQL Express and SQL 2005 you MUST enter a password. Make sure you remember it as you will need it later.
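The practical consequence of mixed mode is that connection strings use SQL authentication (a Uid/Pwd pair such as the sa account) rather than `Trusted_Connection=yes`. A minimal sketch of building such an ODBC-style connection string follows; the server instance, database name and password are placeholders, not values from the product.

```python
def sql_auth_connection_string(server: str, database: str,
                               user: str, password: str) -> str:
    """Build an ODBC connection string using SQL authentication,
    the mode the RiskManager server components require."""
    return (
        "Driver={SQL Server};"
        f"Server={server};Database={database};"
        f"Uid={user};Pwd={password};"  # SQL auth, NOT Trusted_Connection=yes
    )

# Placeholder values for illustration only.
conn_str = sql_auth_connection_string(
    "DBSERVER\\SQLEXPRESS", "RiskManager", "sa", "YourSaPassword")
```

If you forget the sa password set here, you will have to reset it in SQL Server before the application server can connect.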
With SQL Express (2005), we recommend that you install BOTH the SQL Express 2005 server AND the SQL Server Management Studio. You will need the Management Studio to maintain the databases and possibly to apply updates, as well as for other tasks.
SQL Express 2005 server & management studio can be downloaded from Microsoft’s web site at this address:
http://www.microsoft.com/express/2005/sql/download/default.aspx
Follow the instructions on the web site. You may need to install the .Net framework Version 2 and the MSXML6 components first (although most current machines have these components already).
The two SQL components you definitely need are:
* Install Microsoft SQL Server 2005 Express Edition
* SQL Server Management Studio Express
=BackLinks=
*[[RM625ENT Installation Instructions]]
e955a4275ef1eeda49526bb65b393a67e8b87ae5
Uninstall prior BPC RiskManager version
0
368
563
2019-09-10T13:06:25Z
Bishopj
1
Created page with "=BPC RiskManager Installation Step 3 - <br>Uninstall prior version of Risk Manager Enterprise= If you used the installer to put your previous version of BPC RiskManager on..."
wikitext
text/x-wiki
=BPC RiskManager Installation Step 3 - <br>Uninstall prior version of Risk Manager Enterprise=
If you used the installer to put your previous version of BPC RiskManager on your computer, you can uninstall by running the uninstaller from "Add or Remove Programs", available on the control panel.
Strictly speaking, uninstallation is not required, as multiple versions of BPC RiskManager can coexist on your computers. You might want to consider the following factors in deciding whether to uninstall a previous version:
<ol>
<li> Unless you have a specific reason for preserving earlier versions, uninstallation is the cleanest and safest approach. Each version of BPC RiskManager will install into a different directory.
<br>
<li> If you are upgrading from BPC RiskManager V5 or BPC RiskManager Express, you should probably keep your earlier version on the application server until you are satisfied that BPC RiskManager V6+ is working correctly with your data. BPC RiskManager Express, V5 and V6 can all operate and exist side by side.
<br>
<li> If you are upgrading from BPC RiskManager V6.0 through V6.18, it is probably easier to uninstall, as the new RM dataserver will replace the previous version (all other components will coexist, however). The Midas libraries have been upgraded, which may cause intermittent failure of earlier releases.
<br>
<li> If you are upgrading from BPC RiskManager V6.18 through V6.2.5.6 (or later), the automated installer will take care of everything for you. If you want to uninstall earlier versions, you can.
</ol>
=BackLinks=
*[[RM625ENT Installation Instructions]]
9940a832d89410f72700bc2d835437a1ea2919d0
Install MS IIS
0
369
564
2019-09-10T13:07:28Z
Bishopj
1
Created page with "=Installing BPC RiskManager V6 - OPTIONAL IIS Installation= Note: Read the next step before executing the IIS install. Some components of BPC RiskManager make use of a Micr..."
wikitext
text/x-wiki
=Installing BPC RiskManager V6 - OPTIONAL IIS Installation=
Note: Read the next step before executing the IIS install.
Some components of BPC RiskManager make use of a Microsoft IIS web server. It is recommended that you install IIS 6+ if available and not already installed, or that you have IIS available on your network. XP uses IIS 5, which is also fine, but IIS 6+ gives better worker process control.
Note: the SurveyManager component will work remotely from any IIS server, but if you intend to use the HTTP/HTTPS connection component instead of the raw TCP/IP socket connection component you MUST have IIS installed on each Application Server Computer.
To install IIS (you may need to have your Windows installation disks handy):
The IIS component is located in slightly different places across the versions of Windows, and the configuration selections differ slightly (though not in outcome).
Select the operating system with which you are working from the list:
*[[Install MS IIS On Windows XP|On Windows XP]]
*[[Install MS IIS on Windows Vista|On Windows Vista]]
*[[Install MS IIS on Windows 2003 - 2008|On Windows 2003/2008]]
=BackLinks=
*[[RM625ENT Installation Instructions|BPC RiskManager V6.x Installation Instructions]]
6ec45b2fc542f98787ba786c4e94417baab83f1c
Install MS IIS On Windows XP
0
370
565
2019-09-10T13:08:47Z
Bishopj
1
Created page with "=Installing on Windows XP= * Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Wind..."
wikitext
text/x-wiki
=Installing on Windows XP=
* Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Windows Components”:
[[image:AddRemWinComp.png]]
* In XP you will find the IIS Components here:
[[image:IIS_Install1_XP.png]]
* Click on the Details button and tick the components you want. We have selected all of them. Note the SMTP service: this is the simple email transport, which you may wish to use if you do not have an email server easily accessible. (See the next section.)
[[image:IIS_Install2_XP.png]]
* Select “OK” to run the installation.
=BackLinks=
*[[Install MS IIS]]
e33d724db71726d02ddec6d6b8c53d5802f39ce8
Install MS IIS on Windows Vista
0
371
566
2019-09-10T13:09:54Z
Bishopj
1
Created page with "=Installing IIS on Vista= * Navigate to “Control Panel” and open “Programs and Features” from the folder list. * From the left hand panel choose “Turn Windows Feat..."
wikitext
text/x-wiki
=Installing IIS on Vista=
* Navigate to “Control Panel” and open “Programs and Features” from the folder list.
* From the left hand panel choose “Turn Windows Features On or Off”
[[image:IIS_OnVista1.png]]
* The “Windows Features” window will open. Expand the “Internet Information Services” tree and ensure that at least the following options are ticked: Select OK until you are back at the desktop.
[[image:IIS_OnVista2.png]]
=BackLinks=
*[[Install MS IIS]]
1d06dd86d0e30d65cef3c778b970d3e044e9cd22
Install MS IIS on Windows 2003 - 2008
0
372
567
2019-09-10T13:11:44Z
Bishopj
1
Created page with "=Installing on Windows XP= * Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Wind..."
wikitext
text/x-wiki
=Installing on Windows 2003/2008=
* Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Windows Components”:
[[image:AddRemWinComp.png]]
* In W2003 you will find the IIS Components here:
[[image:IIS_On_W2003_1.png]]
* Click on Details and tick the components you want. We have selected all of them here, BUT if unsure you should read the Windows 2003 IIS documentation to understand the implications of the various capabilities. We need only a tiny portion of the IIS capabilities for RiskManager, but the decision as to which sub-components to install depends on how you will augment your risk system with external components on this server. For example, FTP would be needed only if you were allowing multiple users to upload/exchange documents such as procedure manuals via the web server across the internet, but there are other, possibly better ways to achieve this. Note the SMTP service: this is the simple email transport, which you may wish to use if you do not have an email server easily accessible. Risk Manager makes use of SMTP for sending emails. (See the next section.)
[[image:IIS_On_W2003_2.png]]
* Select “OK” to run the installation.
=BackLinks=
*[[Install MS IIS]]
71c98a05996bcd24529239761ec63ff3f4d40d79
Install MS SMTP Services
0
373
568
2019-09-10T13:16:02Z
Bishopj
1
Created page with "If you do not have a separate email server that will accept relaying and will be using the Emailed reminders you should install the SMTP services. It is easiest to do this in..."
wikitext
text/x-wiki
If you do not have a separate email server that will accept relaying and you will be using the emailed reminders, you should install the SMTP service. It is easiest to do this in the previous step.
If you did not do that, we will work through the installation here.
*[[SMTP_On_XP|Installing The SMTP Server on Windows XP]]
*[[SMTP_On_W2003|Installing The SMTP Server on Windows 2003]]
*[[SMTP_On_Vista|Installing The SMTP Server on Windows Vista]]
=BackLinks=
*[[RM625ENT Installation Instructions|BPC RiskManager V6.x Installation Instructions]]
8425188908c08d71afc4b9e4582abccacb7d883e
SMTP On XP
0
374
569
2019-09-10T13:16:59Z
Bishopj
1
Created page with "* Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Windows Components”: image:A..."
wikitext
text/x-wiki
* Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Windows Components”:
[[image:AddRemWinComp.png]]
* In XP you will find the IIS Components including the SMTP service here:
[[image:IIS_Install1_XP.png]]
* Click on the details and tick SMTP service.
[[image:IIS_SMTP_Install.png]]
* Select “OK” to run the installation. (Instructions for configuring the SMTP service are continued in the next section.)
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
f5e57f9087ac4822869cb865588801c9102ebc8d
SMTP On W2003
0
375
570
2019-09-10T13:17:56Z
Bishopj
1
Created page with "* Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Windows Components”: image:A..."
wikitext
text/x-wiki
* Open the Windows control panel (eg. Start / Settings / “Control Panel”) and select “Add or Remove Programs” and then “Add/Remove Windows Components”:
[[image:AddRemWinComp.png]]
* In W2003 you will find the IIS Components here:
[[image:IIS_On_W2003_1.png]]
* Click on the details and tick SMTP service.
[[image:IIS_On_W2003_SMTP.png]]
* Select “OK” to run the installation.
=BackLinks=
*[[Install MS SMTP Services]]
4566742c472fc082d20670b038dfed82a046ae8f
SMTP On Vista
0
376
571
2019-09-10T13:19:20Z
Bishopj
1
Created page with "There is no SMTP service shipped with Vista. You will need to either use Exchange or an SMTP service on another computer in your network or install a third-part SMTP service...."
wikitext
text/x-wiki
There is no SMTP service shipped with Vista. You will need to either use Exchange or an SMTP service on another computer in your network, or install a third-party SMTP service. If you require assistance procuring an SMTP service please contact Bishop Phillips Consulting directly.
=BackLinks=
*[[Install MS SMTP Services]]
0b0b63f75808c49f250d107b22c7d69aeb610f38
Configure MS SMTP Services
0
377
572
2019-09-10T13:21:05Z
Bishopj
1
Created page with "If you have a separate email server on your network that will accept relay requests from the Application Server Computer, then you can skip this step. If you have installe..."
wikitext
text/x-wiki
If you have a separate email server on your network that will accept relay requests from the Application Server Computer, then you can skip this step.
If you have installed the SMTP server on the Email Server Computer as above (which may also be the Web Server (IIS) computer and the Application Server Computer) you should configure the service to prevent it from being used as an unsolicited (open) relay.
'''''In W2003 & XP''''':
a. Right click on “My Computer” and choose “Manage”.
b. In the list of components on the screen expand “Services and Applications” and “Internet Information Services”
c. Right click on “Default SMTP Virtual Server” and select properties.
[[Image:MS_SMTP_CFG1.png]]
d. On the “Messages” tab enter a valid email address at which you can receive non-delivery reports from the SMTP server. This will ensure you are alerted when the SMTP server fails to deliver email on behalf of the RiskManager application. You may need to change the other message control fields to suit the size of your organization (or un-check the boxes). It is wise to have some limits, but if you will have a few thousand employees likely to receive email reminders or survey invitations in a single batch you may need to raise the number of messages per connection substantially. RiskManager is unlikely to have more than a few recipients in any given message (depending on how you set the CC lists, so 20 is probably more than enough).
[[Image:MS_SMTP_CFG2.png]]
e. Select the “Access” tab and choose “Authentication”. Tick the “Anonymous Access” button. You could also choose “Basic Authentication” (or Integrated), but you would have to add the email service account on the Security tab, and given the other settings we are about to make it is likely unnecessary. When you are finished select the “OK” button.
[[Image:MS_SMTP_CFG3.png]]
f. On the “Access” tab select “Connection”. Add the IP address or domain name of the Application Server and Web Server Computers to the list of valid computers that can access the SMTP server. The setting shown assumes the SMTP server is on the same physical computer as the Application Server and Web Server Computer; 127.0.0.1 is the address you should use if that is also your setup. Select OK.
[[Image:MS_SMTP_CFG4.png]]
g. On the “Access” tab choose “Relay”. In the Relay settings window select “Only the List Below” and add the IP address or domain name of the Application Server and Web Server Computers to the list of computers allowed to relay through the SMTP server. The setting shown assumes the SMTP server is on the same physical computer as the Application Server and Web Server Computer; 127.0.0.1 is the address you should use if that is also your setup. Uncheck the “Allow computers which successfully authenticate to relay” check box UNLESS you set the access control to require authentication in step e, above. Select OK.
[[Image:MS_SMTP_CFG5.png]]
h. Select Apply and OK.
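Once the relay is configured, it is worth verifying that the SMTP service actually accepts submissions from the application server. A minimal Python sketch using the standard smtplib module is shown below; the sender and recipient addresses are placeholders, not values from the product, and the service must be running on 127.0.0.1 for the send to succeed.

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender: str, recipient: str) -> EmailMessage:
    """Build a minimal test message for checking the SMTP relay."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "RiskManager SMTP relay test"
    msg.set_content("If you received this, the SMTP relay accepts local submissions.")
    return msg

def send_via_relay(msg: EmailMessage, host: str = "127.0.0.1", port: int = 25) -> None:
    """Submit the message through the SMTP service on the given host.

    With the connection/relay lists restricted to 127.0.0.1 as above, this
    should succeed from the application server and be refused elsewhere.
    """
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.send_message(msg)

# Example (requires the SMTP service to be running locally):
# send_via_relay(build_test_message("riskmanager@example.com", "admin@example.com"))
```

If the send is refused, re-check the Connection and Relay lists from steps f and g.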
=BackLinks=
*[[RM625ENT Installation Instructions|BPC RiskManager V6.x Installation Instructions]]
efd8e58ad9b6088f2f48140be1803e7adf94944f
Install RiskManager From CD or Downloaded Installer
0
378
573
2019-09-10T13:22:38Z
Bishopj
1
Created page with "=Introduction= The installation of Risk Manager Enterprise registers certain program components and sets up a programme menu on the machine on which it is run, but it is simp..."
wikitext
text/x-wiki
=Introduction=
The installation of RiskManager Enterprise registers certain program components and sets up a program menu on the machine on which it is run, but the few key steps are simple to perform manually, the folder structure is not critical to the installation and, finally, most components are self-registering. You can therefore install RiskManager directly onto your application network server, or install it to a local PC and then progressively move program components during the installation.
We recommend, however (to make your life easier), that you install BPC RiskManager on the target application server (or, if you are using multiple application servers, the machine you will notionally treat as the primary “home” of the application server).
In earlier versions of BPC RiskManager the Enterprise installer did less work than in this release. With the advent of Vista it became necessary for the installer application to perform most of the administration-level tasks, so in this release we have assumed the most common scenario.
To install in all versions of windows, simply run the “Setup.exe”. It will create a BishopPhillips menu in the Programs list that appears when you select the start button and a group of files in the “Program Files” directory that are the application files.
If you are installing under Vista or Windows 7, a number of the subsequent steps may require you to run an application “as administrator”. If the program in the instruction fails to run, right click on its icon in the folder and choose the “Run as administrator” option. This only applies to setup and OS components. During normal usage the RiskManager client will always execute under normal user rights. The application server in configuration mode should normally be run “as administrator” from the Start menu or desktop icon, by right-clicking and selecting “Run as administrator” rather than using the normal single or double left click.
The installer run in Full/Complete mode will leave you with a functioning BPC RiskManager installation and will open the client to verify it. In a single user environment this will generally be the end of the installation, but in a networked, enterprise or advanced single user configuration there are many more capabilities to be enabled after the installer completes. These are covered in other sections.
=Installers=
There are essentially 3 installers:
# Single user/desktop installer: installs or upgrades the entire system on a single computer.
# Enterprise installer: installs or upgrades the entire system on a single application server, or different components on separate machines.
# Client installer(s): contained within the Enterprise installation is a separate installer, delivered as both a single exe and an MSI file, that installs just the desktop client application for distribution. All versions also include the ActiveX client component for optional browser-based client distribution and installation.
==The Single User Installer==
Run this in Full/Complete or repair mode.
==Enterprise Installer==
There are multiple modes available after starting the installer. Normally you will run this in Full/Complete.
# Full/Complete: installs all components onto the application server, sets up local or remote databases and local services, optionally upgrades existing databases, optionally imports backup databases, installs web components on the local IIS server and tests the installation.
# Databases: installs just the database components. Use only on a database server, and only if Full/Complete is not used.
# Minimum: just copies the components into the Program Files directory on the server and registers the libraries, allowing a full manual installation as documented in this wiki. Only advised in advanced scenarios. Full/Complete mode also copies all the components used in Minimum and completes the install; it does not preclude manual configuration as well.
# Client: just installs the client components on a target computer. It is probably easier to use one of the supplied client installers copied to your server during the Full/Complete and Minimum install modes; those versions present only the client install option.
=BackLinks=
*[[RM625ENT Installation Instructions|BPC RiskManager V6.x Installation Instructions]]
04bbff49e4e99e860b01e90bf80ceff7fa1e606b
Install Database Connectivity Tools
0
379
574
2019-09-10T13:23:54Z
Bishopj
1
Created page with "=SQL Server or Oracle= ''Note: Database connectivity tools are ONLY required on the application server and the web server (if that is a different computer). Client computers..."
wikitext
text/x-wiki
=SQL Server or Oracle=
''Note: Database connectivity tools are ONLY required on the application server and the web server (if that is a different computer). Client computers do NOT require database connectivity tools.''
BPC Risk Manager uses ActiveX Data Objects (ADO) / MDAC to access data stored in a SQL database. The tools for this connectivity method are required on the servers.
Whether you are using SQL Server (any version) or Oracle (any version) you must have the database connectivity tools installed on the application server and the web server. Most Windows operating systems include the correct versions by default; some older MS Windows versions require an update. If your MDAC/ADO version is later than those listed here, that is fine. We are only concerned that the correct MINIMUM version is on the relevant computer.
==Windows 2000 or NT==
If you are using Windows 2000 or earlier as your Application Server Computer OS…
* Install MDAC version 2.7 or MDAC 2.8 (or later) from the CD
Install MDAC version 2.7 or 2.8 from CD by running the install file: [RMInstallDir]\DatabaseTools\MDAC\MDAC 2_7\MDAC_TYP.EXE
(Note: your CD may have MDAC 2.8 instead; look in [RMInstallDir]\DatabaseTools\MDAC\MDAC 2_8\ )
Optional: If you would like to know what version of MDAC you are running before performing this installation then run the MDAC component checker program first. Extract the MDAC component checker program from file ‘cc.exe’ available in folder ‘\Database Tools\MDAC\Component Checker’.
==Windows XP, Vista, Windows 2003, Windows 2008==
Windows XP, Vista, Server 2003 and later include the correct minimum MDAC version by default, so on those systems this step can be skipped.
=Oracle=
If you are using an Oracle Database & BPC RiskManager Express (V5)…
* Install Database Client Tools (ORACLE VERSION ONLY)
(This step applies ONLY to BPC RiskManager V5 and BPC RiskManager Express)
A separate installation of database client tools is typically only required when the database is housed on a separate network server. The risk database can be installed on the same network server as the Risk Manager server application or on another network server. The Risk Manager Data Server and Risk Mail Manager applications require database client tools in order to access the risk database(s).
Oracle sites installing database client tools need to select ‘Oracle Programmer’ and ‘Oracle Windows Interfaces’ components in order to install the necessary Oracle ADO database provider: ‘Oracle Provider for OLE DB’.
NOTE: You can skip the Oracle Tools installation if you are not using an Oracle Database Server.
=BackLinks=
*[[RM625ENT Installation Instructions|BPC RiskManager V6.x Installation Instructions]]
fd4e98f3875f7fd670cacc0fd7aba9c32f24e0c1
Install or Upgrade the Risk Database
0
380
575
2019-09-10T13:25:16Z
Bishopj
1
Created page with "=Upgrade Database or First-Time Install= Select the link appropriate to your situation: * [[Upgrade BPC RiskManager Database Installation]] (i.e. you are an existing BPC Ris..."
wikitext
text/x-wiki
=Upgrade Database or First-Time Install=
Select the link appropriate to your situation:
* [[Upgrade BPC RiskManager Database Installation]] (i.e. you are an existing BPC RiskManager user and you want to upgrade your existing databases from an earlier version).
* [[First Time BPC RiskManager Database Installation]] (i.e. you do not have, or do not wish to preserve, an existing BPC RiskManager database).
=BackLinks=
*[[RM625ENT Installation Instructions|BPC RiskManager V6.x Installation Instructions]]
11d8bd1886141bf16a8d402688e2a96e4db73529
Upgrade BPC RiskManager Database Installation
0
381
576
2019-09-10T13:26:02Z
Bishopj
1
Created page with "=Introduction= Four options are offered for upgrading the database. The easiest and safest is to either let the installer do it for you (option 1.) or get us to do it for yo..."
wikitext
text/x-wiki
=Introduction=
Four options are offered for upgrading the database. The easiest and safest is to either let the installer do it for you (option 1) or get us to do it for you (option 3). Alternatively, with a small amount of skill and a little certainty as to which database version you are running, you can run the provided upgrade scripts. Upgrade scripts for the most common BPC RiskManager versions are provided in your installation folder. Note there are versions for SQL 2000 and SQL 2005 (and above). You must run each script for the appropriate server, in the numbered order, on EACH RM database. Script sets usually include structure and data update groups. You are also welcome to contact Bishop Phillips Consulting directly for any unusual scenarios.
==OPTION 1:==
The installer will do it for you. For every version after 6.18 the installer will allow you to nominate database(s) to be upgraded and the last RM application version to which each database applied and perform the database upgrade for you. This is by far the easiest option.
==OPTION 2:==
Database scripts are used to upgrade a risk database. Database scripts MUST BE RUN IN THE NUMBER ORDER PROVIDED IN THE SCRIPT DIRECTORY as each script applies the updates from one version to the next only. All database scripts can be run without the need for customisation (changes to the script files). If you do not want to run a standard order of installation please contact Bishop Phillips Consulting before modifying scripts.
To upgrade a risk database from a prior version please run the database upgrade scripts provided. Database upgrade scripts must be run in the correct order from lowest version number to the highest available. Each upgrade script will progressively upgrade the database structure to meet the requirements of each minor upgrade release of Risk Manager.
In some cases we will provide scripts to go directly from your database version to the target version.
In all cases a script's source and target database versions are identified in its name, e.g. "1_BPCRMUPDT_StructV618_to_625.sql" or similar. The left hand number is the "from" database version, while the right hand number is the "to" database version. The version numbers are the database build version numbers and may not exactly match the client version numbers.
To determine what data structure version of RiskManager you are running, run the database query:
"select sys_val from system_config where (module='RM') and (context='SYSTEM') and (keyname='DATA STRUCT VERSION');"
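Given the filename convention described above (run-order prefix, then "from" and "to" versions), a small helper can sanity-check the script order before you execute anything. This is a hypothetical helper for illustration, not part of the product, and it assumes names follow the documented "N_..._VFFF_to_TTT.sql" pattern.

```python
import re
from typing import List, Tuple

# Matches e.g. "1_BPCRMUPDT_StructV618_to_625.sql":
# group 1 = run order, group 2 = "from" version, group 3 = "to" version.
SCRIPT_RE = re.compile(r"^(\d+)_.*V(\d+)_to_(\d+)\.sql$")

def ordered_upgrade_scripts(filenames: List[str]) -> List[Tuple[int, str, str, str]]:
    """Return (run_order, from_version, to_version, filename), sorted by run order.

    Filenames that do not match the documented pattern are skipped, so check
    the result covers every script in the directory before running anything.
    """
    parsed = []
    for name in filenames:
        m = SCRIPT_RE.match(name)
        if m:
            parsed.append((int(m.group(1)), m.group(2), m.group(3), name))
    return sorted(parsed)
```

Running this over the script directory listing lets you confirm each script's "from" version chains onto the previous script's "to" version before touching the database.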
==OPTION 3:==
If you have any doubt about what you need to do to upgrade a database, you can back up one of your databases, compress the backup into zip format and send it to Bishop Phillips Consulting (or arrange for us to download it from your server). We will generate a custom update script, convert your database (checking it for errors at the same time), and provide it back to you via a secure download – all for a small nominal fee.
==OPTION 4:==
If you have a database comparator tool available such as SQLDelta and you know how to use it, you can install the new supplied default database on a server and run the database comparator between the old database and the new one. You should accept 100% of the structure changes. There may also be data changes in the new release, and these are best handled by running the "data update" sql script, or if you wish to use the comparator for that as well you should request a current list of tables for data comparison from BPC.
Option 4 requires a very powerful comparator tool (covering tables, views, procedures, triggers, indexes, user defined datatypes, constraints, and referential integrity rules), and an advanced understanding of the comparator software, the RiskManager database, and SQL. It is NOT the recommended option and should be done in consultation with a BPC technical support officer. Remember to back up your database(s) first.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
8e3a175088ba99a90e4651e879d83b2acafbff1d
First Time BPC RiskManager Database Installation
0
382
577
2019-09-10T13:27:29Z
Bishopj
1
Created page with " *[[First Time SQL Server Installation]] *[[First Time Oracle Installation]] =BackLinks= {{#dpl: linksto={{FULLPAGENAME}} }}"
wikitext
text/x-wiki
*[[First Time SQL Server Installation]]
*[[First Time Oracle Installation]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
6e5973f89616a8e4b17649eb50fd88a1acf60763
First Time SQL Server Installation
0
383
578
2019-09-10T13:28:15Z
Bishopj
1
Created page with "* [[Instaling BPC RiskManager Database on SQL Server 2005 or SQL Express]] * [[Instaling BPC RiskManager Database on SQL Server 2000]] =BackLinks= {{#dpl: linksto={{FULLPAG..."
wikitext
text/x-wiki
* [[Instaling BPC RiskManager Database on SQL Server 2005 or SQL Express]]
* [[Instaling BPC RiskManager Database on SQL Server 2000]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
12bc31346ad676c6bfb0976b29fdfe3d3a84bcc8
Instaling BPC RiskManager Database on SQL Server 2005 or SQL Express
0
384
579
2019-09-10T13:29:01Z
Bishopj
1
Created page with "# [[Make a server login id (BPC RM on SQL2005)]] # [[Make the database (BPC RM on SQL2005)]] # [[Restore the database access IDs (BPC RM on SQL2005)]] # Set up the initial u..."
wikitext
text/x-wiki
# [[Make a server login id (BPC RM on SQL2005)]]
# [[Make the database (BPC RM on SQL2005)]]
# [[Restore the database access IDs (BPC RM on SQL2005)]]
# [[Set up the initial user IDs]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
9b626d0e5fea8827599e09d87142a8ff5886d4ce
Make a server login id (BPC RM on SQL2005)
0
385
580
2019-09-10T13:29:51Z
Bishopj
1
Created page with "=Introduction= BPC RiskManager is a highly secure environment, so security setup of accounts is necessarily a little more involved than just starting up the database and the..."
wikitext
text/x-wiki
=Introduction=
BPC RiskManager is a highly secure environment, so security setup of accounts is necessarily a little more involved than just starting up the database and the application.
You have four options for application server login:
# Use sa (SQL Server builtin systems administrator account)
# Use the builtin riskmanuser user account (BPC RiskManager builtin master access account)
# Use an account of your own choosing with administration rights.
# Use an account of your own choosing without administration rights.
We recommend option 1, 2 or 3, as these make support and configuration slightly easier, and options 1 and 2 are already set up for you. The easiest is to use ‘sa’ to access the database from the application server – if you are doing this you can skip the rest of this step, BUT the username and password will be stored in the registry on the application server. In a similar vein, you can create another account with systems administration rights (option 3), with the same drawback as using “sa” and the added burden of having to create the account in the first place. The advantage of using a systems administration level account is that you do not need to do anything about access rights for the database itself.
The generally preferred approach is option 2, using the built in user access account (or similar) with more restricted rights than ‘sa’. The rest of this step assumes you are using riskmanuser as the database login account. As the client components never access the database directly, the database access account is only used by the application server and the database never needs to be surfaced to any computer other than the application server, and the surveymanager host.
The databases ship with the “riskmanuser” and “mailmanager” user ids already created (the actual accounts may vary in your version - refer to the documentation shipped with your application) so if you use those ids you will find future administration easier. These accounts have highly restricted rights (less than a normal user) and are therefore the preferred option.
=Creating a Database Login Account in SQL Server 2005 & Express=
<ol>
<li> Open Management Studio (SQL 2005/SQL Express) or Enterprise Manager (SQL 2000)
<li> Expand the folder corresponding to the name of your computer
<li> Right click on the “Security” folder and choose “New Login”
<br>
<br>
[[Image:SQLStudMan_NewLogin1.png]]
<br>
<br>
<li> Select “SQL Server Authentication”
<li> Enter “riskmanuser” in the login name box
<li> Enter your desired password and confirm the password
<li> Write the password down somewhere handy as you will need it again soon.
<li> Un-check “Enforce password expiration”
<li> Un-check “User Must Change Password at Next Login”
<br>
<br>
[[Image:SQLStudMan_NewLogin2.png]]
<br>
<br>
<li> Select “OK”
</ol>
Repeat the process for the MailManager account if you will be using a separate restricted mail account for the mailing system (optional)
NOTE: Do not assign any additional roles or rights to the account at this stage.
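For administrators who prefer a script to the Management Studio walkthrough above, the same login can be created in T-SQL. The sketch below generates the statement; `CHECK_EXPIRATION = OFF` mirrors un-checking “Enforce password expiration”, and omitting `MUST_CHANGE` means the password is not forced to change at next login. The `CHECK_POLICY = OFF` setting is an assumption for lab setups, not something the walkthrough specifies.

```python
def create_login_sql(login: str, password: str) -> str:
    """Build the T-SQL equivalent of the New Login dialog steps above:
    SQL Server authentication, no password expiration, no forced change."""
    escaped = password.replace("'", "''")  # escape single quotes for a T-SQL string literal
    return (
        f"CREATE LOGIN [{login}] WITH PASSWORD = N'{escaped}', "
        f"CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;"
    )

# Example: print(create_login_sql("riskmanuser", "your-password-here"))
```

Run the generated statement in a Management Studio query window against the master database; as above, do not assign any additional roles or rights at this stage.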
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
6ef99149497036bf229b13e9e6899aa153bb2260
Make the database (BPC RM on SQL2005)
0
386
581
2019-09-10T13:34:00Z
Bishopj
1
Created page with "=Introduction= Two options are available for creating a new risk database. The first option is easiest (BUT NOT PREFERRED) for users who do not have access to SQL Server too..."
wikitext
text/x-wiki
=Introduction=
Two options are available for creating a new risk database. The first option is easiest (BUT NOT PREFERRED) for users who do not have access to SQL Server tools (e.g. Enterprise Manager or SQL Studio); this generally applies only to users of MSDE 2000. The second option is the safest and therefore preferred, but requires access to the Enterprise Manager (SQL 2000) or Database Management Studio (2005/Express) shipped with the database software. Detaching and reattaching Microsoft databases on different computers is not recommended by Microsoft.
The following instructions assume the default drive and directories are used for database files. You may substitute your own locations but must edit the supplied sql files accordingly.
=OPTION 1 – Attach Database (NOT PREFERRED in Enterprise)=
This is the best method for MSDE 2000 and single user installs.
For MS SQL Server 2005/MS SQL Express 2005
* Attach database MDF file provided
** Copy file [RMInstallDir]\Database\MDFToAttach\RiskManDB_Data.MDF to folder: 'C:\Program Files\Microsoft SQL Server\MSSQL.1\Data\'
** Run batch file: [RMInstallDir]\Database\MDFToAttach\AttachRiskMDFFile2005.bat
Note: Please edit the SQL file (AttachRiskMDFFile2005.sql) if you copy the file to a different location. A new SQL Server log file is automatically created.
Note: If attaching the database manually (i.e. not with the provided scripts), we suggest that you flag the owner of the database as ‘sa’.
=OPTION 2 – Create & Restore Database (PREFERRED)=
* Database can be restored from SQL Server backup file. Follow these steps:
==Create database ‘RiskManDB’ in SQL Server (2005+)==
* It is a good idea to create a couple of databases, e.g. a Training database, a Production (main) database and possibly a Testing database. You can have as many databases as you like in RiskManager.
<ol>
<li> Right click on the “DataBases” folder and choose “New Database” from the properties.
<br>
<br>
[[Image:SQLStudMan_NewDB1.png]]
<br>
<br>
<li> Enter a database name that makes sense to you. We recommend that you adopt a sensible, consistent naming convention for your databases to make management easier later. We suggest you start it with “RiskManDB” ending with a character string that identifies the database. E.G. “RiskManDB_Train08”
<br>
<br>
[[Image:SQLStudMan_NewDB2.png]]
<br>
<br>
<li> Select OK to generate the new database
<br>
<br>
</ol>
==Restore the backup file==
* The backup file is held in [RMInstallDir]\Database\BackupToRestore\2005\RiskManDB2005.bak. We must force the restore over the existing database file and fix the file locations.
<ol>
<li> In windows explorer, navigate to the supplied backup master directory:
[RMInstallDir] \Database\BackupToRestore\2005\
<li> Either double click on the supplied batch file “CopyMasterToDefaultBackup2005.bat” or manually copy the file:
<br>
<br>
[RMInstallDir] \Database\BackupToRestore\2005\RiskManDB2005.bak to the backup directory<br>
(DO NOT RESTORE DIRECTLY FROM THE SUPPLIED FILE).
<br>
<br>
The default SQL 2005 backup directory (and used by the batch file) is:
“C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\BACKUP”
<br>
<br>
THE BATCH FILE IS ONLY APPROPRIATE IF THE DATABASE SERVER IS ON THE SAME COMPUTER
<br>
<br>
<li> In Studio Manager Expand the database list for the target server.
<li> Right click on the database you wish to restore (in this case it is the database you just created)
<li> From the Menu that appears choose “Tasks” then “Restore” then “Database…”
<li> The Restore Database window will open. On that window the database name should already be displayed in the “To Database” field. Select “From device” and click on the ellipsis button on the right hand side.
<br>
<br>
[[Image:SQLStudMan_RestoreDV2.png]]
<br>
<br>
<li> In the backup selection window, select file from the drop box and select “Add”
<br>
<br>
[[Image:SQLStudMan_RestoreDV3.png]]
<br>
<br>
<li> Select the backup file we copied into the backup directory and press “OK” and “OK” again.
<br>
<br>
[[Image:SQLStudMan_RestoreDV4.png]]
<br>
<br>
<li> On the “Restore Database” window tick the “Restore” check box, then select the entire database name in the “To Database” field and copy it (Ctrl-C), and then choose “Options”.
<br>
<br>
[[Image:SQLStudMan_RestoreDV5.png]]
<br>
<br>
<li> On the options panel tick the “Overwrite the existing database” check box.
<li> Now either use the ellipsis buttons to navigate to the correct file name (which will match the database name on the previous screen) and the correct log file (which will match the database name on the previous screen with _log at the end), OR simply replace the file name portion of the file path on each line with the string you copied from the previous page.
<br>
<br>
[[Image:SQLStudMan_RestoreDV6.png]]
<br>
<br>
<li> Then select OK to start the restore.
<br>
</ol>
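The restore steps above can also be expressed as a single T-SQL statement. The sketch below builds it; note that the logical file names ('RiskManDB_Data', 'RiskManDB_Log') and the data directory are assumptions for illustration — verify the actual logical names with RESTORE FILELISTONLY against your backup file before running anything.

```python
def restore_database_sql(db_name: str, backup_path: str, data_dir: str) -> str:
    """Build a T-SQL RESTORE sketch of the steps above: restore the supplied
    backup over the newly created database (WITH REPLACE) and relocate the
    data/log files so they match the new database name (the MOVE clauses)."""
    return (
        f"RESTORE DATABASE [{db_name}]\n"
        f"  FROM DISK = N'{backup_path}'\n"
        f"  WITH REPLACE,\n"
        f"    MOVE N'RiskManDB_Data' TO N'{data_dir}\\{db_name}.mdf',\n"
        f"    MOVE N'RiskManDB_Log' TO N'{data_dir}\\{db_name}_log.ldf';"
    )

# Example:
# print(restore_database_sql(
#     "RiskManDB_Train08",
#     r"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\BACKUP\RiskManDB2005.bak",
#     r"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data"))
```

`WITH REPLACE` corresponds to ticking “Overwrite the existing database”, and the MOVE clauses correspond to editing the file paths on the Options panel.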
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
78c32a6730488a463e088e775061710567bc6fd6
Restore the database access IDs (BPC RM on SQL2005)
0
387
582
2019-09-10T13:35:13Z
Bishopj
1
Created page with "=Introduction= If you are using userid SA to connect to your database you can ignore this step. The databases ship with the user ids already installed, but when an MS SQL d..."
wikitext
text/x-wiki
=Introduction=
If you are using userid SA to connect to your database you can ignore this step.
The databases ship with the user ids already installed, but when an MS SQL database is moved from one server to another the internal GUID encoding of the user ids may differ on the destination server, and you may find that you cannot connect with the riskmanuser account even though it seems to be present. You can either re-create the ids or run the provided SQL scripts to repair them.
BPC RiskManager supports connecting to many databases at once, so it is not unusual to find that you want to move a database from one server to another, or to duplicate a particular database across unlinked servers. You should do this by either:
# Using the builtin data transfer system of SQL Server, or
# Backing up and restoring, and then following the steps in OPTION 2, as your riskmanuser id may already exist on the target recovery server.
Note also that if you are going to use more than one RiskManager database at once on the same database server, you will have to use the backup and restore (or equivalent duplication) method to install the database, rather than attaching, because the server will think your second attempt to attach a copy of the same database is trying to reuse the data files of the first, and will refuse to attach it.
=OPTION 1 (If you performed Step 1 as instructed)=
* The relevant scripts can be found in:<br>
[RMInstallDir]\Database\Scripts\2005\
Steps to reconnect the user IDs for a restored/recovered/attached database:
<ol>
<li> In SQL Server Management Studio, navigate to the database name under the databases folder.
<li> Right click on the database in the tree view and choose ‘new query’ on the database and copy the provided script: “updateLoginRMU.sql” into the query window and run it by selecting “Execute”.
<br>
<br>
[[Image:SQLStudMan_AssignUserRights0.png]]
<br>
<br>
<li> Right click on the database in the tree view and choose ‘new query’ on the database and copy the provided script “fix_executerights_on_loginRole2005.sql” into the query window and run it by selecting “Execute”.
''The first script attempts to connect the database’s version of riskmanuser with the server’s version of the same user id. The second ensures that the RiskManRole has execute access to our stored procedures in the database.''
<li> Expand the Security folder for the server (not the database!!) and expand the “logins” folder. Right click on the “riskmanuser” and choose properties.
<br>
<br>
[[Image:SQLStudMan_AssignUserRights1.png]]
<br>
<br>
<li> Select “User Mapping” from the left hand list and tick the restored database in the top panel. In the panel below that, verify that the database roles public, ‘RiskManRole’, db_datareader, and db_datawriter have been ticked for the id. If not, tick them to grant these roles. Select OK. If you still can’t connect the server’s riskmanuser id to the database, delete it from the database level and follow OPTION 2.
<br>
<br>
[[Image:SQLStudMan_AssignUserRights2.png]]
<br>
<br>
</ol>
=OPTION 2 (If something went wrong)=
Steps to create the user IDs:
# Create the login ‘riskmanuser’ and choose an appropriate password; you will need to remember this for later. (The user should already exist in the database, but you may need to delete it if you try to grant access from the top level security branch; it should then be recreated automatically in the database.)
# Delete the riskmanuser id from the database (NOT THE SERVER) you just restored (and the riskmanuser schema in 2005).
# Assign login access to the risk database(s) (at the server level).
# Assign database user membership to the database roles db_datareader and db_datawriter (at the server level).
# Assign the database role ‘RiskManRole’ to the riskmanuser id (at the server level).
If riskmanuser has not been created successfully, the application server will not connect at all when you attempt to connect later. OPTION 2 should always recover the access.
''In the event that you can connect from the application server, and perhaps even login from a user account via the client, but not create risks, etc., the problem will most likely be the stored procedure access rights which are held in the RiskManRole. Run “fix_executerights_on_loginRole.sql” to fix this problem.''
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
43b49e7c91b05b2f32e95ddeeb960d99c34c1d93
Set up the initial user IDs
0
388
583
2019-09-10T13:37:15Z
Bishopj
1
Created page with "=Introduction= The final step to setting up a new risk database is to create the initial accounts in the database for system administrators and optionally for selected users...."
wikitext
text/x-wiki
=Introduction=
The final step to setting up a new risk database is to create the initial accounts in the database for system administrators and optionally for selected users. Typically a single account is created in the database and this user then creates accounts using the application for all other system users.
If you are installing the single user version of BPC RiskManager in its default configuration, you do not need to do anything in this step and you can therefore skip these instructions.
For BPC RiskManager Enterprise users, there are three ways to do this. We will cover two of them. It is stressed that this information relates ONLY to the initial user(s) or initial bulk user creation. There are far easier ways to create users, both in bulk and individually, once the initial administration account is installed. For security reasons, BPC RiskManager is shipped without accounts.
=OPTION 1 (COMPLEX) – Direct Database Table Update=
Versions 6.1.8 and earlier should follow this procedure to create the initial administration account; you may also use this procedure if you are going to create accounts in bulk. There is a MUCH simpler procedure available for users of V6.1.9 and above (so those users should NOT use this method).
To create an account in the database add a record to tables:
RESOURCES and USERS.
Instructions:
For each database system user replace all text items in angle brackets with user information:
* <UserDescription> User description field. EG: Jack Smith
* <EmailAddress> Email field; must be unique
* <DomainName\Username> User's Windows account in format DomainName\NetworkUsername. Must be unique. EG: domain1\jsmith.
* <FirstName> Optional field
* <LastName> Optional field
INSERT INTO RESOURCES(DESCRIPT, EMAIL, USER_NAME, FIRST_NAME, LAST_NAME)
VALUES('<UserDescription>', '<EmailAddress>', '<DomainName\UserName>', '<FirstName>', '<LastName>');
INSERT INTO USERS(USER_NAME, ACCESS_ALL_AREAS, ASSIGNED_ROLE, AUDITOR_FLAG, ACCESS_ALL_RISK_TYPES)
VALUES('<DomainName\Username>', 1, 'ADMINISTRATOR', 1, 1);
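The substitution described above is purely textual, so when creating several accounts it can be sketched as a small generator that fills the angle-bracket placeholders into the two statement templates. The sample user below is invented:

```python
# Build the two INSERT statements shown above for one user by substituting
# the placeholder values. The sample user in __main__ is invented.

def build_user_inserts(descript, email, username, first="", last=""):
    """Return the (RESOURCES, USERS) INSERT statements for one user."""
    resources = (
        "INSERT INTO RESOURCES(DESCRIPT, EMAIL, USER_NAME, FIRST_NAME, LAST_NAME) "
        f"VALUES('{descript}', '{email}', '{username}', '{first}', '{last}');"
    )
    users = (
        "INSERT INTO USERS(USER_NAME, ACCESS_ALL_AREAS, ASSIGNED_ROLE, "
        "AUDITOR_FLAG, ACCESS_ALL_RISK_TYPES) "
        f"VALUES('{username}', 1, 'ADMINISTRATOR', 1, 1);"
    )
    return resources, users

if __name__ == "__main__":
    for stmt in build_user_inserts(
        "Jack Smith", "jsmith@example.com", "domain1\\jsmith", "Jack", "Smith"
    ):
        print(stmt)
```

Note this simple sketch does not escape embedded quotes; if any field contains an apostrophe, double it before substituting.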
=OPTION 2 (SIMPLE) – RootAdmin creation account=
If you are following these instructions through step by step... do nothing at this time! This is the recommended approach.
We do it later during the application server install using a special administration account called "rootadmin". The rootadmin approach will involve creating a dedicated account directly from the application server control interface that can access the system using a normal client and then create other administrators normally within the application. Those new administrators can then revoke the rootadmin's access rights if desired.
If you are using any security model that relies on the operating system security (e.g. LDAP or NT Groups) you will need to dedicate an operating-system-level user id to be the rootadmin user. The account could be Administrator under Windows, as this should never be used to access the application anyway, or some throw-away low level user account.
Security modes "Application Managed Access" (where the application requires its own login and handles security) or "Always Use Selected Group" do not require an operating systems account to be assigned to the rootadmin role.
Within the operating system, the rootadmin account does not need any special rights other than access to the BPC RiskManager client once (to create the real application administrators). Once other application administrators have been created, you can revoke the access rights of the rootadmin from within the application. Of course, you can also use an account that is intended to be the real RiskManager administrator account as the rootadmin; there is no security implication in doing this, as the rootadmin is no different from any other riskmanager administrator, with the exception that it can be created directly from the application server.
If you aren’t sure – leave it for the moment as we can always fix it later.
If you got to this page other than by following the installation process and are trying to find out how to create user accounts (or you want to see the alternative method of creating initial user id's) follow this link:
* [[BPC RiskManager Client - Add new users]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
96f6eff0ee9c0bd645455b28bb36a8d8b5fb9fe5
BPC RiskManager Client - Add new users
0
389
584
2019-09-10T13:43:29Z
Bishopj
1
Created page with "=Introduction= As BPC RiskManager supports multiple access and security models there are multiple ways that new users can be added to a database using the Risk Manager clien..."
wikitext
text/x-wiki
=Introduction=
As BPC RiskManager supports multiple access and security models there are multiple ways that new users can be added to a database using the Risk Manager client. The method you will use depends on your security model. Broadly they fall into three groups:
* Automatic (relevant only in single user installs where access rights are assumed, or trusted access models where the operating system login is trusted)
* Operating system secured (relevant where LDAP or NT Groups security is enabled)
* Locally secured (relevant where "locally secured in application" access model is used and operating system logins are not trusted - this is the most common scenario)
We will look at each of these later in this article.
In addition there are four types of user groups maintained by the system:
* Resources
* RiskManager users
* SurveyManager users
* Survey Respondents
Each of these four groups is independently managed inside the system, but the BPC RiskManager client substantially (but not completely) hides this from you. The RiskManager user group has further subdivisions which we will discuss shortly.
Firstly we will consider the four types of users, and then look at how these users are created.
=Types of RiskManager Users=
==Resources==
The resource is the minimum / lowest type of user known to the risk system. Every other type of risk user is at least a "resource".
Strictly speaking a resource is NOT a user of the riskmanager client directly. This user cannot access the client, but can receive emails and alerts from riskmanager, be assigned tasks and responsibilities, and be tracked in other ways.
A resource has a unique id (assigned automatically by risk manager, but kept invisible from all users), a first and last name, and an email address, and is therefore known to the application.
==RiskManager Users==
A RiskManager User must first be a resource before they can be promoted to a user. Users have access to the RiskManager client in some degree. Users may be:
* Restricted (able to access only specific risk domains/organisational units), or
* Unrestricted (able to access all areas)
In addition a user has assigned roles which are one of:
* Reader/Inquirer (can only read the information to which they have access)
* Contributor/Risk Owner (can add or change information about risks, etc. to which they have access)
* Risk Coordinator (can add information about risks)
* Risk Manager (can manage risks for one or more risk domains/organisational areas and create new risks)
* Administrator (has the power of a risk manager plus can create users and change application configuration)
* Auditor (can review risks and create audit assessments and has access to specialist audit functionality).
==SurveyManager Users & Respondents==
SurveyManager users are able to access the SurveyManager capabilities, and therefore create surveys and review survey results. Within the riskmanager client these are usually Risk Managers, but if the specialist SurveyManager client is used there is substantially greater survey management capability available.
SurveyManager users have various roles which are defined at both the database and individual organisational levels:
* Super (a super user can do anything the system supports)
* SurveyManager (grants the power to create and issue surveys at the organisational level, which depending on the organisational structure established may cover a region of organisational units)
* Dataentry User (can enter data on behalf of other users - such as where paper or phone based data collection is used)
* User (a survey respondent - can only respond to surveys to which they are invited)
==Synching of SurveyManager and RiskManager Users==
A simplified surveymanager interface is built into the RiskManager client. The user creation and access functions automatically create survey users and roles as required. With this interface a limited range of surveymanager capabilities are provided, but the surveymanager roles and personnel are synched with the riskmanager roles as follows:
<table border=1 >
<tr>
<th> Survey Manager Role</th><th> Risk Manager Role</th>
</tr>
<tr>
<td>Super</td><td>Administrator</td>
</tr>
<tr>
<td>SurveyManager</td><td>Risk Manager</td>
</tr>
<tr>
<td>Data Entry User</td><td>Not used</td>
</tr>
<tr>
<td>User / Respondent</td><td>Resource</td>
</tr>
</table>
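Should you need to reason about this synchronisation in code, the table above can be captured as a simple lookup; the role strings follow the table, and this sketch is illustrative only:

```python
# The SurveyManager -> RiskManager role synchronisation from the table above,
# expressed as a lookup table. None marks the "Not used" mapping.
SURVEY_TO_RISK_ROLE = {
    "Super": "Administrator",
    "SurveyManager": "Risk Manager",
    "Data Entry User": None,           # not used on the RiskManager side
    "User / Respondent": "Resource",
}

def risk_role_for(survey_role):
    """Return the synched RiskManager role, or None where no role is used."""
    return SURVEY_TO_RISK_ROLE.get(survey_role)
```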
As a consequence of the way users and roles are managed within the application suite, you can use the advanced SurveyManager clients (web and windows) interchangeably with the RiskManager client to manage the surveys. Most needs are covered by the RiskManager client, but where advanced survey layouts are desired, one of the more advanced survey manager clients may be appropriate.
=Creating Resources and Users in BPC RiskManager=
Resources and users can be created individually or in bulk within the system. A user must also/first be a resource, so a user is created as a resource first, and then granted access rights as a user.
Resources have no access rights within BPC RiskManager, but can receive emails, reminders, surveys, workflow tasks, etc.
Users have access rights using the BPC RiskManager client. These rights range from read only access to specific risks up to full administration of the entire system.
Depending on your configuration settings, Resources can be created on the fly. Administrators can also create and update both Resources and Users using the security administration windows, either individually or in bulk:
* [[BPC RiskManager - Creating and Updating Resources]]
* [[BPC RiskManager - Creating and Updating Users]]
* [[BPC RiskManager - Creating and Resetting User Passwords]]
=Updating your user preferences in BPC RiskManager=
Various user details can also be set or updated on a per user basis by the individual user. These include your password, spell checker, screen colour coding, screen resolution handling, etc.
* [[BPC RiskManager - Updating Your Personal Preferences]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
9ed6747c47dd8b85f9af035c690a810c9a2feeef
BPC RiskManager - Creating and Updating Resources
0
390
585
2019-09-10T13:44:45Z
Bishopj
1
Created page with "=Introduction= Resources and users can be created individually or in bulk within the system. A user must also be a resource, so resources are created first, and then grante..."
wikitext
text/x-wiki
=Introduction=
Resources and users can be created individually or in bulk within the system. A user must also be a resource, so resources are created first, and then granted access rights as a user.
Resources have no access rights within BPC RiskManager, but can receive emails, reminders, surveys, workflow tasks, etc.
There are broadly 4 ways to create resources and 2 ways to create users. All of these approaches fall into:
* Adhoc creation
* Bulk creation
=Ad hoc Resource Creation=
==Introduction==
There are 3 ways to create resources ad hoc:
<ol>
<li> '''By attempting to access the database with the risk manager client.''' When a user attempts to access a riskmanager database with a riskmanager client, unless they are already a user with access rights, access will be denied. However, riskmanager will record their network user name and automatically create a resource record for them. Thus an administrator can easily convert the new resource into a user by accessing the resource list and adding their username, email and password (if locally managed access security is turned on). The next time that user attempts to access they will be allowed in. There are no security implications in being a resource in riskmanager, so this method entails no risk to the application.
<br>
<br>
<li> '''By being created manually as a resource on the resources screen'''.
<br>
<br>
<li> '''By being granted responsibility dynamically''' for a risk, strategy/control, task, insurance policy, etc. on the appropriate screen.
</ol>
The first two of these involve using the resources screen at some point to add the resource details. The third approach uses a shortform screen that opens from the relevant risk data screen, such as the "Maintain Risks" window. We shall therefore first consider this approach.
==Creating resources dynamically from a Maintain Risks window==
* While editing a risk, strategy/control, task, insurance policy or other record to which responsibility can be assigned, you will find a button beside the responsibility fields as shown (circled in red):
[[Image:RMC_ResourceCreate1.png]]
* Selecting this button will launch the following window:
[[Image:RMC_ResourceCreate2.png]]
* Enter the data in the fields and select OK to save it and allocate the responsibility to that person. Note, while a network user name is not required at this point, you are strongly advised to supply the correct network username / login id now if you expect to later grant this new resource user access to the system. We strongly advise AGAINST login ids with spaces in them. Although the system will generally handle them correctly, some browsers will have problems with survey screens if you later send surveys/compliance forms to these users. The system generally assumes that login ids do not have spaces.
==Creating resources manually from the Maintain a Resource window==
You can create resources individually (as well as revising their details) using the resource maintenance window.
* Open your BPC Risk Manager client and select the Access tab (you must be an RM Administrator to do this)
[[Image:RMC_UserCreate1.png]]
* Select the security button
* A Set up Security window will open:
* Select the "Step 1: Maintaining a Resource" tab and "Option 2: Create & Maintain a Resource"
[[Image:RMC_ResourceCreate3.png]]
* To create a new resource, select "New"
* Enter the details in the edit boxes in the top of the screen. Note that the "Display Name" will be automatically assembled from the First and Last Names, but you can override it.
* If you know the network user name (login id) of the user, and you intend this resource to ultimately be able to login to RiskManager, you should override the automatically created network user name with the real user id. This will save confusion later and prevent duplication of user identities if they subsequently attempt to access the system and riskmanager automatically creates a resource for them.
* '''NOTE:''' If the user has ALREADY attempted to access the system, you will probably already have a resource entry created for them. In this case, just locate them in the grid, select the corresponding row and enter the appropriate details. Their network user name will already be correct.
* Save the changes by selecting the green tick below the data entry fields, or the save button at the top of the screen.
* Close the window by selecting the close button on the window.
=Bulk Resource Creation=
==Data Format==
Resources can be created en masse by importing from a comma separated file or merely copying and pasting from a spreadsheet.
1. If a comma separated file is used, the format is:
===CSV Format - Where only resources are being created===
* A text file
* With NO heading row
* Consecutive lines separated by cr/lf,
* Each line containing:
loginid, first_name, last_name, title, email
===CSV Format - Where resources and users are being created in one step===
* A text file
* With NO heading row
* Consecutive lines separated by cr/lf,
* Each line containing:
loginid, first_name, last_name, title, email, password
2. If the list is to be copied from a spreadsheet, create a spreadsheet with the following column layout:
===XL Format - Where only resources are being created===
* Consecutive rows,
* Each row containing columns with:
loginid, first_name, last_name, title, email
===XL Format - Where resources and users are being created in one step===
* Consecutive rows,
* Each row containing columns with:
loginid, first_name, last_name, title, email, password
As you will be copying the cell block excluding any headings, whether the spreadsheet has headings or not is up to you - just do not include any headings in the block you select and copy.
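As a sketch, the resource-only CSV layout above (no heading row, lines separated by cr/lf) can also be produced programmatically; the file name and sample people below are invented:

```python
import csv

# Write a resource import file in the layout described above: no heading row,
# one resource per line, columns loginid, first_name, last_name, title, email.
# The csv module's default line terminator is \r\n (cr/lf), matching the spec.
# Sample people and the file name are invented.
ROWS = [
    ("domain1\\jsmith", "Jack", "Smith", "Mr", "jsmith@example.com"),
    ("domain1\\mjones", "Mary", "Jones", "Ms", "mjones@example.com"),
]

with open("resources.csv", "w", newline="") as fh:
    csv.writer(fh).writerows(ROWS)
```

To include users in the same step, append a sixth password column to each row, matching the resources-and-users layout above.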
==Steps for Creating Resources in Bulk in BPC RiskManager==
* Open your BPC Risk Manager client and select the Access tab (you must be an RM Administrator to do this)
[[Image:RMC_UserCreate1.png]]
* Select the security button
* A Set up Security window will open:
* Select the "Step 1: Maintaining a Resource" tab and "Option 1: Bulk Import of resources"
[[Image:RMC_UserCreate2.png]]
* Now open an XL Spreadsheet and populate it with personnel data as shown, using the format below:
loginid, first_name, last_name, title, email
[[Image:RMC_UserCreate3.png]]
* Mark and select the area and copy the selected area to the clipboard (ctrl-c)
* Returning to BPC RiskManager, right click in the grid and choose "Paste". The grid will populate with the data you copied as shown:
[[Image:RMC_UserCreate4.png]]
* Verify that the "Create User access accounts" checkbox is UNCHECKED; if it were checked, login ids would be created and the import data structure would have to be different.
* You could also have saved the XL sheet to disk as a csv file, and read it in using the "Load File" button.
* Now select the "Import Resources" button and the resources will be automatically imported
==Steps for Creating Resources and Users at the same time in Bulk in BPC RiskManager==
Using the bulk resource importer, it is possible to create both resources and their matching user records at the same time.
* Open your BPC Risk Manager client and select the Access tab (you must be an RM Administrator to do this)
[[Image:RMC_UserCreate1.png]]
* Select the security button
* A Set up Security window will open:
* Select the "Step 1: Maintaining a Resource" tab and "Option 1: Bulk Import of resources"
[[Image:RMC_UserCreate2.png]]
* Now open an XL Spreadsheet and populate it with personnel data as shown, using the format below:
loginid, first_name, last_name, title, email, password
[[Image:RMC_UserCreate5.png]]
'''Note:''' You can put any password you like in the password column. Users can change their passwords after they log in. Either issue people with pseudo-random passwords, or create all accounts with the same password and require your users to change them on access, but we recommend sensible random values, as some users will forget to change the password or may not access the system immediately.
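A minimal sketch of generating such pseudo-random initial passwords for the password column; the length and alphabet are arbitrary choices, not a product requirement:

```python
import secrets
import string

# Generate a pseudo-random initial password per user, as recommended above.
# Length and character set are arbitrary illustrative choices.
ALPHABET = string.ascii_letters + string.digits

def initial_password(length=10):
    """Return a random password drawn from letters and digits."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```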
* Mark and select the area and copy the selected area to the clipboard (ctrl-c)
* Returning to BPC RiskManager, verify that the "Create User access accounts" checkbox is CHECKED (as shown). When it is ticked, you will notice an extra column appear in the grid. This column is for the passwords in your spreadsheet.
* Now right click in the grid and choose "Paste". The grid will populate with the data you copied as shown:
[[Image:RMC_UserCreate6.png]]
* You could also have saved the XL sheet to disk as a csv file, and read it in using "Load File" button.
* Now select the "Import Resources" button and the resources will be automatically imported. For each row successfully imported the word "Done" will appear in the first column. For those that failed, an error message will be inserted in the first column.
[[Image:RMC_UserCreate7.png]]
* With the resources properly imported and corresponding user records created we should next assign appropriate roles.
* On the top tab bar select "Step 2 - Define Resource Access Rights". The user rights management screen will display.
* Imported users are created with no rights as shown.
[[Image:RMC_UserCreate8.png]]
* For each new user select their corresponding row in the grid.
* In the lower half of the screen (below the grid):
<ol>
<li> In the "Assigned Role" drop box field, select the user's access role from the list.
<li> If the user is an auditor, and should be able to create the audit assessments, etc., select "Yes" from the "Is the user an Auditor?" field.
<li> Determine whether the user has restricted or unrestricted access rights, for risk types and business areas respectively. If the user is unrestricted you are finished with this part of the screen. If the user is restricted, you must grant them access to the particular risk types and business areas to which they should have access.
<li> Select the "Save" button at the top of the screen, if it is highlighted, to save the changes made so far.
<br>
<br>
For "Risk Type" restricted users:
<ul>
<li> In the "Access to risk types" column left click on "Add".
<li> Left mouse click the empty cell that appears in the left hand grid under "Risk Type", and a drop down box selector will appear.
<li> Select a risk type from the list in the drop down box.
<li> Select "Save" from the column control panel beside the Risk Type grid (if it is highlighted - the first row will often save automatically)
<li> Repeat for each Risk Type to which you want them to have access.
</ul>
<br>
<br>
[[Image:RMC_UserCreate9.png]]
<br>
<br>
For "Access to Business Area" restricted users:
<ul>
<li> In the "Access to Business Area" column left click on "Add".
<li> Left mouse click the empty "Business Group" cell that appears in the right hand grid under "Access to Business Areas", and a drop down box selector will appear.
<li> Select a Business Group from the list in the drop down box.
<li> Left mouse click the empty "Business Unit" cell that appears in the right hand grid under "Access to Business Areas", and a drop down box selector will appear.
<li> Select a Business Unit from the list in the drop down box.
<li> Select "Save" from the column control panel beside the "Access to Business Areas" grid (if it is highlighted - the first row will often save automatically)
<li> Repeat for each business group / business unit combination to which you want them to have access.
</ul>
<br>
<br>
[[Image:RMC_UserCreate10.png]]
<br>
<br>
<li> Finally select "Save" from the top of the screen (if it is highlighted) to save the changes made to this user.
<br>
<br>
[[Image:RMC_UserCreate11.png]]
<br>
<br>
<li> Repeat this process for each user for which you wish to make changes.
<li> When you are finished, simply close the screen using the normal windows close box in the top right hand corner of the window.
</ol>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
eaf09b8dded7ec88073c141e17f768c25eab2b15
BPC RiskManager - Creating and Updating Users
0
391
586
2019-09-10T13:47:26Z
Bishopj
1
Created page with "=Introduction= Both creation and updating of user access details is handled through the "Maintain Users" screen. With this screen you can: * Allocate access rights and ac..."
wikitext
text/x-wiki
=Introduction=
Both creation and updating of user access details is handled through the "Maintain Users" screen. With this screen you can:
* Allocate access rights and access areas
* Reset or create passwords
* Set or revoke the Auditor status
This document covers the process for handling the "Maintain Users" screen.
==Updating User/Resource Details==
If you want to change a person's name, title or email address, you need to use the "Maintain Resources" screen. Help with this screen is at [[BPC RiskManager - Creating and Updating Resources]].
==Updating User Access Rights==
If you want to change their access rights, then you are in the right place.
==Adding a resource as a User==
Before someone can be a user, they must first be a resource. If you are trying to add a new user, but have not yet made them a resource, you should first refer to [[BPC RiskManager - Creating and Updating Resources]]
If you are trying to add an existing resource as a user, you are in the right place.
=Adding a New or Updating an Existing User=
==Introduction==
There are two ways a previously created resource may present in the "Maintain Users" screen:
# As a result of a bulk import of resources where the "add new user" flag has been set
# As a result of any other method of creating resources.
The essential difference is that where users have been added in bulk they will have been created as users with no rights (so you should use the section titled "Updating User Access Rights"), whereas where the resource has been created by any other means the resource will have no presence as a user and will have to be added (so you should use the section entitled "Turning a Resource into a User").
==Turning a Resource into a User==
Follow this procedure if:
* You wish to grant an existing resource access rights to BPC RiskManager.
* You added resources in bulk, but DID NOT check the "add as user" flag.
Steps for adding a new user:
* Open your BPC Risk Manager client and select the Access tab (you must be an RM Administrator to do this)
[[Image:RMC_UserCreate1.png]]
* Select the security button
* A Set up Security window will open.
* On the top tab bar select "Step 2 - Define Resource Access Rights". The user rights management screen will display.
[[Image:RMC_UserCreate12.png]]
* For each resource you wish to add:
** Left click the new button. (A new line will appear in the user grid, and the network user name drop list will become active).
** Left click on the "v" beside the Network User Name drop list and the list of resources not yet granted user status will appear.
** Locate the target resource from the list and select it by left clicking on the corresponding line in the list. The list will close and the resource will be inserted into the Network User Name field.
** Select "Save" from the top of the screen.
* Now proceed to "Updating User Access Rights" for each user you added and assign them their appropriate rights.
==Updating User Access Rights==
Follow this procedure if:
* You want to change (add or remove) an existing user's access rights,
* If your new user was created through a bulk import where the "add new user" flag was checked.
Steps for updating user access rights:
* Open your BPC Risk Manager client and select the Access tab (you must be an RM Administrator to do this)
[[Image:RMC_UserCreate1.png]]
* Select the security button
* A Set up Security window will open.
* On the top tab bar select "Step 2 - Define Resource Access Rights". The user rights management screen will display.
* New users automatically created during a resource import are created with no rights, as shown. Existing users may already have access rights that you wish to change.
[[Image:RMC_UserCreate8.png]]
* For each user you wish to modify select their corresponding row in the grid.
* In the lower half of the screen (below the grid):
<ol>
<li> In the "Assigned Role" drop box field, select the user's access role from the list.
<li> If the user is an auditor, and should be able to create the audit assessments, etc., select "Yes" from the "Is the user an Auditor?" field.
<li> Determine whether the user has restricted or unrestricted access rights, for risk types and business areas respectively. If the user is unrestricted, simply choose the unrestricted option from the drop list, click the save button, and you are finished with this part of the screen. If the user is restricted, you must grant them access to the particular risk types and business areas to which they should have access.
<li> Select the "Save" button at the top of the screen, if it is highlighted, to save the changes made so far.
<br>
<br>
For "Risk Type" restricted users:
<ul>
<li> In the "Access to risk types" column left click on "Add".
<li> Left mouse click the empty cell that appears in the left hand grid under "Risk Type", and a drop down box selector will appear.
<li> Select a risk type from the list in the drop down box.
<li> Select "Save" from the column control panel beside the Risk Type grid (if it is highlighted - the first row will often save automatically)
<li> Repeat for each Risk Type to which you want them to have access.
</ul>
<br>
<br>
[[Image:RMC_UserCreate9.png]]
<br>
<br>
For "Access to Business Area" restricted users:
<ul>
<li> In the "Access to Business Area" column left click on "Add".
<li> Left mouse click the empty "Business Group" cell that appears in the right hand grid under "Access to Business Areas", and a drop down box selector will appear.
<li> Select a Business Group from the list in the drop down box.
<li> Left mouse click the empty "Business Unit" cell that appears in the right hand grid under "Access to Business Areas", and a drop down box selector will appear.
<li> Select a Business Unit from the list in the drop down box.
<li> Select "Save" from the column control panel beside the "Access to Business Areas" grid (if it is highlighted - the first row will often save automatically)
<li> Repeat for each business group / business unit combination to which you want them to have access.
</ul>
<br>
<br>
[[Image:RMC_UserCreate10.png]]
<br>
<br>
<li> Finally select "Save" from the top of the screen (if it is highlighted) to save the changes made to this user.
<br>
<br>
[[Image:RMC_UserCreate11.png]]
<br>
<br>
<li> Repeat this process for each user for whom you wish to make changes.
<li> If you wish to revoke a user's access to a risk type or business area, simply select the row in the appropriate grid and choose delete from the panel beside the grid (NOT THE TOP OF THE SCREEN - as this will delete the entire user!)
<li> When you are finished, simply close the screen using the normal windows close box in the top right hand corner of the window.
</ol>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
1fd1f4ce1408f053e60de5f9af161ff85a0ea572
BPC RiskManager - Creating and Resetting User Passwords
0
392
587
2019-09-10T13:49:33Z
Bishopj
1
Created page with "User passwords are only relevant where the locally (application) managed security option in untrusted login is turned on at the application server. Users can set their own p..."
wikitext
text/x-wiki
User passwords are only relevant where the locally (application) managed security option in untrusted login is turned on at the application server.
Users can set their own passwords in the user settings screen, and you can load passwords when you load resources in bulk with the "add user" flag turned on. However, when you create a new user individually from a resource, or when a user has forgotten their password, an administrator can override the password setting. You cannot see a user's password, but an administrator can reset it.
Blank passwords are not permitted.
Steps for changing a user's password:
* Open your BPC Risk Manager client and select the Access tab (you must be an RM Administrator to do this)
[[Image:RMC_UserCreate1.png]]
* Select the security button
* A Set up Security window will open.
* On the top tab bar select "Step 2 - Define Resource Access Rights". The user rights management screen will display.
[[Image:RMC_UserCreate13.png]]
* Locate the target user in the grid and select "Reset" in the password column of the grid.
* A password entry window will open.
[[Image:RMC_UserCreate14.png]]
* Enter new password and select "OK" to save it.
* Close the Setup User's screen by left clicking on the close box when you are done.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
ca9992db38fb93fad917ea463ecb3ce034835c27
BPC RiskManager - Updating Your Personal Preferences
0
393
588
2019-09-10T13:51:29Z
Bishopj
1
Created page with "=Introduction= A user can change a number of general settings individually of the general defaults. Most user changes are remembered automatically on your computer when you..."
wikitext
text/x-wiki
=Introduction=
A user can change a number of general settings independently of the general defaults. Most user changes are remembered automatically on your computer when you make them (such as the last search or filter options).
Some settings must be explicitly saved. These are controlled in the "Set-Up Profiles/My Settings" screen.
The things you can set here are:
# Your password for accessing the application when locally managed security is operating
# Edit and read only grid colouring
# Spell checking
# Use of Tree Navigation for risks
# The form scaling method to use when your computer is not set to 96dpi / small fonts.
=Accessing the User Preferences Screen=
All user preference settings require the following initial steps:
* Open your BPC Risk Manager client and select the Administer tab
[[Image:RMC_UserSettings1.png]]
* Select the "Set-Up Profiles" button
* The "Set-Up Profiles" window will open.
* On the side bar select "My Settings", and the panel will change to display the "My Settings" button list.
[[Image:RMC_UserSettings2.png]]
* Proceed with your desired changes.
=Change your password=
User passwords are only relevant where the locally (application) managed security option in untrusted login is turned on at the application server.
Blank passwords are not permitted.
To change your own password:
* Select the "Change Password" button from the left hand button panel and a "Change Password" panel will appear in the right hand panel.
[[Image:RMC_UserSettings_ChngPWD1.png]]
* Enter your current password and the new password in both the "New Password" and "Confirm" fields.
* Select "Save" to save the change
* Close the Setup User's screen by left clicking on the close box when you are done.
=Change General User Preferences=
* Select the "User Preferences" button from the left hand button panel and a list of available user preferences will appear in the right hand panel.
[[Image:RMC_UserSettings_UserPref1.png]]
==General Settings==
* To Change Grid Colours:
** Select your preferred edit and read only grid colouring using the drop lists.
** Select the "Save" button to save the new colour choices.
* To clear the search grids' column properties (the column choices that you made earlier and that have been automatically saved for you):
** Press the "Reset Columns" button. The changes are saved instantly.
* To enable or disable MS Office based spell checking:
** Untick (or tick) the "Enable MS Office Spell Check" check box.
** Choose your default language from the drop list.
** Changes are recorded instantly
The application will report an error if MS Office is not available on the system. The application will always attempt to use your default language as defined in MS Office first, regardless of the language setting you specify here.
* If you wish to hide the Tree Navigation buttons in the application:
** Uncheck the "Use Tree Navigation" check box.
** Changes are recorded instantly.
==Form Sizing Settings==
The application is written in 96dpi and small fonts, so the best performance and appearance will be at those desktop settings. However you can accommodate different screen resolutions by choosing an appropriate option from the "Form Control" drop box.
* The default setting is "Resolution Independent", which will attempt to rescale the windows and reposition labels and controls appropriately to accommodate your screen configuration. This should suit most scenarios.
* If your desktop settings are 96dpi with small fonts, you can safely choose "None" and the screen refreshes will be slightly faster.
* Fixed scaling is slightly faster than resolution independent scaling, but will not do as good a job of resizing the windows and controls as the resolution independent setting in most situations.
* Elastic Scaling will cause a form to grow or shrink as you resize the window, making the contents smoothly larger or smaller, while displaying the entire content (ie. scrolling of windows is turned off).
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
81dce9fe182e8bab82b7d955845332abdb21096b
Install Socket Server as a Service And HTTPSrvr as an ISAPI library
0
394
589
2019-09-10T14:45:41Z
Bishopj
1
Created page with "=Introduction= The RiskManager Dataserver (the application server) is a Windows DCOM server. It is registered and operates on the application server computer but requires..."
wikitext
text/x-wiki
=Introduction=
The RiskManager Dataserver (the application server) is a Windows DCOM server. It is registered and operates on the application server computer but requires a local listener component to listen for RiskManager Dataserver requests, launch the Dataserver object if necessary and relay the data requests and response between a Client Programme and the Dataserver. The RiskManager client application connects to the Risk Manager Dataserver using one or both of two methods:
* The Borland Socket Server that connects the client to the Application Server Computer via port 211, and/or
* The ISAPI library called HTTPSrvr that connects the client to the Application Computer Server via either HTTP (port 80) or HTTPS (port 443).
Because the HTTPSrvr runs as part of the IIS server, it adds a level of architectural complexity beyond the raw socket server solution. Where the HTTPSrvr is used, we strongly recommend that you also install and run the SocketServer: it provides the simplest possible method of launching the RM Dataserver, and therefore a reliable method of isolating network and DCOM issues, because a system administrator can use a RiskManager client on the local area network to connect to the RiskManager Dataserver, bypassing the IIS dependent components.
Both the Socket Server and the HTTPSrvr must be run on the network server running the Risk Manager DataServer application server. It is theoretically possible to run them remotely from the application server, but we have not tested such a configuration.
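When isolating connection problems, it can help to first confirm that the listener ports are reachable from a client machine at all. The sketch below is ours, not part of the product; "appserver" is a placeholder host name:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports from the text: 211 for the Borland Socket Server,
# 80 (HTTP) or 443 (HTTPS) for the HTTPSrvr ISAPI library via IIS.
# e.g. port_open("appserver", 211)
```

If port 211 answers but 80/443 does not, the fault lies in the IIS layer rather than in the Dataserver itself.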
* [[Installing the SocketServer]]
* [[Installing the HttpSrvr.dll (optional)]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
02c0b4565cc9cc7dccbb52f1cc66543f4ebc4c14
Installing the SocketServer
0
395
590
2019-09-10T14:46:37Z
Bishopj
1
Created page with "=Installing the SocketServer= The socketserver is installed by the installation program into the directory: C:\Program Files\Common Files\Borland Shared\Socket Server..."
wikitext
text/x-wiki
=Installing the SocketServer=
The socketserver is installed by the installation program into the directory:
C:\Program Files\Common Files\Borland Shared\Socket Server
Note: If you are running under WOW64 on a 64 bit Windows computer, navigate to:
C:\Program Files\Common Files\Borland Shared\Socket Server
then right click on the scktsrvr.exe icon and select Properties. From the Properties window set the execution compatibility mode to “Windows 2003 SP1”.
To install as a Windows service either:
# Open a DOS session and enter the following command:
“C:\Program Files\Common Files\Borland Shared\Socket Server\scktsrvr.exe" -install
Or
<ol>
<li> Run batch file <install dir>\SystemFiles\AddSocketServerService.bat to perform the same action.
<li> The Socket Server will not be activated immediately after installation as a service. Please start the service manually or simply restart the server:
<ol>
<li> Right click on the “My Computer” on your desktop and choose “Manage” from the menu.
<li> Expand the “Services and Applications” node in the tree and choose “Services”.
<li> From the list of services in the right hand panel, locate the “Borland Socket Server” entry and right click on the entry under the “Status” column.
<li> Choose “Start” from the context menu.
<br>
<br>
[[Image:RMSS_Service1.png]]
<br>
<br>
You do not need to perform any other steps to enable the socket server but if you wish to read further information please refer to the following file:
<install dir>\ConfigurationSupportFiles\How to configure Borland Socket Server.txt
</ol>
<br>
<br>
<li> When the socket server is running an icon that looks like a power socket will appear in the System tray under Windows XP, Windows 2000, and Windows 2003. On Windows Vista and Windows 2008 the icon may not appear when the socket server is running as a service (because of the new security rules), but will appear if started manually.
<br>
<br>
[[Image:RMSS_Service2.png]]
<br>
<br>
</ol>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
edbb4a14c4f5b5ebed3497b0de74cd91e9b76f20
Installing the HttpSrvr.dll (optional)
0
396
591
2019-09-10T14:47:39Z
Bishopj
1
Created page with "The HttpSrvr ISAPI library is an alternative to the socketserver as a communications handler. It provides HTTP and HTTPS support for client to application server communicatio..."
wikitext
text/x-wiki
The HttpSrvr ISAPI library is an alternative to the socketserver as a communications handler. It provides HTTP and HTTPS support for client to application server communications. It is necessarily slower than the socketserver due to the overheads imposed by the HTTP / HTTPS protocols. If used, it must be installed as an IIS application.
The HttpSrvr dll is installed by the installation program into the directory:
<install dir>\scripts\
Depending on your version, the HttpSrvr dll may also be installed into bpcsxsrvr instead. This is the later, preferred directory, although these notes still illustrate "scripts". (NOTE: An error has been reported in the V6256 installer or application set suggesting that the application will attempt to work with scripts although the installer sets up bpcsxsrvr. Until this error is traced and fixed, these instructions remain correct exactly as written. It might be wise to remove the automatically created HTTPSrvr.dll copied into the bpcsxsrvr directory, or rename the virtual directory "scripts". We will advise when this problem is rectified.)
You will need to move it to the appropriate directory on your IIS server. The client is set, by default, to assume the scripts directory, so we recommend you use a virtual directory called scripts to house the HttpSrvr dll (which will make it simpler for clients to connect).
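For reference, the endpoint a client ends up using is simply the virtual directory plus the library name. This tiny helper is illustrative only; the exact URL layout and file name casing are assumptions based on the defaults described here, and "appserver" is a placeholder host name:

```python
def httpsrvr_url(host: str, https: bool = False, vdir: str = "scripts") -> str:
    """Build the URL a RiskManager client would use to reach HttpSrvr.dll.

    Port 80 is used for HTTP and 443 for HTTPS, per the defaults above.
    """
    scheme, port = ("https", 443) if https else ("http", 80)
    return f"{scheme}://{host}:{port}/{vdir}/HttpSrvr.dll"

# e.g. httpsrvr_url("appserver") -> "http://appserver:80/scripts/HttpSrvr.dll"
```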
<ol>
<li> (Skip if using IIS5) If your IIS server does not have a scripts directory (IIS6 and above), you will need to create one. Do this by creating an appropriate directory in a folder of your choosing and mapping that directory to IIS using “scripts” as the virtual directory name. Our advice is that you do NOT create this directory as a subdirectory of the wwwroot directory, as it will require special access rights. We also advise that you do NOT simply map the installation folder to the scripts directory, as future patches may directly update the installation directory, effectively instantly setting the new patched library to “live” mode.
<li> (Skip if using IIS5) To map the newly created folder to your web server right click on the folder and choose properties from the context menu. On the properties window select the “web sharing” tab. In the “web sharing” tab select “Share this folder”.
<br>
<br>
[[Image:RMSS_HTTPSrvr1.png]]
<br>
<br>
A window will open; enter “Scripts” in the Alias field. Tick “Read” and ensure the other check boxes are unticked, then select the “Execute (include scripts)” radio option and choose “Ok”.
<br>
<br>
[[Image:RMSS_HTTPSrvr2.png]]
<br>
<br>
Select OK again on the folder properties window to close the window.
<br>
<br>
<li> Copy the HttpSrvr.dll to your scripts directory.
<li> Open the IIS Manager (or right click on My Computer) and expand the “Internet Information Services”/”Web Sites”/”Default Web Site” tree.
<br>
<br>
[[Image:RMSS_HTTPSrvr3.png]]
<br>
<br>
<li> Right click on the “Scripts” object and choose “properties”.
<li> On the properties window select the “Virtual Directory Tab” and enter an Application Pool name:
<br>
<br>
[[Image:RMSS_HTTPSrvr4.png]]
<br>
<br>
<li> Still in the properties window, select the “Directory Security” tab and select the edit button in the “Authentication and access control” section:
<br>
<br>
[[Image:RMSS_HTTPSrvr5.png]]
<br>
<br>
<li> On the Authentication methods tab, tick “Enable anonymous access” and untick any other options. In setting the anonymous authentication user, you can do one of three things: leave the user name as the built-in anonymous user account (in which case you will have to grant the anonymous user account access to the application server); create a dedicated local service account for the purpose (eg. RiskManager Services - in which case you will have to create it with service execution rights and grant it access to the application server exe - see below); or, as is our suggestion, use the built-in local computer administrator account as the activation Id. There are good arguments for all options, but it is preferable to use either the local admin account or a dedicated application services account created specially for the purpose. The principal advantage of the configuration illustrated is that the socket server access method and the HTTPSrvr access method will share the application server activation space, and the application server will show on the administration console system tray when activated. Illustrated is our suggested configuration. By using the local administrator you can keep the application pool operating under the lower network service account. If you use the local administrator as the anonymous activation account, you should not have any other libraries in the scripts directory besides the HttpSrvr library (and irrespective of the account used, that is probably the preferred scenario). Later in these instructions we illustrate configuring application server access using a dedicated RiskManager Services account. You will not be able to use the built-in local system account, as you need the password at some points to configure it. After entering your settings, select “OK” to close the window.
<br>
<br>
[[Image:RMSS_HTTPSrvr6.png]]
<br>
<br>
<li> On the IIS manager, expand the Application Pools tree and right click on the icon matching the application pool name you created in step 6. Select properties from the menu.
<br>
<br>
[[Image:RMSS_HTTPSrvr7.png]]
<br>
<br>
<li> On the properties window, tick the Recycling check box on the Recycling tab and set the Recycling parameters to a value that makes sense (or accept the default). The recycling interval should be long, but recycling should occur at least every 1 to 3 days, or at a fixed time each day or night when users are unlikely to be accessing the application.
<li> Select the Performance tab and ensure that the “Web Garden” has a maximum number of processes set to 1.
<br>
<br>
[[Image:RMSS_HTTPSrvr8.png]]
<br>
<br>
<li> Select ok to close the properties window.
<li> On the IIS Manager select the “Web Service Extensions” tree node and in the right hand panel select “All Unknown ISAPI Extensions” and select the “Allow” button.
<br>
<br>
[[Image:RMSS_HTTPSrvr9.png]]
<br>
<br>
<li> Finally select the “Default Web Site” tree node in the IIS Manager and stop and restart the web site using the stop and play buttons or the stop and start options in the context menu that appears when you right click the icon.
<li> Next we have to set the execution parameters of the RiskManager DCOM server to allow the HTTPSrvr to access it. (This step is not required if you are using only the socket server.) Using Windows Explorer, navigate to the directory in which the BPC RiskManager Dataserver was installed. Usually this will be:
<br>
<br>
C:\Program Files\BishopPhillips\BPCRiskManager\ApplicationFiles
(But it may be a little different depending on the version of BPC RiskManager you are installing).
<br>
<br>
* Locate the executable file “RiskManagerDataserver.exe”
* Double click on the file to start the application server.
* A green disk should appear in the system tray.
* Double click on the green disk and the BPC RiskManager DataServer configuration window will open, and the Dataserver will silently register itself as a DCom service.
* Choose “End Process” and select “Ok” in the warning window that opens. The application will close and the green disk will disappear.
<br>
<br>
<li> Open the Component Services Manager. This may be found in a few places. On W2003 it may be found by selecting the Start Button and choosing “Administration Tools/Component Services” from the programs menu.
<li> Expand the tree in the left panel by expanding “Component Services” / “Computers” / “My Computer” / “DCom Components”
<li> Locate the “RiskManagerServices Object” in either the right panel or the list of nodes below the DCom Components and right click on it and choose the properties option to open the properties window.
<br>
<br>
[[Image:RMSS_HTTPSrvr10.png]]
<br>
<br>
<li> On the Properties window for the component select the “General” tab and ensure the Authentication level is set to default.
<br>
<br>
[[Image:RMSS_HTTPSrvr11.png]]
<br>
<br>
<li> On the Security tab select “customize” for both the “Launch and activation permissions” and the “Access Permissions”.
<br>
<br>
[[Image:RMSS_HTTPSrvr12.png]]
<br>
<br>
<li> Select the Edit button for the Launch and Activation permissions and the permissions window will open. Add the user you chose in step 8. In our example we have used the local administrator account, but we have also shown the window with a dedicated, specially created “RiskManager Service Account” (which, when created, must have rights similar to a local service account). In either case grant all rights to the chosen activation account. If you are using the local administrator account, be sure to add that account and not a network administration account.
<br>
<br>
[[Image:RMSS_HTTPSrvr13.png]]
<br>
<br>
<li> If you are using the builtin administration account, you can skip this step. Select the Edit button for the “Access Permissions” and grant the account used in step 8 access to the component by clicking on the “add” button and granting all rights, as shown. When completed, close the window.
<br>
<br>
[[Image:RMSS_HTTPSrvr14.png]]
<br>
<br>
<li> Select the Identity tab on the RiskManager services property window and select the “Interactive user” radio button. Close the property window.
<br>
<br>
[[Image:RMSS_HTTPSrvr15.png]]
<br>
<br>
<li> Close the component services manager window.
</ol>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
c0d3f80503601c17c348d2a7ca92ec2e306759be
Configure the BPC RiskManager & BPC SurveyManager Application Server
0
397
592
2019-09-10T14:49:16Z
Bishopj
1
Created page with "=Introduction= The single user version of BPC RiskManager will work essentially 'out of the box' if you have used the defaults, but you should probably check the configuratio..."
wikitext
text/x-wiki
=Introduction=
The single user version of BPC RiskManager will work essentially 'out of the box' if you have used the defaults, but you should probably check the configuration by working through these steps regardless. If you are installing the enterprise configuration or intend to use the BPC SurveyManager web component, you will need to work through these steps. The application server automates almost all the steps of initial configuration or updating the configuration for you. You can run these steps as many times as you like - you will not hurt anything - although, obviously, you will update the configuration with whatever new information you enter.
The first time you start the BPC RiskManager application server you will automatically register it on your server (or desktop computer).
Using the application server configuration screens you can:
* Connect a database to the application server
* Generate the web pages and publish the web client access page(s) to web site(s)
* Create and Publish the BPC SurveyManager web site(s)
* Connect the mail manager to an SMTP mail server (or outlook mail transport system)
* Determine the access security model and set the LDAP access rules
* Create the root administrator user id for a database
=Configuring the Application Server=
Perform the following steps in order:
# [[BPC RiskManager - Registration]]
# [[BPC RiskManager - General Configuration]]
# [[BPC RiskManager - Database Configuration]]
# [[BPC RiskManager - Send Mail Options Configuration]]
# [[BPC RiskManager - Mail Server Connection Properties]]
# [[BPC RiskManager - Security Configuration]]
# [[BPC RiskManager - Logging Configuration (OPTIONAL)]]
# [[BPC RiskManager - Create the Root Administrator]]
# [[BPC RiskManager - Configure HTTPSrvr Library]] (Optional)
# [[BPC RiskManager - Distribution of Client Components]]
# [[BPC RiskManager - Install The SurveyManager]] (Optional)
# [[BPC RiskManager - Configure Risk Mail Manager]] (Optional)
# [[BPC RiskManager - Test a Client Connection]]
=BackLinks=
*[[RM625ENT Installation Instructions]]
2b6576156a72fbee80d5e7d366674d3775c72d18
BPC RiskManager - Registration
0
398
593
2019-09-10T14:50:22Z
Bishopj
1
Created page with "The Risk Manager application server is a COM server. The COM server needs to be registered first before it can be used to serve data between the risk database and client prog..."
wikitext
text/x-wiki
The Risk Manager application server is a COM server. The COM server needs to be registered first before it can be used to serve data between the risk database and client programs. To register the COM server please run the following program once:
Either:
# Select the “Start” button and choose the RM DataServer from the BishopPhillips folder in the programs menu, or
# In Windows explorer, navigate to [RMInstallDir]\ApplicationFiles\RiskManagerDataServer.exe (or RiskManagerDataServer6xx.exe).
The BPC RiskManager DataServer (the application server layer) WILL require read/write privileges to the registry of the application server. This is the default install scenario, but if your site is using unusual lockdown arrangements, you should ensure that the interactive users used by the socketserver and the HTTPSrvr (if in use) have read + write access to the HKEY_LOCAL_MACHINE registry hive. These accounts are the Local System account (for the socket server) and whichever account you chose for the HTTPSrvr.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
294ce9c83c75d5db746eed99377d60998c4c3886
BPC RiskManager - General Configuration
0
399
594
2019-09-10T14:51:22Z
Bishopj
1
Created page with "=Introduction= If you have not started the application server (BPC RiskManager DataServer), go to the start menu on the server computer and select the corresponding menu item..."
wikitext
text/x-wiki
=Introduction=
If you have not started the application server (BPC RiskManager DataServer), go to the start menu on the server computer and select the corresponding menu item from the start menu.
When started, the application server appears as an icon in the Windows system tray, typically located in the lower right hand corner of your screen. Please double click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program.
=Configuration=
* On the General tab set the Risk Manager Edition to Web Edition (the default value is Single User). This change will enable extended database and security configurations only available to the Web Edition. Click ‘Save Settings’ to save this change.
[[Image:RMDS_GP2.png]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
a376ea2cfdfec6af96ab2a9da39b6e5e8f6a256e
BPC RiskManager - Database Configuration
0
400
595
2019-09-10T14:52:35Z
Bishopj
1
Created page with "=Introduction= The Database Configuration tab is used to configure the database ADO connections used by the application server to connect to the RiskManDB. Please add one e..."
wikitext
text/x-wiki
=Introduction=
The Database Configuration tab is used to configure the database ADO connections used by the application server to connect to the RiskManDB.
Please add one entry per risk database for the application to connect to. We recommend configuring one connection as the Default connection; this connection string is used when no connection is selected by the user in the RiskManager client program. Remember to ‘Save’ each connection after editing any property.
Typical uses of more than one risk database are production, training and user acceptance testing.
=Configuration=
[[Image:RMDS_GP3.png]]
<ol>
<li> Select the “RM Database Configuration” tab
<li> Either select a row in the grid to change a connection’s details or select the “New” button to commence a new connection.
<li> Assuming you are entering a new connection, enter the name of the connection in the “Connection Name” field. Keep this name short, DO NOT INSERT SPACES, and use ONLY letters, digits and underscores. Make it sensible, as this is the name by which users will know the underlying database: it may have to be typed at some points by users, it may be part of a web URL (hence no spaces), and it should make sense so users can remember it if needed.
<li> If this will be the default connection set the “Default Connection” to yes. While you may have more than one default connection, only the first in the list will be seen by the application when the default connection is requested.
<li> Select the ellipsis button to build a new database connection string that will be stored with this connection.
<li> A connection string build window will open. Select “Use Connection String” and select the “Build” button.
<br>
<br>
[[Image:RMDS_GP4.png]]
<br>
<br>
<li> In the “Data Link Properties” window select the “Provider” tab and select the “Microsoft OLE DB Provider for SQL Server” OLE DB Provider from the list.
<br>
<br>
[[Image:RMDS_GP5.png]]
<br>
<br>
Notes for building ADO Connection Strings:
* Always use the nominated provider below for each database:
** SQL Server: ‘Microsoft OLE DB Provider for SQL Server’
** Oracle: ‘Oracle Provider for OLE DB’.
** Interbase: Select ODBC driver for Interbase (needs to be installed separately).
<br>
<br>
<li> In the “Data Link Properties” window select the “Connection” tab (or select “Next”) and enter the connection details for the database to which you wish to connect.
<br>
<br>
'''SQL Server 2000 / SQL Server 2005/ MSDE'''
* In the “select or enter server name” field either:
** If the database server is on the same computer as the application server, simply enter a “.” (full stop or period).
** If the database is on a different computer, select the machine from the drop down list, or type its name.
<br>
<br>
'''SQL Express 2005'''
* SQL Express 2005 installs, by default, into a named instance on your computer, so the correct entry for the server name field must include “\SQLExpress”.
** If the database server is on the same computer as the application server, simply enter “.\SQLExpress”.
** If the database is on a different computer, select the machine from the drop down list, or type its name and add “\SQLExpress” to the end of the server name.
<br>
<br>
[[Image:RMDS_GP6.png]]
<br>
<br>
<li> Select the “Use a specific user name and password” radio button.
<li> Enter “riskmanuser” (or “sa” or other username as appropriate) in the “User name” field and the appropriate password. If you are using SQL Server 2000 or MSDE 2000 with “sa” and a blank password (not recommended), you should tick the “Blank Password” check box.
<li> Tick the “Allow saving password” check box.
<li> Select the “Select the database on server” radio button and choose your database from the drop down list. If all the details entered were correct, the database name will appear in the drop list.
<li> Select “Test Connection” to verify that we can connect to the database.
<li> If the connection was successful, a message to that effect will appear, and you should select OK to generate the details as a connection string.
<li> Select OK again to cause the new connection to be written to the database.
<li> A couple of message windows may appear at this point reminding you that the survey and incident engine connections have been set to the same database, which is the default setting (and the recommended setting in most circumstances).
<li> On the RM Database tab of RiskManager Dataserver, if the “Save” button is active, Select “Save” to ensure the details are saved.
<br>
<br>
[[Image:RMDS_GP7.png]]
<br>
<br>
</ol>
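As a concrete illustration, a finished SQL Server connection string produced by the steps above would look something like the following. The password is a placeholder, RiskManDB is used as an example catalog name, and “.\SQLExpress” is the local named instance discussed in step 8:

```
Provider=SQLOLEDB.1;Persist Security Info=True;User ID=riskmanuser;Password=<your password>;Initial Catalog=RiskManDB;Data Source=.\SQLExpress
```

“Persist Security Info=True” corresponds to the “Allow saving password” check box.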
'''''Notes:'''''
*Do not choose Windows NT Integrated Security. NT integrated security uses the security credentials of the interactive NT account to connect to the database. This interactive user is not always the same user as the system administrator performing this configuration. The application server is launched and run under the context of the ID running the Borland Socket Server (configured in step 9: Install Socket Server as a Service). The Log On used by the Borland Socket Server (and many other services) is the ‘Local System Account’. Note: A different account can be configured using Windows Services. If you use windows authentication you would have to provide either that account or a specially created service account (with similar rights) to have access to the database – which is near administration level.
* Be sure to select to the save the password in the connection string: option ‘Allow saving password’.
* Please test all connections before saving.
* The program ‘RiskMailManager’ re-uses all database configurations made in this form.
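The connection-name rules in step 3 (short, no spaces, only letters, digits and underscores) can be expressed as a simple check. This sketch and its function name are ours, not part of the product:

```python
import re

# Letters, digits and underscores only, so the name is safe inside a web URL.
_NAME_RE = re.compile(r"[A-Za-z0-9_]+")

def valid_connection_name(name: str) -> bool:
    """Return True if name satisfies the Connection Name rules in step 3."""
    return bool(_NAME_RE.fullmatch(name))
```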
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
cc48a20e79a44fb098d7dbd7b866133332bd82a2
BPC RiskManager - Send Mail Options Configuration
0
401
596
2019-09-10T14:53:46Z
Bishopj
1
Created page with "=Introduction= Email can be automatically sent to users on change of responsibility for risk and mitigating strategy records. This function is configurable. By default the c..."
wikitext
text/x-wiki
=Introduction=
Email can be automatically sent to users on change of responsibility for risk and mitigating strategy records. This function is configurable, and by default the options are not selected; update this tab to enable the function. We recommend not selecting these settings during training or user acceptance phases, as email messages will be generated.
There are two ways that email messages are generated by the system: either instantly, as soon as a triggering event (like a change in risk details or a change in strategy details) occurs, or on a schedule, through mail queues built as a result of an event. Only two of the possible scenarios are controlled here (many others are managed only in the administration screens of the client).
The two checkboxes here enable instant messages. These messages have minimal configuration options and are sent in text-only format, with the advantage that they are sent immediately a change occurs. The more advanced email messaging uses fully customizable, report-based email messages in RTF/HTML format that you can define to include almost any information you like; these are set up from within the application server client (i.e. not here).
[[Image:RMDS_GP8.png]]
The two check boxes are as follows:
# Send email messages on change of risk responsibility. Ticking the box will enable messages to risk owners and delegates when responsibility changes. These messages do not have an advanced customizable version, so this option would generally be used if such messages are desired. We recommend ticking this box LATER, after you have done your initial data setup, unless you want messages to be dispatched as you load up initial risks. If you are converting risks from another system, your current risk owners will generally already know they have responsibility for some risks.
# Send simple email messages on change of mitigating strategy/control/treatment responsibility. Ticking the box will REPLACE the equivalent end-user / mail-queue based functionality with instant, text-only simple messaging that advises users as soon as a strategy detail is changed. Most users will prefer advanced report layouts for strategy notification, so you will generally NOT use this function. One advantage of ticking the box here is that the messaging requires no further configuration to be available; another is that it is instantaneous (while the client-configured version requires some additional configuration in the client, and the messages are queued until the scheduled mail manager sends them). As with the risk responsibility box, you will, in any case, probably want to delay activating this feature until you have set up the risk register for the first time.
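The two delivery paths described above can be sketched as follows (a deliberately simplified Python model; the class, method and field names are ours for illustration, not the product's):

```python
from dataclasses import dataclass, field

@dataclass
class MailSystem:
    """Toy model of the two delivery paths: instant text-only messages
    versus queued, report-based messages sent later by RiskMailManager."""
    instant_on_responsibility_change: bool = False   # checkbox 1 above
    sent: list = field(default_factory=list)         # dispatched immediately
    queue: list = field(default_factory=list)        # waits for the scheduled mail manager

    def on_risk_responsibility_change(self, owner: str) -> None:
        msg = f"Responsibility changed: notify {owner}"
        if self.instant_on_responsibility_change:
            self.sent.append(msg)    # plain-text message, sent at once
        else:
            self.queue.append(msg)   # report-based message, sent on schedule

mail = MailSystem(instant_on_responsibility_change=True)
mail.on_risk_responsibility_change("jsmith")
assert mail.sent and not mail.queue
```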
=Configuration=
# Select the appropriate sendmail options and click “Save Settings”
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
0f3c62a39e4d771edf79f1c287271ec252f8f761
BPC RiskManager - Mail Server Connection Properties
0
402
597
2019-09-10T14:54:47Z
Bishopj
1
Created page with "=Introduction= This step is optional – if you will not be using email reminders or surveys you will not need to configure email. You may not want to (probably WILL NOT wa..."
wikitext
text/x-wiki
=Introduction=
This step is optional – if you will not be using email reminders or surveys, you will not need to configure email. You may not want to (probably WILL NOT want to) turn on Risk Mail Manager (a separate application) during initial installation. The RiskMailManager program handles batch-based email, which covers most of RiskManager's mail requirements, with the exception of the instant messages in Step 4 and the survey publication messages used by the survey system. Once scheduled, the mail manager will start sending emails to users when the triggering conditions are met (like changes in responsibilities for risks and strategies, etc).
The settings you make here will ALSO be used by the RiskMailManager application, and the survey engine.
=Configuration=
On the RM SendMail configuration tab:
[[Image:RMDS_GP9.png]]
<ol>
<li> Enter the appropriate details in the boxes provided
<br>
<br>
<table border=1>
<tr>
<td>
Select Mail Connection
</td>
<td>
Not all editions of Windows support the SMTP Server Protocol. Microsoft Outlook requires separate installation and configuration.
</td>
</tr>
<tr>
<td>
SMTP Host Address
</td>
<td>
Set to the outgoing SMTP mail server.
</td>
</tr>
<tr>
<td>
SMTP Server Port
</td>
<td>
The default port number is 25.
</td>
</tr>
<tr>
<td>
SMTP Server User ID
</td>
<td>
Recommend leaving this setting blank (unless your SMTP server requires a user ID)
<br>
<br>
If you are using the setup we detailed earlier for SMTP this field is blank.
</td>
</tr>
<tr>
<td>
SMTP Server Password
</td>
<td>
Recommend leaving this setting blank (unless your SMTP server requires a password)
<br>
<br>
If you are using the setup we detailed earlier for SMTP this field is blank.
</td>
</tr>
<tr>
<td>
SMTP Server From Address
</td>
<td>
Enter a valid email address that users can use to reply to any e-mail messages sent to them – EG: administrator@your-organisation.com
</td>
</tr>
<tr>
<td>
SMTP Server From Name
</td>
<td>
Enter a name that can identify the above user account – EG: This can be generic such as ‘Risk Manager System’ or the name of your organisation's risk manager such as ‘John Citizen’.
</td>
</tr>
<tr>
<td>
Default Message Format
</td>
<td>
Text or HTML. You will generally want to use HTML, as this allows more complex and attractive layouts to be built for emails in the end-user reporting tool of the client application.
<br>
<br>
Email clients must be able to receive HTML mime type messages for this to work. Most modern email clients operate in this mode by default.
</td>
</tr>
</table>
<br>
<br>
* Please send a test message by clicking on the “Send Test Message” button to verify the settings made above. The test message should be received within a short period of time.
* Click the “Save Mail Properties” button to save the settings to the system registry.
<br>
<br>
<li> The checklist on the right lists all of your currently defined connections. Tick each connection to which you wish to apply these settings, and then select “Apply Settings To These SurveyManager connections”.
</ol>
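The same SMTP details can be sanity-checked outside the application with a short script, for example using Python's standard smtplib. The host, port and addresses below are placeholders; the actual send is commented out so it only runs against a real server:

```python
import smtplib
from email.message import EmailMessage

# Placeholders: substitute your own SMTP host, port and addresses.
SMTP_HOST, SMTP_PORT = "mail.example.com", 25

msg = EmailMessage()
msg["From"] = "Risk Manager System <administrator@your-organisation.com>"
msg["To"] = "you@your-organisation.com"
msg["Subject"] = "RiskManager SMTP test"
msg.set_content("If you can read this, the SMTP settings are working.")

# Uncomment to actually send; raises on connection or authentication failure:
# with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
#     smtp.send_message(msg)
```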
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
6e86a166308bef7d08e8dfc1e2b5d636e33f8110
BPC RiskManager - Security Configuration
0
403
598
2019-09-10T14:55:53Z
Bishopj
1
Created page with "* [[Security Configuration - First Time Installation]] * [[Security Configuration - Update Installation and Reset]] =BackLinks= {{#dpl: linksto={{FULLPAGENAME}} }}"
wikitext
text/x-wiki
* [[Security Configuration - First Time Installation]]
* [[Security Configuration - Update Installation and Reset]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
bd8413e490b64a5ef75e040aa3189c9b174cc6f5
Security Configuration - First Time Installation
0
404
599
2019-09-10T14:56:41Z
Bishopj
1
Created page with "On the Security Configuration tab you select your preferred method for assigning secure access to Risk Manager. For the first time install using as an Enterprise system, you..."
wikitext
text/x-wiki
On the Security Configuration tab you select your preferred method for assigning secure access to Risk Manager. For a first-time install as an Enterprise system, you should use the settings as illustrated in the diagram. After you have successfully connected with a client, you should revisit “Part B” of this step and set the security to the mode you wish to use.
<ol>
<li> (First Time Install) For the first-time install, we will assign the application to use internal user name and password verification. By default the system should install with "Always Use Selected Group", with the group set to "Administrator". This is OK for single user, or for initial testing of the installation, as it assumes that any user granted access to the application has the rights defined in the selected group. Alternatively you can set it initially to use local application-managed security, which is the setting we are illustrating here. Either setting is OK until you have tested that a client can connect. After that you should make a formal decision about the security model you want to use. We revisit this topic later in the install.
<li> On the Security Configuration tab set the fields as follows:
<br>
<br>
[[Image:RMDS_GP10.png]]
<br>
<br>
<li> Select Save Settings.
<br>
<br>
</ol>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
450307c39e42974dee257b768983654b37853780
File:RMDS GP10.png
6
405
600
2019-09-10T15:00:43Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Security Configuration - Update Installation and Reset
0
406
601
2019-09-10T15:02:45Z
Bishopj
1
Created page with "=Introduction= ==When to do this== Do this part on update installation when advised that changes have been made to the security model or when you wish to change the security..."
wikitext
text/x-wiki
=Introduction=
==When to do this==
Do this part on an update installation when advised that changes have been made to the security model, when you wish to change the security model, or AFTER an initial install once you have successfully connected to RiskManager using the client programme at least once.
By default the system will install with "Always Use Selected Group". This ensures that you can access the application initially and auto-create at least one resource ID. In this mode anyone connecting will be assumed to be authorised, and resource IDs are allocated automatically on first connection using the user's currently active Windows login account profile. Changing from that setting effectively engages the user tables and thus enables security in the selected mode, while preserving the auto resource ID creation behaviour for anyone connecting; it does not grant them access to the application, because they must have a user profile explicitly added by an application administrator.
==Before we begin: Start The Application Server==
If you have not started the application server (BPC RiskManager DataServer), go to the start menu on the server computer and select the corresponding menu item from the start menu.
==Starting Configuration==
When started, the application server appears as a service in the Windows system tray, typically located in the lower right hand corner of your screen. Please double click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program.
Now navigate to the "Security Tab" of the application server configuration screen.
<br>
<br>
[[Image:RMDS_GP10.png]]
<br>
<br>
=Configuration=
<ol>
<li> On the Security Configuration tab select your preferred method for assigning secure access to Risk Manager.
<br>
<br>
<table border=1>
<tr>
<th width="30%" >
Secure Access Method
</th>
<th>
Description
</th>
</tr>
<tr>
<td>
Membership of NT Global Groups
</td>
<td>
Users are assigned access based upon which NT global groups they are a member of. For each Risk Manager role, a global NT group needs to be created. NT users are next assigned to one or many of these groups based on their designated application access level as directed by the Risk Management group. An NT administrator is required to maintain membership of the NT groups.
<br>
<br>
A database table is used to map the NT group names to the Risk Manager roles. Records are added to this table using the SQL script EnterConfigDataScript_Enterprise.sql. The table can also be maintained by a Risk Manager administrator using the application.
</td>
</tr>
<tr>
<td>
Membership of NT Local Groups
</td>
<td>
Users are assigned access based upon which NT local groups they are a member of. For each Risk Manager role, a local NT group needs to be created. NT users are next assigned to one or many of these groups based on their designated application access level as directed by the Risk Management group. An NT administrator is required to maintain membership of the NT groups.
<br>
A database table is used to map the NT group names to the Risk Manager roles. Records are added to this table using the SQL script EnterConfigDataScript_Enterprise.sql. The table can also be maintained by a Risk Manager administrator using the application.
</td>
</tr>
<tr>
<td>
Assign Access in Application (Login Not Trusted)
* Preferred Setting without LDAP or AD
</td>
<td>
Users are assigned access based upon their individual profile stored in the risk database. Secure access is configured by a Risk Manager administrator user using a security form in the Risk Manager application. Passwords are required to login and held in encrypted form in the RiskManager system. This method does NOT need an NT administrator to maintain user access.
</td>
</tr>
<tr>
<td>
Assign Access in Application (Login Trusted)
* Alternative Preferred Setting without LDAP or AD
</td>
<td>
Users are assigned access based upon their individual profile stored in the risk database. Secure access is configured by a Risk Manager administrator user using a security form in the Risk Manager application. The username is restricted to the user's Windows username, and separate passwords are NOT required to login, as the user is assumed to already have a valid Windows login to use the system. As access to actions and risk areas is defined per user in the application, merely connecting to the system does not automatically grant edit rights in the system. This method does NOT need an NT administrator to maintain user access.
</td>
</tr>
<tr>
<td>
Always Use Selected Group
</td>
<td>
This method is primarily used for testing purposes or in single-user mode. All users are assigned access to the role selected in ‘Always Use Group Selection’ on the lower panel, along with the Audit status selected there.
</td>
</tr>
<tr>
<td>
LDAP User Verification
* Preferred Setting if LDAP is available
</td>
<td>
Username and password details entered are verified using LDAP authentication. Once the user’s identity is verified the role is assigned from the individual profile which is stored in the database.
</td>
</tr>
<tr>
<td>
AD User Verification
* Preferred Setting if Active Directory is available
</td>
<td>
Username and password details entered are verified using AD (MS Active Directory) authentication. Once the user’s identity is verified the role is assigned from the individual profile which is stored in the database.
</td>
</tr>
</table>
<br>
<br>
* A test configuration program is available to test the results of using either of the two available NT group methods. This program is named ‘NTServicesTest.exe’. Please request this separately from Bishop Phillips Consulting.
<br>
<br>
* A test configuration program is available to test the results of using LDAP User Verification. This program is named ‘LDAPServicesTest.exe’.
<br>
<br>
* A test configuration program is available to test the results of using AD User Verification. This program is named ‘ADSIServicesTest.exe’.
<br>
<br>
<li> Select the UserName format
<br>
<br>
On the Security Configuration tab, select the option to assign secure user identification. This setting is used to create a unique identifier for each user. Some networks enforce a unique username, and this alone can be used as an identifier. However, other networks allow a user to connect to the network when authenticated on the user's PC. This could potentially allow a user to create a local account in another user's name and impersonate that user. To uniquely identify users in this environment, select the setting ‘Use client domain and username’ to include the authenticating domain name in the client unique identifier.
The generally preferred setting is "Use client user name only", as this will facilitate connecting from across the organisation (and is also the default from V6.2.0).
<br>
<br>
<table border=1>
<tr>
<th>
Assign Secure User Identification
</th>
<th>
Description
</th>
</tr>
<tr>
<td>
Use client username only (RECOMMENDED)
</td>
<td>
The network username of the connecting user uniquely identifies the user. Eg: <Username>
</td>
</tr>
<tr>
<td>
Use client domain and username
</td>
<td>
The network domain name and username of the connecting user uniquely identifies the user. Eg: <Domain>\<Username>
</td>
</tr>
</table>
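The two identifier formats in the table can be sketched as follows (a minimal illustration; the function name is ours):

```python
from typing import Optional

def secure_user_id(username: str, domain: Optional[str] = None) -> str:
    """Build the unique identifier under each setting (illustrative only).

    "Use client username only"        -> username (recommended, default from V6.2.0)
    "Use client domain and username"  -> DOMAIN\\username
    """
    return f"{domain}\\{username}" if domain else username

assert secure_user_id("jsmith") == "jsmith"
assert secure_user_id("jsmith", "CORP") == "CORP\\jsmith"
```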
<br>
<br>
<li> If using LDAP, record the LDAP server settings.
<br>
<br>
LDAP User Verification Configurations:
<br>
<br>
<table border=1>
<tr>
<th width="30%" >
Configuration Property
</th>
<th>
Description
</th>
</tr>
<tr>
<td>
LDAP Server Name
</td>
<td>
Enter LDAP server name
</td>
</tr>
<tr>
<td>
LDAP DN Lookup Mask
</td>
<td>
Enter a DN value with a format parameter for the username. The format parameter needs to be entered as ‘cn=%s’. When user verification is performed, the %s characters are replaced with the username. All additional values work as filters to restrict access to RiskManager. Use of additional filters is required for large sites where the username is not globally unique.
EG: c=au, cn=%s, ou=Staff
</td>
</tr>
<tr>
<td>
LDAP Verification OK Value
</td>
<td>
Enter LDAP return string value for OK verification result. This supports any changes to the LDAP return messages.
EG: OK
</td>
</tr>
</table>
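The mask substitution described above behaves like ordinary %s string formatting, as this small sketch shows (example values taken from the table):

```python
# The DN lookup mask contains one %s parameter that is replaced with the
# username at verification time. Values below are the table's examples.
mask = "c=au, cn=%s, ou=Staff"
dn = mask % "jsmith"
assert dn == "c=au, cn=jsmith, ou=Staff"
```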
<br>
<br>
<li> If using AD, record the AD server settings.
<br>
<br>
[[image:RM_Config_ActiveDirectory_Step1.gif]]
<br>
AD User Verification Configurations:
<br>
<br>
[[image:RM_Config_ActiveDirectory_Step2.gif]]
<br>
<table border=1>
<tr>
<th width="30%" >
Configuration Property
</th>
<th>
Description
</th>
</tr>
<tr>
<td>
AD Authentication Server Name
</td>
<td>
(REQUIRED) Enter AD server name as:
*myauthenticationserver.mydomain.mynamespace or
*myauthenticationserver.mydomain (if no namespace is used)
</td>
</tr>
<tr>
<td>
AD Group
</td>
<td>
(OPTIONAL) Enter an AD group name of which an authenticating user MUST be a member. This is very rarely used and for normal uses this should be left blank.
</td>
</tr>
<tr>
<td>
Use LDAP or WinNT for object discovery
</td>
<td>
(REQUIRED) You will generally want to tick this box, as the WinNT authentication step involves a rejected-connection step that will increment the login attempt count and will trigger lockout if counters are set to lock out in under 4 tries.
</td>
</tr>
<tr>
<td>
LDAP Search DN Value
</td>
<td>
(OPTIONAL) You can enter additional LDAP search criteria here, such as DC values and a DN value with a format parameter for the username. The format parameter needs to be entered as ‘cn=%s’. When user verification is performed, the %s characters are replaced with the username. All additional values work as filters to restrict access to RiskManager. Use of additional filters is required for large sites where the username is not globally unique. EG:
*cn=%s, ou=Staff, DC=bishopphillips, DC=com
*DC=bishopphillips,DC=com
Normally this should be left blank.
</td>
</tr>
</table>
<li> '''When finished click ‘Save Settings’ to save changes made.'''
</ol>
<br>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
f1457e2ecea49ead5d807b1baf5cca8cc40c390b
BPC RiskManager - Logging Configuration (OPTIONAL)
0
407
602
2019-09-10T15:04:18Z
Bishopj
1
Created page with "Some of the activites performed by the application server program can be written out to a log file. To enable this configuration: * Check the ‘Enable Logging’ control,..."
wikitext
text/x-wiki
Some of the activities performed by the application server program can be written out to a log file. To enable this configuration:
* Check the ‘Enable Logging’ control, select a folder for the ‘Log File Directory’
* Select the ‘Save Settings’ button.
When configured a new log file will be created for each day and stored in the nominated folder.
In the event that the program is unable to write to the log file, all messages are written to the form in the trace errors control. Possible causes are the folder having been moved or the folder being read-only.
'''DO NOT TURN LOGGING ON UNLESS REQUESTED – THIS IS FOR PROBLEM SOLVING ONLY'''
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
5a96d3a901c5dd04c5487269acb8eefc63018add
BPC RiskManager - Create the Root Administrator
0
408
603
2019-09-10T15:05:30Z
Bishopj
1
Created page with "=Introduction= This tab has only one purpose: to create a single account that will be used to create the first application / risk administrator account. You only need to us..."
wikitext
text/x-wiki
=Introduction=
This tab has only one purpose: to create a single account that will be used to create the first application / risk administrator account. You only need to use this account if you have NOT loaded account(s) during database setup.
Note: If you are restoring an already ‘in use’ database, or upgrading from an earlier version, your access accounts will already exist and you do not have to do any of this and can skip this step entirely.
Only one rootadmin can exist in each database. It will be linked to the network logon ID you enter in the user name field on this screen, and that user ID cannot therefore be used as a normal user ID until another ID (or nonsense string) is entered into the username field on this screen. We recommend using the Administrator ID, as this account should not normally access the system.
=Configuration=
==Part A==
[[Image:RMDS_GP11.png]]
To set up the rootadmin:
# For each database configured on the database connections tab, choose the database connection from the database tab. When you connect to the database the ‘not connected’ string will change to ‘connected’.
# Enter the domain (if using domain/username style authentication). This field will only be useable if you have enabled domain/username authentication in the security configuration above.
# Enter a real network user ID and an email address for that account in the username and email fields respectively. The username must correspond to the user ID that will be used to access the system in rootadmin mode, because the application will attempt to authenticate the rootadmin as that user ID with the network security system. The user ID chosen does not need any special rights, except that while it is associated with the rootadmin account it will not be able to be a normal BPC RiskManager user. So if ‘jackstraw’ is used as rootadmin, you will not be able to create a separate jackstraw user of the system, and his name will always appear as ‘rootadmin’, not Jack Straw. You can of course simply assign a different nonsense user ID to the rootadmin later, which will free up jackstraw for reuse as a real user ID. We suggest using Administrator for this reason.
# Select the ‘Create’ button.
# Repeat for each database.
==Part B==
Once the rootadmin has been used to create another risk administrator account, the new risk administrator can remove the admin rights from the rootadmin account from within BPC RiskManager through the secure access screen. The rootadmin will then have no powers within the application, and this screen cannot be used to re-grant powers to the rootadmin account. In other words, it is a one-way exercise:
# Allocate the rootadmin account to a real user account – we will call that account ‘administrator’ for the purposes of this discussion.
# Attempt to login to the application using another real account – we will call that account ‘smith’ for the purposes of this discussion (so the access details are recorded) – this login will fail (which is correct).
# Login to the application as the rootadmin (automatically done by using the chosen network ID as the login account) – in this case, administrator.
# Look for ‘smith’ in the resources section of the security screen and grant administration rights to that account (i.e. access all areas + auditor + user mode of ADMINISTRATOR). We have now created the real administrator.
# Logout of the application as rootadmin.
# Login as ‘smith’. Using this account, look for rootadmin in the security screen, remove its ADMINISTRATOR access, restrict its access to defined areas only (defining no areas of access), and remove its auditor rights. Rootadmin can now no longer do anything.
# The new real administrator (smith) may proceed with creation of accounts as required.
Rootadmin will never be required again, but if for some reason you want it back, just re-enable its ADMINISTRATOR rights and its access-all-areas right. Of course, you would have to do this from another administrator ID.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
af53bf7d1359ac869005fded88f6ae7852cca414
BPC RiskManager - Configure HTTPSrvr Library
0
409
604
2019-09-10T15:07:09Z
Bishopj
1
Created page with "=Introduction= NOTE: This step is optional and ONLY relevant if you are using the HTTPSrvr broker. The default setup of BPC RiskManager initialises the SocketServer as the b..."
wikitext
text/x-wiki
=Introduction=
NOTE: This step is optional and ONLY relevant if you are using the HTTPSrvr broker. The default setup of BPC RiskManager initialises the SocketServer as the broker. The HTTPSrvr is optional.
The HTTPSrvr library is a packet broker, just like its cousin, the SocketServer. The SocketServer and the HTTPSrvr are the two methods that a RiskManager client can use to talk to the RiskManager DataServer. They can both operate at the same time on the one server.
As its name implies, the HTTPSrvr supports HTTP/HTTPS communications via an IIS web server. The IIS web server must be on the same server as the application server to which it talks. Its job is to listen on the server for incoming client connections and invoke a session on the application server for that client connection. It holds the client and server connections while a packet stream is being exchanged, resending if required. It distributes packets to the appropriate application server session.
Just like the SocketServer, it can be started and used without further configuration, but there are a few configuration options available that may be important to you. In particular, if you will be running with more than 32 simultaneous connections, you will need to perform some configuration. The HTTPSrvr defaults to 32 simultaneous sessions, which roughly (but not entirely) matches the number of users. There are, however, times when additional sessions may be established from a client, so it is wise to leave a minimum of about 5 to 10 spare sessions, and about 10% to 15% headroom at most.
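The sizing guidance above can be expressed as a quick rule-of-thumb calculation (the formula below is our own reading of the advice, not something the product provides):

```python
import math

def max_sessions(expected_users: int) -> int:
    """Rule of thumb from the text: leave at least ~10 spare sessions,
    or roughly 15% headroom, whichever is larger (illustrative only)."""
    with_headroom = max(expected_users + 10, math.ceil(expected_users * 1.15))
    return max(32, with_headroom)   # never below the default of 32

assert max_sessions(20) == 32      # the default of 32 is already enough
assert max_sessions(60) == 70      # 60 + 10 spare beats 60 * 1.15 = 69
```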
=Configuration=
Once installed and configured under IIS (see [[Install Socket Server as a Service And HTTPSrvr as an ISAPI library]]), the HTTPSrvr options are configured in the RiskManagerDataServer on the application server.
*On the application server computer, start the RiskManagerDataServer from the start menu. Because we are going to set registry settings, remember to start it as "run as administrator" on Vista or higher OS (right click on the RiskManager DataServer application in the start menu and choose the "run as administrator" option from the menu if it is available) or the settings will not be saved to the correct profile.
*A green disk will appear in the system tray. Double click on that disk to open the configuration screen.
*Select the HTTP/HTTPS tab. You will see a panel similar to the illustration.
[[Image:RMDS_HSS01.png]]
*The options available are:
**Disable Compression - By default compression is enabled, and there should be no reason to disable it (except during debugging). Compression in HTTP/HTTPS comms is important, as it significantly improves system performance by reducing network transmission time at the expense of slightly increased server load. Our advice is to leave it alone. If you disable it, you will also have to disable it on the clients.
**Maximum simultaneous connections - The default is 32. If you are expecting more than about 23-25 simultaneous connections, increase this to the maximum number of connections you think are likely, plus 15%. There is no theoretical upper limit, but the background server load will increase the higher this number is raised, as the garbage collector will have to work harder and more memory will be consumed. Normally we suggest pushing it up to about 100 the first time you decide you will need to exceed 32 users, and then raising it in increments of 32 thereafter.
**Recycle time - This is the amount of time the garbage collector waits before cleaning the session pool, and hence the lag for cleaning used session slots. The default is 6 minutes, but 2 minutes is possibly better; we run our servers on 2 minutes. Anything below that starts to load the server for little or no improvement. Certain types of comms failures may require the garbage collector to have swept the sessions before a connection can be re-established, so while 6 minutes puts almost no noticeable load on your server, it can be a little long over failure-prone HTTP connections.
**The log file directory - This is ONLY used when logging is enabled. We recommend setting it up for when it is required but not actually using it unless we advise you to. The directory must be writable by the id under which the HTTPSrvr is run on IIS.
**Communication Problem Resolution - This section is ONLY for troubleshooting. Do NOT enable this; you would generally only enable it if BPC support requested you to. Logging in this case is not about tracking access and alerts, it is about dumping communications packets to the drive – just like running a sniffer permanently on your network. It is very, very heavy on the server, as it essentially dumps everything being sent between all clients and the server.
*Remember to press "Save Settings" if you made any changes.
*Finish your configuration session by choosing "End Process" and "OK".
*On IIS, stop and start the worker process that hosts HTTPSrvr.dll to force it to read the new settings.
*Connect from a remote client using HTTP to verify that the HTTPSrvr is operating.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
6d2897df2864be422e759456c5850ce4294bae5c
BPC RiskManager - Distribution of Client Components
0
410
605
2019-09-10T15:08:45Z
Bishopj
1
Created page with "=Introduction= Two methods of distributing client components are available. You will need to select the one which best suits your requirements. <table border=1> <tr> <th> D..."
wikitext
text/x-wiki
=Introduction=
Two methods of distributing client components are available. You will need to select the one which best suits your requirements.
<table border=1>
<tr>
<th>
Distribution Method
</th>
<th>
Discussion
</th>
</tr>
<tr>
<td>
Internet Information Server
</td>
<td>
'''Browser Plugin ActiveX (code signed by Verisign) (ocx)'''
<br>
<br>
Risk Manager components are published on a web server and downloaded with a web page. This method uses ActiveX frameworks. Users do not need to manually install any software components and distribution of updates is automatic. ActiveX controls are signed with a digital certificate.
<br>
<br>
Assuming the web server that will serve the activeX cab file is on your intranet, you should allow installation / execution (prompted is wisest) of signed ActiveX components for the intranet zone ON YOUR client browsers.
<br>
<br>
Users must be operating Internet Explorer 5 or above OR Netscape Navigator (any version) or FireFox 1.5 with ActiveX plug-in (provided with install set).
<br>
<br>
''Note: versions 2+ of FireFox do not (to our knowledge) support the ActiveX plugin extension, due to a deliberate rewrite and exclusion of the necessary libraries. There is a FireFox extension available that allows the launching of IE in a FireFox container for web sites requiring IE-compatible browsing, which would solve the problem of ActiveX hosting, but it requires IE to be installed on the client machine as well as FireFox. In this circumstance we advise using IE directly, or the Client Program (below), instead.''
<br>
</td>
</tr>
<tr>
<td>
Client program install set
</td>
<td>
'''Non Browser Windows32 Application (executable)'''
<br>
<br>
Risk Manager client components can be installed on a client PC merely by copying the “.exe” file in the “NonBrowserClient” directory to a target directory on a client computer. The RiskManager client is completely self contained and requires no separate install, nor does it register itself in the local registry, although the registry is used to store user specific settings and options.
<br>
<br>
The disadvantage of a non-install client is that it does not, by default, install menu and desktop shortcuts. For those wishing for these features to be automatically added during the distribution phase of the non-browser client, an installer for the non-browser client is available that will create menu and desktop shortcuts.
</td>
</tr>
</table>
=Client Registry Access=
Both client components will require registry read/write access on the client machine for HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER.
Under Vista SP1 with UAC enabled, the default Vista installation will allow the access used in the BPC RiskManager client and automatically map HKEY_LOCAL_MACHINE writes to the current user’s virtualised registry space. Both clients will therefore work correctly under Vista SP1.
The ActiveX browser plugin will require OCX registration rights in the client computer registry (the default windows setup), and DLL registration rights for the supporting Midas.dll automatically installed with the OCX. The ActiveX also uses the registry to store and access local user settings.
The Windows Non-browser client is not an ActiveX and has no separate supporting Midas.dll. It does NOT therefore need to register itself to run, but it does use the registry to store local user settings. If that access is not available the non-browser client should still work, but will not “remember” any user settings.
You need only think about the registry access requirements if you are using an unusual lockdown scenario – such as where no local user-level writes are allowed.
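The settings fallback described above can be sketched as a small simulation. This is illustrative Python, not BPC code; the setting names and defaults are invented for the example.

```python
# Illustrative sketch: if the registry is readable the client restores
# saved user settings; if access is blocked it still runs, just without
# "remembering" anything. Setting names and defaults are invented.

DEFAULTS = {"last_connection": "", "window_size": "800x600"}

def load_settings(read_value):
    """read_value(name) -> str; raises PermissionError when access is denied."""
    settings = dict(DEFAULTS)
    for name in settings:
        try:
            settings[name] = read_value(name)
        except (PermissionError, KeyError):
            pass  # locked-down machine: fall back to the built-in default
    return settings

def denied(name):
    raise PermissionError("registry access blocked by lockdown policy")

# A client on a locked-down machine still starts with usable defaults:
print(load_settings(denied))
```

This mirrors the behaviour described for the non-browser client under an unusual lockdown: the program degrades to defaults rather than failing.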
=Which Method Should I Use?=
# The first thing to note is that you can, and probably will, use BOTH methods simultaneously. You can even use both methods simultaneously from the same client computer, and they will share the registry settings.
# The second thing to note is that there is absolutely NO difference between the functionality and the look and feel of the two clients. In fact they are exactly the same programmes in two different wrappers.
# If you desire the simplest single point of publication distribution, with clients automatically updating when you publish a new client component to a single location, then the browser plug-in is the best solution – as all you have to do to release a new version is use the built-in web page generation and cab file distribution tool (or manually copy) to a single intranet/internet location and all clients will update the next time they access.
# If you have extreme PC lockdown configurations with no access to registry for registering components then you will have to use the non-browser client.
# If you do not have IE as a browser on your network you will have to use the non-browser client.
# If you do not allow signed ActiveX plugins under any condition in IE, you will have to use the non-browser client.
# If you prefer windows applications to browser plug-in applications then the non-browser client is preferred.
# The Browser client (ActiveX) has a slightly simpler way of specifying the list of connections for a user than the windows client, as they can be listed in the hosting web page. This alone may be a good argument for using the browser plug-in.
# If you have legacy applications with a dependence on older versions of the Midas.dll – you *may* have to use the non-browser client, as only one version of Midas can be registered on a computer at once. However, in this event you can (and probably should) contact BPC directly for an alternative solution. This is an extremely rare scenario, and to date has not been reported by any client.
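The hard constraints in the list above can be summarised as a small selection routine. This is an illustrative sketch only; the return strings are invented labels.

```python
# Sketch of the selection rules above. When nothing rules the browser
# plug-in out, both clients may be distributed simultaneously and will
# share registry settings.

def forced_client(registry_locked_down, has_ie, allows_signed_activex):
    if registry_locked_down or not has_ie or not allows_signed_activex:
        return "non-browser client"
    return "either (both may be used simultaneously)"

print(forced_client(registry_locked_down=False, has_ie=True,
                    allows_signed_activex=True))
```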
=Distributing the Windows Non-browser client=
There are two methods available. The first is to merely copy the exe file to an appropriate location on the target computer and manually set a shortcut to it on the desktop; the second is to run the supplied .msi installer, which essentially does the same thing but will automatically add a Start Menu entry and desktop icon.
The installer application ships with the string “install” in its file name, while the raw exe does NOT have the word “install” in its name.
The Risk Manager client program can be distributed as a Windows install set. The install set is available as a single file install (‘exe’) or Windows install file (‘msi’).
Option 1:
<ol>
<li> The copiable install set is located in the folder:
<br>
‘[InstallDir]\DistributeWin32Client\Install’.
<br>
<li> Copy the installer to the target computer and double click on it. Accept all defaults.
</ol>
Option 2:
<ol>
<li> The copiable executable version is located in the folder:
<br>
‘[InstallDir]\DistributeWin32Client\Exe’.
<li> Create an appropriate folder to house the executable under c:\Program Files or similar location (or place it on a network drive for shared access)
<li> Right click on the icon in windows explorer and choose copy
<li> Click anywhere on your desktop and then right click and choose “Paste as ShortCut” (this will put a shortcut on your desktop)
<li> Optionally create an appropriate menu entry in the start menu, again pasting into the menu folder as a shortcut (NOT using the full paste option – or you will copy the entire program rather than just a shortcut)
</ol>
In both options you can also edit the shortcut command line in the properties of the shortcut and add various command line parameters, such as the list of database connections to be available to the user (OPTIONAL).
Note: on a 64-bit client computer, you would need to right click on the application icon (not the shortcut) and select a Win32-compatible execution mode in the properties window.
=Test The Non Browser Client Connection:=
<ol>
<li> From a client computer (or from the application server computer if no client computer is easily available) open the BPC RiskManager Client. If you have set up a start menu or used the installer you will have a menu option available called something like “BPC RiskManager”. Otherwise the executable file is called “RiskManagerW32V625Client.exe” or similar (depending on the version you have installed).
<li> You should be able to connect using the Root Administrator ID you established earlier. We will assume that ID is “Administrator”.
<li> When the program starts you will see a login screen as follows:
<br>
<br>
[[Image:RMC_Login1.png]]
<br>
<br>
<li> Select “Specify Account” and enter the username “Administrator” and the password you defined earlier
<li> Set the connection protocol to “Normal” – this will connect using the socket server port 211 connection.
<li> Enter the name of a database connection (not a database) in the “Select Database Connection” field that you set up in Step 3.
<li> Ensure the correct computer name is in the “Risk Server Name” field. If not click on the “Select” button and a network browsing window will appear.
<br>
<br>
[[Image:RMC_Login2.png]]
<br>
<br>
<li> Type the correct computer name in the “Risk server computer name” field, or if this is on a windows intranet you may be able to locate the computer using the “Browse” button.
<li> Select “OK”.
<li> Now, on the login window choose “Connect”.
<li> If you see the window below, you have successfully connected to the RiskManager Dataserver, but your username and password are not valid in the database. The test for our purposes right now has been passed, but you should probably try again with the correct username and password, or reset the root administrator username and password in the appropriate earlier step. (It is ok to do it again.)
<br>
<br>
[[Image:RMC_Login3.png]]
<br>
<br>
<li> If you are successful you will see a screen similar to this:
<br>
<br>
[[Image:RMC_Login4.png]]
<br>
<br>
<li> Close the window and continue.
</ol>
=Distributing the Browser Client=
==Introduction==
Firstly, you should be aware that the brand of web server is irrelevant to BPC RiskManager for the purposes of distribution of the browser client. You could as easily host the client components on an apache server running on a Sun box as on an IIS Server. With respect to the browser client, all the webserver does is provide the pages and cab image to the client when it is required. After that point the programmes ignore the webserver. We suggest IIS, because that is already present on Windows, we test on it, we provide files that work on it, the surveymanager components require it and we support it.
The following configurations will publish Risk Manager on your intranet. All files for publication are available in folder:
<install dir>\Publish_On_Intranet\
This directory contains a complete intranet/internet page that you can edit by hand, and an associated signed cab file set containing the RiskManager OCX and helpfiles that you can copy to your web site. (This is NOT the recommended method of deployment). If you REALLY want to edit your own web page, there are instructions at the end of this section on the content and requirements of the default web page – but as there is provision for creating a template for the built-in publishing tool to use, there is really very little likelihood that you would need to go this route.
The BPC RiskManager DataServer contains a built-in web page publication system that will handle a variety of simple and complex scenarios:
# A single web page named ‘default.htm’ (or other name of your choice) in a single generic folder with one or more database connections
# Multiple web pages named uniquely with the connection name and stored in a single folder of your choice.
# A single web page named ‘default.htm’ (or other name of your choice) in multiple folders named uniquely with the name of the connection and with each page containing a connection to a unique database matching the folder name.
The most common scenario is option 1. As BPC RiskManager is designed to handle many complex set up arrangements, including multi-organisational hosting, the other scenarios allow for sites with a very large number of databases and a large number of separate organizations being centrally hosted.
With Option 3, if you have, say, 40 client organizations with a training and production database per client, with intelligent structured use of connection names, and matching folder names, you can publish (or update) the clients for all organizations in 2 or 3 minutes using the built-in publication tool.
Let us assume that your web site is referenced like this:
http://myorg.com/
''Under option 1'', you might decide that your BPC RiskManager web page will be:
http://myorg.com/ERMS/default.htm
In this http://myorg.com/ERMS/ location you will have:
# a default.htm page containing the reference to the embedded BPC RiskManager ActiveX cab file (supplied), and a list of the database connections, and a set of links to help materials.
# a riskmanager_download.cab file (supplied). This file contains the information for the browser on where to find the BPC RiskManager OCX and the Midas.dll
# a riskmanager cab file (supplied) that contains the actual RiskManager OCX and the Midas.dll
# a folder containing various help materials and manuals.
This is the generic most common scenario.
''Under Option 2'', in a multi organization setup, you might still use a single virtual folder:
http://myorg.com/ERMS/
But instead of having a single default.htm file with all the database connections, you might instead have multiple web pages, each named with the name of the connection, but otherwise the same as the standard default.htm file, and one copy of the cab and help files. E.g. if my connections were OrgA and OrgB, I would end up with two pages:
http://myorg.com/ERMS/OrgA.htm
http://myorg.com/ERMS/OrgB.htm
''Under Option 3'', in a multi organization setup, you might still use a single virtual folder as your root:
http://myorg.com/ERMS/
But from there you would have a unique folder for each organization (generated from the connection names you set up in the RM Database Configuration tab):
http://myorg.com/ERMS/OrgA/
http://myorg.com/ERMS/OrgB/
In each folder you would then have a single default.htm file containing the database connection corresponding to the folder name, but otherwise the same as the standard default.htm file, and one copy of the cab and help files in each folder.
'''''In the majority of cases you will be using option 1.'''''
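The three page-layout schemes above can be sketched as a small path-generation routine. The function itself is illustrative, but the folder and page naming follows the conventions just described.

```python
# Illustrative sketch of the three publication layouts. The publisher
# tool does this for you; this only shows which paths each option yields.

def page_paths(root, connections, option):
    """Return the web page paths created for each publication scheme.

    option 1: one default.htm listing every connection
    option 2: one page per connection, named after the connection
    option 3: one default.htm per connection, in a folder named after it
    """
    if option == 1:
        return [f"{root}/default.htm"]
    if option == 2:
        return [f"{root}/{c}.htm" for c in connections]
    if option == 3:
        return [f"{root}/{c}/default.htm" for c in connections]
    raise ValueError("unknown option")

print(page_paths("http://myorg.com/ERMS", ["OrgA", "OrgB"], 2))
# ['http://myorg.com/ERMS/OrgA.htm', 'http://myorg.com/ERMS/OrgB.htm']
```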
In the following steps we will assume Option 1. In any of the cases, the first step is to create ALL your virtual directories on the web server. If you are using option 2 or 3, there is one point at the end of the process where you choose a different option and the RM Dataserver will perform the appropriate configuration for you.
The built-in publisher contains a generic (plain) web page, but you can just as easily use your own template and drag and drop it into the publisher if you wish. To do this just insert a [#RMOBJECT#] string into your page where you want the BPC RiskManager component to appear and supply that to the publisher when asked. If this is your first installation, however, we suggest that you run with the built-in version for now – you can change it later in only a few minutes.
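A minimal template page might look like the sketch below. Only the [#RMOBJECT#] token is significant – the publisher replaces it with the generated component markup – and every other element is an invented placeholder.

```html
<!-- Hypothetical template: only the [#RMOBJECT#] token matters -->
<html>
<head><title>Risk Management</title></head>
<body>
<h1>My Organisation - Enterprise Risk Management</h1>
[#RMOBJECT#]
<p>Contact the risk team for login assistance.</p>
</body>
</html>
```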
==Internet Information Server (IIS) Configuration (FIRST TIME INSTALL)==
You can publish Risk Manager from this install location (if you are using a single user installation) or you can choose to move or copy this folder to your standard intranet publications area of your network server. In any case, the first time you install RiskManager you will need to create a virtual directory on the web server:
<ol>
<li> Create an appropriate directory to house the RiskManager web page in a folder of your choosing and map that directory to IIS. We will call our folder “Bpcrms” and use that as our virtual directory name. Our advice is that you do NOT simply map the installation folder to the web site, as future patches will directly update the installation directory’s publish_on_intranet folder, effectively setting the new patched files to “live” mode instantly (complete with an incorrectly configured default page, thus destroying your existing web site and confusing the built-in publishing tool).
<li> To map the newly created folder to your web server right click on the folder and choose properties from the context menu. <br>
* On the properties window select the “web sharing” tab. In the “web sharing” tab select “Share this folder”
<br>
<br>
[[Image:RMWC_WSSetup1.png]]
<br>
<br>
* A window will open, enter “Bpcrms” (or your preferred virtual directory name) in the Alias field. Tick “Read” and ensure the other check boxes are unticked, and select the “none” radio option (or scripts if you will be using php or other server side scripted pages in the folder) and choose “Ok”.
<br>
<br>
[[Image:RMWC_WSSetup2.png]]
<br>
<br>
* Select OK again on the folder properties window to close the window.
<br>
<br>
<li> Open the IIS Manager (or right click on My Computer) and expand the “Internet Information Services”/”Web Sites”/”Default Web Site” tree.
<br>
<br>
[[Image:RMWC_WSSetup3.png]]
<br>
<br>
<li> Right click on the “bpcrms” object (or whatever your website folder was called) and choose “properties”.
<li> On the properties window, select the “Directory Security” tab and select the edit button in the “Authentication and access control”:
<br>
<br>
[[Image:RMWC_WSSetup4_XP.png]]
<br>
<br>
<li> On the Authentication methods tab:
<ol>
<li> If you wish to allow anonymous access (the normal scenario), tick “Enable anonymous access” and untick any other options. You should leave the user name as the built-in anonymous user account.
<li> If you wish to have secured access then we suggest:
<ol>
<li> Untick “Enable anonymous access”
<li> Tick Integrated windows authentication (or other security model of your choice)
</ol>
<li> Select “OK” to close the window.
</ol>
<br>
<br>
[[Image:RMWC_WSSetup5_XP.png]]
[[Image:RMWC_WSSetup6_XP.png]]
<br>
<br>
<li> Still in the properties window, select the “Documents” tab and ensure that “default.htm” is listed as a default document page (or whichever page name you will be using). (You should not worry about connection named pages here).
<li> Select “OK” to close the properties window
</ol>
* Once you have completed this part you are ready to publish the web page client.
==Publish the Web Client (FIRST TIME INSTALL & ON PATCH/UPGRADE)==
Both initially, and on every patch or upgrade you will be repeating these steps. They are designed to be very fast, and all the instructions are on the screen. Read the screens and you will probably not need to refer to these instructions again.
<ol>
<li> Open BPC RiskManager from the start menu. Either:
<ol>
<li> Select the “Start” button and choose the RM DataServer from the BishopPhillips folder in the programs menu, or
<li> In Windows explorer, navigate to [RMInstallDir]\ApplicationFiles\RiskManagerDataServer.exe (or RiskManagerDataServer6xx.exe).
</ol>
<li> The application server appears as a service in the Windows system tray, typically located in the lower right hand corner of your screen. Please double click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program.
<li> On the configuration window, select the “RM Web Distribution” tab and “Step 1.”
<br>
<br>
[[Image:RMDS_RMWD1.png]]
<br>
<br>
<li> On this page you choose between the generic web page or your own template.
<ol>
<li> If you want to use your own template, either tick the “Enable Drag-Drop of my template web page” check box and drag your template page onto the “Drop HTM Page Template Here” panel, or use the browse button (the yellow folder) and locate the file. (If you are using Vista, you will probably find you have to use the browse method)
<li> If you want the generic web page, just tick the “Request a generic web page”. When you do this, a window with some notes will appear. Select “Ok”
</ol>
<br>
<br>
[[Image:RMDS_RMWD2.png]]
<br>
<br>
<li> In either case the right hand panel will be populated with the text of your page:
<br>
<br>
[[Image:RMDS_RMWD3.png]]
<br>
<br>
<li> Select “Step 2”. On this tab, all the default settings should be correct, except possibly the application server computer name. Enter the fully qualified domain name that a user on a remote computer would need to use to access the application server computer. (On an internet site, for example this would need the “.com” part of the domain as well as the computer name).
<br>
<br>
If you change the plugin dimensions by accident you can restore them to the default values by selecting the “Restore” button. The values on your screen may be different from those in the screen shot due to version changes.
<br>
<br>
[[Image:RMDS_RMWD4.png]]
<br>
<br>
<li> Select “Step 3”. Tick the “Enable Drag-Drop” check box, locate the “RiskMan_Download.cab” file in the “Publish_On_Intranet” directory of the <install dir> using windows explorer, and drag and drop it on the drop panel. (Vista users may have to use the browse folder button instead). There are two cab files in the publish on intranet folder. Only ONE has the word “download” in its name. This is the one you want. If the drop panel received a file you will see the following window:
<br>
<br>
[[Image:RMDS_RMWD5.png]]
<br>
<br>
Select OK and if the file contains the correct information you will see this window next:
<br>
<br>
[[Image:RMDS_RMWD6.png]]
<br>
<br>
Select OK and you should see the CLSID and CodeBase information appear in the appropriate windows on the screen:
<br>
<br>
[[Image:RMDS_RMWD7.png]]
<br>
<br>
<li> The information required to set up the web page has been collected, so now just select “Apply Attributes” to populate the page template.
<li> Next choose the “Step 4.” tab. In the “Select all connections for which to generate web pages”, tick each connection you want available through the web pages. If you have multiple databases you will generally not be using the default connection.
<li> Decide your page model. In most cases the default selection will be correct: “Make one page with all these connections listed”. This will make all the connections available from one page in one folder. The other options are described in the introduction section of this part of the manual. (“One page per connection” will make a page for each connection named with the connection name, while “Make one page per connection in its own folder” will create a default.htm page in a unique folder path and insert it in the folder named for that connection).
<br>
<br>
[[Image:RMDS_RMWD8.png]]
<br>
<br>
<li> Browse to the location on the web server where you want the page(s) to be created by clicking on the yellow folder icon.
<li> When everything is done, select “Generate Web Pages”. This will cause the appropriate web pages to be created and copied to the target location.
<li> Select Step 5. The “download” cab file (the top drag-drop panel and folder edit field) should already be correct and populated from the earlier screen. Tick the second “Enable Drag-Drop of the BPC RiskManager cab file” check box and, using windows explorer, locate the “RiskManagerXVxx.cab” file in the “Publish_On_Intranet” folder. (The other cab file in that folder.)
<br>
<br>
[[Image:RMDS_RMWD9.png]]
<br>
<br>
<li> Finally, select the “Distribute Cab Files” button, and the cab files will be copied to your web site(s).
<li> Close the RM DataServer application by choosing the “End Process” button.
<li> From a client computer, open your web browser, navigate to your new web site and test the connection. You should be able to connect using the Root Administrator ID you established earlier. The web page should load, the cab file should install, and the green disk should appear on the Application Server system tray. (See the next section – Test the Browser Plugin Client)
</ol>
=Test The Browser Plugin Client Connection:=
==Browser Setup For ActiveX Plugins (IE 7 shown)==
<ol>
<li> From a client computer (or from the application server computer if no client computer is easily available) open Internet Explorer.
<li> Choose “Tools” from the menu bar and “Internet Options” from the menu that appears.
<li> Select the “Security” tab.
<br>
<br>
[[Image:RMC_IESetup2.png]]
<br>
<br>
<li> Select the zone in which your risk manager application server resides relative to your client computer on the “Select a zone to view or change settings” tool bar
<li> Select “Custom Level”
<li> On the “Security Settings” window scroll through the settings list until you find the “Download signed ActiveX Controls” setting. Enable the “Prompt” option (which is Microsoft’s recommended setting). Our ActiveX controls are signed with current Verisign certificates. Administrators can achieve a higher level of security by also flagging controls from Bishop Phillips Consulting as trusted, or the riskmanager application server web site as trusted – but the recommended setting should be enough.
<br>
<br>
[[Image:RMC_IESetup1.png]]
<br>
<br>
<li> We also set “Automatic prompting for ActiveX controls” to Enable, but this may not be required in all scenarios.
<li> Scroll a little further down the list and enable the running of ActiveX plugins as follows:
<br>
<br>
[[Image:RMC_IESetup3.png]]
<br>
<br>
<li> Now select OK and close the security settings window, and select OK again and close the Internet Options window. You should now be back at your browser window.
</ol>
==Test the SocketServer Connection==
<ol>
<li> Enter the web address of the BPC RiskManager website just created, and a web page should appear along with a prompt to download a signed authenticated ActiveX component from “Bishop Phillips Consulting”.
<li> Select OK.
<li> A second prompt should appear to download another signed ActiveX component from “Bishop Phillips Consulting”. Select OK to that as well.
<li> The components will now download from the web site and install themselves on your computer. Once this is completed you should see a login window as below.
<li> You should be able to connect using the Root Administrator ID you established earlier. We will assume that ID is “Administrator”.
<li> When the program starts you will see a login screen as follows:
<br>
<br>
[[Image:RMC_Login1.png]]
<br>
<br>
<li> Select “Specify Account” and enter the username “Administrator” and the password you defined earlier
<li> Irrespective of whether you are going to use the HTTP/HTTPS protocols to connect ultimately, this first test should ideally be done using the low-tech socket server port 211 connection. Set the connection protocol to “Normal” – this will connect using the socket server port 211 connection.
<li> Enter the name of a database connection (not a database) in the “Select Database Connection” field that you set up in Step 3.
<li> Ensure the correct computer name is in the “Risk Server Name” field. If not click on the “Select” button and a network browsing window will appear.
<br>
<br>
[[Image:RMC_Login2.png]]
<br>
<br>
<li> Type the correct computer name in the “Risk server computer name” field, or if this is on a windows intranet you may be able to locate the computer using the “Browse” button.
<li> Select “OK”.
<li> Now, on the login window choose “Connect”.
<li> If you see the window below, you have successfully connected to the RiskManager Dataserver, but your username and password are not valid in the database. The test for our purposes right now has been passed, but you should probably try again with the correct username and password, or reset the root administrator username and password in the appropriate earlier step. (It is ok to do it again.)
<br>
<br>
[[Image:RMC_Login3.png]]
<br>
<br>
<li> If you are successful you will see a screen similar to this:
<br>
<br>
[[Image:RMC_Login4.png]]
<br>
<br>
<li> Close the browser; the connection will be terminated. Continue with the install.
</ol>
==Test the HTTP / HTTPS Connection (Optional)==
<ol>
<li> If you have completed the socketserver connection test and you will be using the HTTP or HTTPS connection methods, and you completed the HttpSrvr setup in the earlier section, you may wish to test this now.
<li> Simply open the browser again and navigate to the web page again as before.
<li> When the login prompt appears, change the connection protocol from Normal to HTTP or HTTPS as appropriate. You will see a new button labeled “Set Path” appear on the login page:
<br>
<br>
[[Image:RMC_Login5.png]]
<br>
<br>
<li> If you used the recommended default path (i.e. “Scripts”) the path will be correct and you can ignore this option. If you need to override it, then select the button (you only need to do this the first time as BPC RiskManager will remember it) and a configuration window will appear:
<br>
<br>
[[Image:RMC_Login6.png]]
<br>
<br>
<li> In the edit box enter the correct sub path as shown (note the slashes should be “/” not “\”) and select “OK”.
<li> At the login window, enter the rest of the login details if they are not already correct, (if in doubt refer to the steps in the socketserver test above) and select “Connect”.
<li> You should see the first screen of the application as before.
<li> Close the browser; the connection will terminate. Continue with the installation on the application server.
</ol>
=Creating your own template page - OPTIONAL=
If you want a custom look and feel to the page you should create a page with [#RMOBJECT#] where you want the BPC RiskManager object to be inserted and then drag and drop the page onto the appropriate panel of the publishing tool. It will then use your template, rather than the built-in version to generate the default web page(s).
On VISTA you will have to use the browse button to search for your page, as the “Drag and Drop” functionality will not work because the application is executing as Administrator and you probably are not.
=Editing the supplied Default.htm page – OPTIONAL – NOT PREFERRED=
This section includes the notes for editing the default web page supplied in the “Publish_on_intranet” folder. This is an extremely unlikely scenario, but might be appropriate where you wish to embed the RMS in your own content managed web site. The web page downloads and initialises the ActiveX control. This is the Default.htm file. Please set the following parameters on the ActiveX control:
<table border=1 >
<tr>
<td>
ApplicationServer
</td>
<td>
Set the value to the name of the network server on which you have installed the application server. This value will be used as a default for the risk computer server name option in the connection screen. This value can be changed by users when connecting.
<br>
The default value is ‘<Risk Server Name>’
</td>
</tr>
<tr>
<td>
RiskManagerEdition
</td>
<td>
Do not modify this value.
<br>
The default value is ‘WEB’ for Web edition.
</td>
</tr>
<tr>
<td>
ShowLoginScreen
</td>
<td>
The default value is ‘YES’. You can set this to ‘No’ if you do not want users to see the connection screen. When this configuration is set to ‘No’ the application server and database connection values use their assigned defaults. Users are not able to select an alternate database to connect to. A ‘No’ setting is recommended when there is only a single risk database to connect to and/or when IIS Integrated Windows Authentication is used so that users only see the one login screen (from Windows) and don’t need to interact with the connection screen.
</td>
</tr>
<tr>
<td>
DatabaseConnections
</td>
<td>
The default value is blank. This parameter enables a user to select from a list of available database connections in the client program connection screen. Some sites have many risk databases in use such as a production database to house the ‘live’ data and additional databases for training and/or testing. Multiple databases can also be used to house separate risk data and compliance data.
<br>
Set this value to a comma delimited list of connection names created in step ‘6. Database Configurations’ earlier.
<br>
EG: <PARAM NAME="DatabaseConnections" VALUE=" ProdSQLDB, TrainSQLDB">
<br>
If no connections are entered then the default connection will be used.
</td>
</tr>
</table>
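Putting the parameters above together, a hand-edited page would contain something like the sketch below. The CLASSID and CODEBASE values are supplied by the publishing tool and are deliberately elided here, and the server name is an invented placeholder.

```html
<!-- Sketch only: CLASSID/CODEBASE come from the publishing tool -->
<OBJECT ID="RiskManager" CLASSID="clsid:..." CODEBASE="...">
  <PARAM NAME="ApplicationServer"   VALUE="MYRISKSERVER">
  <PARAM NAME="RiskManagerEdition"  VALUE="WEB">
  <PARAM NAME="ShowLoginScreen"     VALUE="YES">
  <PARAM NAME="DatabaseConnections" VALUE="ProdSQLDB, TrainSQLDB">
</OBJECT>
```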
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
e51513cb72afa07edf73fdc15830dc4b0cdc52ae
BPC RiskManager - Install The SurveyManager
0
411
606
2019-09-10T15:10:56Z
Bishopj
1
Created page with "=Introduction= If you will be using the BPC SurveyManager components, you should install the survey libraries on the web site. The BPC SurveyManager system is an extremely p..."
wikitext
text/x-wiki
=Introduction=
If you will be using the BPC SurveyManager components, you should install the survey libraries on the web site. The BPC SurveyManager system is an extremely powerful highly scalable stateless survey engine, capable of hosting thousands of users simultaneously. It contains built in reports and might be better described as a web-forms engine.
The BPC RiskManager suite surfaces only a tiny portion of the real capabilities of the survey engine, although you have the full engine supplied as part of your BPC RiskManager system. For the purposes of the RiskManager install we will demonstrate installing it on the same web server as the RiskManager components have been installed, but in reality you could easily host the SurveyManager libraries on a web farm. Your license entitles you to unrestricted use of the SurveyManager engine.
In addition to the simplified survey creation and maintenance tools built in to the RiskManager client, for more generalized use there are additional survey creation and maintenance clients available, including a pure HTML/Javascript client. You should contact Bishop Phillips directly if you wish to explore these options.
For now we will install the SurveyManager engine as it is used in BPC RiskManager.
The first thing to understand is the unusual way that SurveyManager matches a database to a survey library. When a respondent completes a survey, at no stage do they ever see the underlying database name. It is not included in any session information, at least not directly.
This is because a unique copy of the BPCSurveyManager.dll is created for each database the survey engine accesses. Each copy of the BPCSurveyManager.dll therefore gets a unique name.
When the library launches it uses its name as the key to an entry in the local computer registry to collect all the information it needs to access the associated database containing the survey. This means that a number of website configurations are possible, and more importantly ALL survey libraries can be stored in a single virtual folder on the website without interfering with each other.
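The name-to-registry lookup described above can be sketched as follows. The dictionary here merely stands in for the Windows registry, and the server/database field names and values are purely illustrative, not the actual registry layout used by SurveyManager:

```python
from pathlib import PureWindowsPath

# A dictionary standing in for the Windows registry: each uniquely
# named copy of the survey library has its own entry keyed by its
# file name. The field names and values are illustrative only.
REGISTRY = {
    "BPCSurveyManager1": {"server": "DBSRV01", "database": "RiskManDB_Prod"},
    "BPCSurveyManager2": {"server": "DBSRV01", "database": "RiskManDB_Train"},
}

def connection_for_library(dll_path: str) -> dict:
    """Resolve a survey library's database settings from its own file name."""
    key = PureWindowsPath(dll_path).stem   # e.g. "BPCSurveyManager1"
    return REGISTRY[key]

if __name__ == "__main__":
    print(connection_for_library(r"C:\inetpub\surveymanager\BPCSurveyManager1.dll"))
```

Because each library resolves its own settings this way, many libraries can share one virtual folder without interfering with each other.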
Here we shall consider two broad configuration options:
# You could put the surveymanager library for a specific risk database in the same directory where the associated RiskManager web page and cab files were put. (Most appropriate if you have a single RiskManager web page from which many or all of your databases are available).
# You could create a single dedicated folder for all surveys regardless of the system or database to which they belong. (Most appropriate if you have many risk databases with web pages in unique directories).
By far the majority of sites should use Option 1. Option 2 is typically the centrally hosted / multi-organisation scenario with many separate risk management teams working in self-contained and distinct databases.
In truth it really doesn’t matter. If you choose to put the surveymanager libraries into their own shared folder, you have all survey libraries, whether for riskmanager or other purposes, neatly contained in one spot, but you will have to create another web folder. If you put your surveymanager library(ies) in the riskmanager directory, you don’t have to create an extra folder, but if you have more than one riskmanager folder you will need to configure the survey libraries individually rather than use the bulk configurer.
=Preliminary Steps=
* Locate the SurveyManager Files
If you examine your “Publish_On_Intranet” folder you will note two additional files that we have not yet used:
# BPCJavaScriptLib2.js
# BPCSurveyManager1.dll
These are the two parts of the SurveyManager we now require.
* Decide On Your Website Layout
Read the comments in the introduction and decide whether you want to re-use your riskmanager website folder (suggested if you have only one) or create a dedicated surveymanager folder (suggested if you have more than one riskmanager folder).
In either case you will need to enable the folder for execution. We will assume you will be creating a dedicated folder called “surveymanager” and mapping it to your website.
* Create the SurveyManager web site in IIS
<ol>
<li> Create an appropriate directory to house the SurveyManager libraries in a folder of your choosing and map that directory to IIS. We will call our folder “surveymanager” and use that as our virtual directory name. Our advice is that you do NOT simply map the installation folder to the web site, as future patches will directly update the publish_on_intranet folder in the installation directory.
<li> To map the newly created folder to your web server right click on the folder and choose properties from the context menu.<br>
* On the properties window select the “web sharing” tab. In the “web sharing” tab select “Share this folder”
<br>
<br>
[[Image:SMWS_SetUp2.png]]
<br>
<br>
* A window will open; enter “surveymanager” (or your preferred virtual directory name) in the Alias field. Tick “Read”, ensure the other check boxes are unticked, select the “Execute (includes scripts)” radio option and choose “Ok”.
<br>
<br>
[[Image:SMWS_SetUp3.png]]
<br>
<br>
* Select OK again on the folder properties window to close the window.
<br>
<br>
<li> Open the IIS Manager (or right click on My Computer) and expand the “Internet Information Services”/”Web Sites”/”Default Web Site” tree.
<br>
<br>
[[Image:SMWS_SetUp4.png]]
<br>
<br>
<li> Right click on the “surveymanager” object and choose “properties”.
* On the properties window select the “Virtual Directory Tab” and enter an Application Pool name (such as surveymanager):
<br>
<br>
[[Image:SMWS_SetUp5.png]]
<br>
<br>
<li> On the properties window, select the “Directory Security” tab and select the edit button in the “Authentication and access control”:
<br>
<br>
[[Image:SMWS_SetUp6.png]]
<br>
<br>
<li> On the Authentication methods tab:
<ol>
<li> If you wish to allow anonymous access (the normal scenario), tick “Enable anonymous access” and untick any other options. You should leave the user name as the built-in anonymous user account.
<li> If you wish to have secured access then we suggest:
<ol>
<li> Untick “Enable anonymous access”
<li> Tick Integrated windows authentication (or another security model of your choice)
</ol>
<li> Select “OK” to close the window.
</ol>
<br>
<br>
[[Image:SMWS_SetUp7.png]]
<br>
<br>
<li> Select “OK” to close the properties window
<li> (IIS 6+ / W2003+ ONLY) The final step is to grant permission to IIS to run the surveymanager ISAPI dll. You should now be back at the general IIS management console window (if not then press OK until you have closed all the property windows).
<li> Scroll down the tree on the left hand panel until you can see the "Web Service Extensions" folder.
<li> Select it and you should see the Web Service Extensions properties appear in the right hand panel as shown below:
<br>
<br>
[[Image:SMWS_SetUp8.png]]
<br>
<br>
<li> In the right hand panel, select "All Unknown ISAPI Extensions" at the top of the list and then select "Allow".
<li> You can now close the IIS management console.
</ol>
* Once you have completed this part you are ready to configure the surveymanager in the application server and publish the surveymanager libraries.
==Configure and Publish SurveyManager==
You will perform these steps on initial setup, and only rarely thereafter (if the surveymanager libraries are patched or you add databases). They are nevertheless designed to be very fast, and all the instructions are on the screen. Read the screens and you will probably not need to refer to these instructions again.
<ol>
<li> Open the BPC RiskManager DataServer. Either:
<ol>
<li> Select the “Start” button and choose the RM DataServer from the BishopPhillips folder in the programs menu, or
<li> In Windows explorer, navigate to [RMInstallDir]\ApplicationFiles\RiskManagerDataServer.exe (or RiskManagerDataServer6xx.exe).
</ol>
<li> The application server appears as a service in the Windows system tray, typically located in the lower right hand corner of your screen. Please double click on the icon [[Image:RM_App_Server_SysTrayIcon.png]] to interact with this program.
<li> On the configuration window, select the “Survey Manager” tab and “Individual Database Configuration”. All the settings are on the one page. Follow the numbered steps in order:
<br>
<br>
[[Image:RMDS_SM1.png]]
<br>
<br>
<li> From the drop box in step 1 on the screen, choose the database connection with which you will work. The word “connected” will display when you select a database and a successful connection is established.
<li> Unless you are sending the surveys to a different database than the riskmanager database, skip step 2 on the screen (contact BPC if you want to set your system up with a separate survey database for the risk system).
<li> Accept the default group configuration code in step 3. (This can be any four character string, but it is used only to store multiple configurations for different servers accessing the one database – so you do not need to do that in the current scenario. It is used for things like web farms, or distributed database configurations of SurveyManager).
<li> In the edit box of step 4 on the screen enter the FULL URL of your surveymanager web site. Clicking on the “Launch Browser” string will open a browser so you can navigate there to get the string right if you want – but you will have to copy and paste the address once you have found it.
<li> Click on the yellow folder icon of step 5 on the screen and navigate to the folder on your computer that will contain the surveymanager library(ies) – i.e. the path of the URL referenced in step 7 above.
<li> Accept the defaults in steps 6 and 7 on the screen.
<li> If you are using a proxy server that requires configuration settings tick the check box in step 8 on the screen and click on the button and add your proxy server details. It is VERY rare to have to configure this on modern networks (in fact no current client is known to need it – even those with reverse proxies running).
<li> If you are satisfied that the settings are right, click “Save Settings”.
<li> A wizard will automatically open to facilitate distribution of the surveymanager and javascript support libraries.
<li> Tick the “Create BPC SurveyManager ISAPI and Javascript libraries” and then select “Next”. We will not be using the second check box. This check box enables a page for saving the registry records to a file so they can be moved to a different server – such as where the IIS server is on a different machine(s) from the application server.
<br>
<br>
[[Image:RMDS_SM2.png]]
<br>
<br>
<li> Using windows explorer locate the BPCSurveyManager1.dll in the “Publish_On_Intranet” folder and drag and drop it on the panel (or use the folder icon on the path edit box to browse and select the file).
<li> Then select “Next”.
<br>
<br>
[[Image:RMDS_SM3.png]]
<br>
<br>
<li> Confirm the default settings by ticking the two “Confirm” check boxes (unless, of course, you see an obvious error!).
<li> Then select “Next”.
<br>
<br>
[[Image:RMDS_SM4.png]]
<br>
<br>
<li> Using windows explorer locate the BPCJavaScriptLib2.js in the “Publish_On_Intranet” folder and drag and drop it on the panel (or use the folder icon on the path edit box to browse and select the file).
<li> If the “Confirm the new library name” check box is not ticked, then tick it.
<li> Then select “Next”.
<br>
<br>
[[Image:RMDS_SM5.png]]
<br>
<br>
<li> Select the “Create Now” button and the surveymanager library will be created and the BPCJavascript library will be deployed to the target web site.
<li> Select Finish and close the Wizard.
<br>
<br>
[[Image:RMDS_SM6.png]]
<br>
<br>
<li> Close the RiskManager Dataserver by choosing “End Process”
</ol>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
52658b26d27b3b5eec3818d5593e1d087f886672
BPC RiskManager - Configure Risk Mail Manager
0
412
607
2019-09-10T15:12:30Z
Bishopj
1
Created page with "=Introduction= This step is optional. You may not want to (probably WILL NOT want to) configure Risk Mail Manager during initial installation and come back to this step lat..."
wikitext
text/x-wiki
=Introduction=
This step is optional. You may not want to (and probably WILL NOT want to) configure Risk Mail Manager during initial installation, and may prefer to come back to this step later. Once enabled, the mail manager will start sending emails to users when the triggering conditions are met (such as changes in responsibilities for risks and strategies).
Run program: <install dir>RiskMailManager.exe. Steps 1 to 5 below are configuration. Step 6 is to schedule the program to run as an automated process.
Note: RiskMailManager is not normally scheduled for users of Risk Manager Single User Edition. The manual ‘Send Mail’ process in step 5 below is typically used.
=Configuration=
==Step 1: Configure Mail Connection==
* Enter the appropriate details in the boxes provided
<table border=1>
<tr>
<td>
Select Mail Connection
</td>
<td>
Not all editions of Windows support the SMTP Server Protocol. Microsoft Outlook requires separate installation and configuration.
</td>
</tr>
<tr>
<td>
SMTP Host Address
</td>
<td>
Set to the outgoing SMTP mail server.
</td>
</tr>
<tr>
<td>
SMTP Server Port
</td>
<td>
The default port number is 25.
</td>
</tr>
<tr>
<td>
SMTP Server User ID
</td>
<td>
We recommend leaving this setting blank.
</td>
</tr>
<tr>
<td>
SMTP Server From Address
</td>
<td>
Enter a valid email address that users can use to reply to any e-mail messages sent to them – EG: administrator@your-organisation.com
</td>
</tr>
<tr>
<td>
SMTP Server From Name
</td>
<td>
Enter a name that identifies the above user account – EG: this can be generic such as ‘Risk Mail Manager’ or the name of your organisation’s risk manager such as ‘John Citizen’.
</td>
</tr>
<tr>
<td>
Default Message Format
</td>
<td>
Text or HTML. You will generally want to use HTML, as this allows more complex and attractive email layouts to be built in the end-user reporting tool of the client application.
<br>
<br>
Email clients must be able to receive HTML mime type messages for this to work. Most modern email clients operate in this mode by default.
</td>
</tr>
</table>
* Please send a test message to verify the settings made above. The test message should be received within a short period of time.
* Click the “Save Mail Properties” button to save the settings to the system registry.
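As a rough illustration of the test-message step, the following Python sketch builds a message from the ‘SMTP Server From Name’ and ‘SMTP Server From Address’ settings above using the standard library. The host, port and addresses are placeholders for your own values, and the subject line is invented for the example:

```python
import smtplib
from email.message import EmailMessage

def build_test_message(from_name: str, from_addr: str, to_addr: str) -> EmailMessage:
    """Build a minimal test message from the From Name / From Address settings."""
    msg = EmailMessage()
    msg["From"] = f"{from_name} <{from_addr}>"
    msg["To"] = to_addr
    msg["Subject"] = "RiskMailManager test message"  # illustrative subject
    msg.set_content("If you can read this, the SMTP settings are working.")
    return msg

if __name__ == "__main__":
    # All addresses below are placeholders for your own settings.
    msg = build_test_message("Risk Mail Manager",
                             "administrator@your-organisation.com",
                             "you@your-organisation.com")
    # To actually send it, substitute your SMTP Host Address and Port
    # (default 25) and uncomment:
    # with smtplib.SMTP("mail.your-organisation.com", 25) as smtp:
    #     smtp.send_message(msg)
```

If the test message does not arrive, re-check the host address and port before saving the settings.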
==Step 2: Select Database Connections (Secure Accounts) to Send Mail to==
* Please select which connections you want RiskMailManager to connect to and to send mail for. Check the connections to select individual databases.
Note: The connections are configured by the application server program. The default connection is created when RiskManager is first run. Additional connections can be created manually using the application server program.
When RiskManager is running in ‘User Acceptance Testing’ or ‘Evaluation’ mode you may only want to check the training databases for sending mail.
When RiskManager is running in production mode you may only want to check the production databases for sending mail.
To view the mail log file and manually send mail please select the connection and then click 'Open Connection' to begin working with this database.
* Click the “Save Changes” button to save the settings to the system registry.
==Step 3: Configure Mail Options.==
* All mail messages generated by RiskMailManager can be copied to and blind copied to a list of e-mail addressees. This is useful if a risk administrator wants to see all email messages generated by the system in order to follow up on the people responsible for actions.
Note: As of V6.2.5 this configuration has been moved to the RiskManager program and is no longer available in the MailManager program.
==Step 4: View Mail Log.==
* To view the mail log which contains a record for each email message sent by RiskMailManager please enter a search date range and then click ‘View’.
The mail log is a useful tool to review mail generated by the program. It will verify that messages are being sent. If messages have failed an error message is recorded here for each record which can assist networking and administration troubleshooting.
==Step 5: Send Mail.==
Use this tab to manually send mail messages. This is useful in system testing (such as ‘User Acceptance Testing’ or ‘Evaluation’) and when RiskMailManager automation fails and a manual or ad-hoc process is required.
* Select to send mail for the ‘Current Connection’ or ‘All Selected Connections’. The current connection does not need to be a checked connection in the ‘Database Connection’ tab for the process to run. Checked connections are only required when using the ‘All Selected Connections’ option.
* You may override the HTML/Text messaging format here if you wish, but you are advised to generally leave this alone.
==Step 6: Schedule RiskMailManager to run as an automated process.==
RiskMailManager can be run as an automated process. This means that the program can be scheduled to run automatically each night without the need for user interaction. The program will use the configurations supplied for mail connection and selected database connections.
RiskMailManager can be scheduled using a Windows scheduler or a basic WinAT command. The WinAT method is more difficult as there is no user interface to interact with.
When specifying the executable to run you MUST supply the parameter ‘AUTO’. This parameter is used to control the program and get it to send mail without user interaction.
EG: <install dir>\RiskMailManager.exe AUTO
Example of a WinAT command to run RiskMailManager each weekday at 2 AM: Create a batch file (.bat) with the following line and execute it once.
at 2:00 /every:M,T,W,Th,F "C:\Program Files\RiskManagerSingleUser\RiskMailManager.exe" AUTO
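The effect of the AUTO parameter can be sketched as follows. This is an illustration of the behaviour described above (send mail without user interaction when AUTO is supplied), not the actual program logic:

```python
import sys

def run_mode(argv: list[str]) -> str:
    """Return 'automated' when the AUTO parameter is supplied on the
    command line, otherwise 'interactive' (a sketch of the behaviour
    described in the text, not RiskMailManager's real implementation)."""
    return "automated" if "AUTO" in argv[1:] else "interactive"

if __name__ == "__main__":
    # e.g. invoked as: RiskMailManager.exe AUTO
    print(run_mode(sys.argv))
```

Without the AUTO parameter the program opens its interactive configuration window instead of sending mail.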
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
46a34eeb28a3555e65f676d5e7a658851ac073b8
BPC RiskManager - Test a Client Connection
0
413
608
2019-09-10T15:14:54Z
Bishopj
1
Created page with "=Introduction= Finally, if you are using a security mode other than the “maintained by RiskManager” mode, you should restart the RiskManager Dataserver client and switch t..."
wikitext
text/x-wiki
=Introduction=
Finally, if you are using a security mode other than the “maintained by RiskManager” mode, you should restart the RiskManager Dataserver client and switch to the security mode you will be using (eg. LDAP or NT Groups) and test that the connections from the various clients are working from remote computers.
The installation is not complete until you can successfully test a client application. This will ensure that you are ready to distribute the client components and can begin to ‘roll-out’ your new Risk Management solution.
If this is a fresh install, and you have set up the rootadmin user ID, you should use the user id linked to the rootadmin to do this test (and at the same time create the first real Risk Administrator user account – per the earlier instructions).
=Test 1: Test Connection From Internet Explorer=
''This test is relevant if you will be using the browser plugin client.''
* T1.1: Open Internet Explorer and point your browser to the Risk Manager intranet site
'''Comment:''' If the virtual web directory has security set to ‘Integrated Windows authentication’ then loading the page will test its correct activation.
* T1.2: Verify download of ActiveX controls
'''Comment:''' The default web page contains ActiveX controls. These controls are the Risk Manager client components. To enable distribution of Risk Manager’s client components using a web browser, the browser needs to accept digitally signed ActiveX controls from the Intranet zone (allowing download and run) or other zone as appropriate to your usage. This is the default security value for Internet Explorer when installed.
* T1.3: Verify correct settings of ActiveX control parameter values
'''Comment:''' The application server name and available connections in the connection screen are assigned from the ActiveX parameter values entered during configuration.
* T1.4: Verify authentication of the connecting network user
'''Comment:''' Each system user must be set up and assigned access in the Risk Manager database. If the user testing the program has not been assigned access then you can either conclude this test as successful (since all configurations can be verified as correct at this point) or add database records to tables RESOURCES and USERS to assign system access.
The network user must pass the access requirements of the selected method of assigning secure access:
<ul>
<ul>
<li>If either of the NT group methods are applied then the network user must be a member of the correct NT group.
<li>If ‘Application Access’ method is applied the network user must have a record in the RESOURCES table set with the correct NETWORKED_USERNAME value and have field ASSIGNED_ROLE set to a valid Risk Manager role.
</ul>
</ul>
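The ‘Application Access’ check described above can be sketched as a simple table lookup. Here sqlite3 merely stands in for the real risk database, and the role name is a hypothetical example; the RESOURCES table and its NETWORKED_USERNAME / ASSIGNED_ROLE columns are as described in the text:

```python
import sqlite3

# sqlite3 stands in for the real risk database purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE RESOURCES (NETWORKED_USERNAME TEXT, ASSIGNED_ROLE TEXT)")
# 'RiskAdministrator' is a hypothetical role name for this example.
con.execute("INSERT INTO RESOURCES VALUES ('DOMAIN\\jcitizen', 'RiskAdministrator')")

def assigned_role(networked_username: str):
    """Return the user's Risk Manager role, or None if no access record exists."""
    row = con.execute(
        "SELECT ASSIGNED_ROLE FROM RESOURCES WHERE NETWORKED_USERNAME = ?",
        (networked_username,),
    ).fetchone()
    return row[0] if row else None

print(assigned_role("DOMAIN\\jcitizen"))  # a valid role grants access
print(assigned_role("DOMAIN\\nobody"))    # None: add a RESOURCES record first
```

A user whose lookup returns no record (or no valid role) fails the ‘Application Access’ check until the corresponding RESOURCES and USERS records are added.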
=Test 2: Test Connection From Risk Manager Windows Program=
In V6.1.9 and above the non-browser client no longer requires an install set. The client can simply be copied to a target computer and run. If you wish to create menu buttons and desktop shortcut links, this can be done manually or using the provided install set. The install set version and the exe client are the same application.
* T2.1: Verify correct operation of the method of delivering the non browser client
'''Comment:''' The non browser client can be delivered in two ways:
<ul><ul>
<li> The install set is downloaded and automatically installed silently for new users when they connect to the network.
<li> New users are required to install the program themselves either from a shared network location or from CD.
</ul></ul>
If using networking software to ‘push’ the install set out automatically to new users please verify that the ocx control is registered to the interactive user.
* T2.2: Verify authentication of the connecting network user
Run program ‘BPC RiskManager’ from the Start menu. Enter selections for Risk Server Computer Name and Database Connection (optional). Click ‘Connect’.
'''Comment:''' Each system user must be set up and assigned access in the Risk Manager database. If the user testing the program has not been assigned access then you can either conclude this test as successful (since all configurations can be verified as correct at this point) or add database records to tables RESOURCES and USERS to assign system access.
The network user must pass the access requirements of the selected method of assigning secure access:
# If either of the NT group methods are applied then the network user must be a member of the correct NT group.
# If ‘Application Access’ method is applied the network user must have a record in the RESOURCES table set with the correct NETWORKED_USERNAME value and have field ASSIGNED_ROLE set to a valid Risk Manager role.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
9b9089d9ee6ed60c8defc47c18e25b0c67a5a79f
Installing BPC RiskManager Database on SQL Server 2000
0
414
609
2019-09-10T15:17:19Z
Bishopj
1
Created page with "# [[Make a server login id (BPC RM on SQL2000)]] # [[Make the database (BPC RM on SQL2000)]] # [[Restore the database access IDs (BPC RM on SQL2000)]] # Set up the initial u..."
wikitext
text/x-wiki
# [[Make a server login id (BPC RM on SQL2000)]]
# [[Make the database (BPC RM on SQL2000)]]
# [[Restore the database access IDs (BPC RM on SQL2000)]]
# [[Set up the initial user IDs]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
3c85a4d308366ac326d6b3bb6d8984d183f69b08
Make a server login id (BPC RM on SQL2000)
0
415
610
2019-09-10T15:18:16Z
Bishopj
1
Created page with "=Introduction= BPC RiskManager is a highly secure environment, so security setup of accounts is necessarily a little more involved than just starting up the database and the..."
wikitext
text/x-wiki
=Introduction=
BPC RiskManager is a highly secure environment, so security setup of accounts is necessarily a little more involved than just starting up the database and the application.
You have four options for application server login:
# Use sa (the SQL Server built-in systems administrator account)
# Use the built-in riskmanuser user account (the BPC RiskManager built-in master access account)
# Use an account of your own choosing with administration rights.
# Use an account of your own choosing without administration rights.
We recommend option 1, 2 or 3, as this makes support and configuration slightly easier, and it is already set up for you. The easiest is to use ‘sa’ to access the database from the application server – if you are doing this you can skip the rest of this step, BUT the username and password will be stored in the registry on the application server. In a similar vein, you can create another account with systems administration rights (option 3), with the same drawback as using “sa” and the added burden of having to create the account in the first place. The advantage of using a systems administration level account is that you do not need to do anything about access rights for the database itself.
The generally preferred approach is option 2, using the built in user access account (or similar) with more restricted rights than ‘sa’. The rest of this step assumes you are using riskmanuser as the database login account. As the client components never access the database directly, the database access account is only used by the application server and the database never needs to be surfaced to any computer other than the application server, and the surveymanager host.
The databases ship with the “riskmanuser” and “mailmanager” user ids already created (the actual accounts may vary in your version - refer to the documentation shipped with your application) so if you use those ids you will find future administration easier. These accounts have highly restricted rights (less than a normal user) and are therefore the preferred option.
<ul>
<li> Open Enterprise Manager (SQL 2000)
<li> Expand the folders “Microsoft SQL Servers”, “SQL Server Group” and the server corresponding to the name of your computer
<li> Expand the “Security” folder.
<li> Right click on “Logins” and choose “New Login”
<br>
<br>
[[Image:SQLEnt_NewLogin.png]]
<br>
<br>
<li> Select “SQL Server Authentication”
<li> Enter “riskmanuser” in the login name box
<li> Enter your desired password and confirm the password
<li> Write the password down somewhere handy as you will need it again soon.
<li> Select “OK”
<br>
<br>
[[Image:SQLEnt_NewLogin2.png]]
<br>
<br>
</ul>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
1d409a4345cb587c8ce674a22ce98cd9de0edb8d
Make the database (BPC RM on SQL2000)
0
416
611
2019-09-10T15:19:13Z
Bishopj
1
Created page with "=Introduction= Two options are available for creating a new risk database. The first option is easiest (BUT NOT PREFERRED) for users who do not have access to SQL Server too..."
wikitext
text/x-wiki
=Introduction=
Two options are available for creating a new risk database. The first option is easiest (BUT NOT PREFERRED) for users who do not have access to SQL Server tools (Eg Enterprise Manager or SQL Studio). This generally applies only to users of MSDE 2000. The second option is the safest and therefore preferred, but requires access to the Enterprise Manager (SQL 2000) or Database Management Studio (2005/Express) shipped with the database software. Detaching and reattaching Microsoft databases on different computers is not recommended by Microsoft.
The following instructions assume the default drive and directories are used for database files. You may substitute your own locations but must edit the supplied sql files accordingly.
=OPTION 1 – Attach Database (NOT PREFERRED in Enterprise)=
This is the best method for MSDE 2000 and single user installs.
For MS SQL Server 2000/MSDE 2000
* Attach database MDF file provided
** Copy file [RMInstallDir]\Database\MDFToAttach\RiskManDB_Data.MDF to folder: 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\'
** Run batch file: [RMInstallDir]\Database\MDFToAttach\AttachRiskMDFFile.bat
Notes: Please edit the SQL file (AttachRiskMDFFile.sql) if you copy the file to a different location. A new SQL Server log file is automatically created.
=OPTION 2 – Create & Restore Database (PREFERRED)=
* Database can be restored from SQL Server backup file. Follow these steps:
==Create database 'RiskManDB’ in SQL Server (any version)==
* It is a good idea to create a couple of databases. Eg. a Training database, a Production (main) database and possibly a Testing database. You can have as many databases as you like in RiskManager.
<ol>
<li> Right click on the “DataBases” folder and choose “New Database” from the properties.
<br>
<br>
[[Image:SQLEnt_NewDB1.png]]
<br>
<br>
<li> Enter a database name that makes sense to you. We recommend that you adopt a sensible, consistent naming convention for your databases to make management easier later. We suggest you start it with “RiskManDB” ending with a character string that identifies the database. E.G. “RiskManDB_Train08”
<br>
<br>
[[Image:SQLEnt_NewDB2.png]]
<br>
<br>
<li> Select OK to generate the new database
</ol>
==Restore the backup file==
* The backup file to restore is [RMInstallDir]\Database\BackupToRestore\2000\RiskManDB2000.bak. We must force the restore over the existing database file and fix the file locations.
<ol>
<li> In windows explorer, navigate to the supplied backup master directory:
[RMInstallDir] \Database\BackupToRestore\2000\
<li> Either double click on the supplied batch file “CopyMasterToDefaultBackup2000.bat” or manually copy the file:
<br>
<br>
[RMInstallDir] \Database\BackupToRestore\2000\RiskManDB2000.bak to the backup directory<br>
(DO NOT RESTORE DIRECTLY FROM THE SUPPLIED FILE).
<br>
<br>
The default SQL 2000 backup directory (and used by the batch file) is:
“C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP”
<br>
<br>
THE BATCH FILE IS ONLY APPROPRIATE IF THE DATABASE SERVER IS ON THE SAME COMPUTER
<br>
<br>
<li> In Enterprise Manager Expand the database list for the target server.
<li> Right click on the database you wish to restore (in this case it is the database you just created)
<li> From the Menu that appears choose “All Tasks” then “Restore Database”
<li> The Restore Database window will open. On that window the database name should already be displayed in the “Restore as database” field. Select “From device” and click on the “Select Devices” button on the right hand side.
<br>
<br>
[[Image:SQLEnt_RestoreDB1.png]]
<br>
<br>
<li> In the Choose Restore Devices window, select “Add”
<br>
<br>
[[Image:SQLEnt_RestoreDB2.png]]
<br>
<br>
<li> Select the “File Name” radio button and select the ellipsis button on the right hand side. Navigate to and select the file we just copied into your backup area and select OK.
<br>
<br>
[[Image:SQLEnt_RestoreDB3.png]]
<br>
<br>
<li> The “Choose Restore Devices” window should now be populated with your backup file. Select OK again.<br>
<br>
[[Image:SQLEnt_RestoreDB4.png]]
<br>
<br>
<br>
<li> In the “Restore Database” window, copy the database name string from the “Restore as Database” field and select the Options tab.
<li> Tick “Force restore over existing database”, and in the physical file name column of the grid replace the file name portion (PRESERVING the path and the “_data.mdf” and “_log.log” portions of the file name) with the database name you copied in the previous step. You should do this on both lines.
<br>
<br>
[[Image:SQLEnt_RestoreDB5.png]]
<br>
<br>
<li> Select OK to start the restore.
</ol>
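The file name edit in the grid (keep the path and the trailing “_data.mdf” / “_log.log” portion, swap in the new database name) can be sketched as follows. The paths and database names are illustrative examples:

```python
from pathlib import PureWindowsPath

def retarget(physical_name: str, new_db: str, suffix: str) -> str:
    """Rebuild a physical file name for the restore grid: keep the
    directory path, swap the file name portion for the new database
    name, and preserve the trailing suffix (a sketch of the manual
    edit described in the steps above)."""
    p = PureWindowsPath(physical_name)
    return str(p.parent / f"{new_db}{suffix}")

# Example: retargeting the data file for a database named RiskManDB_Train08.
print(retarget(r"C:\Program Files\Microsoft SQL Server\MSSQL\Data\RiskManDB_data.mdf",
               "RiskManDB_Train08", "_data.mdf"))
```

The same edit is applied to the log file line, using its own suffix, so both physical files end up named after the new database.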
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
774d27361d05371bc5521e45a32cb77e9b60cf3a
Restore the database access IDs (BPC RM on SQL2000)
0
417
612
2019-09-10T15:20:22Z
Bishopj
1
Created page with "=Introduction= If you are using userid SA to connect to your database you can ignore this step. The databases ship with the user ids already installed, but when an MS SQL d..."
wikitext
text/x-wiki
=Introduction=
If you are using userid SA to connect to your database you can ignore this step.
The databases ship with the user ids already installed, but when an MS SQL database is moved from one server to another the internal GUID encoding of the user ids may be different on the destination server and you may find that you can not connect with the riskmanuser account, even though it seems to be present. You can either re-create them or run the provided SQL scripts to repair them.
BPC RiskManager will support connecting to many databases at once, so it is not unusual for you to find that you want to move a database from one server to another, or to duplicate a particular database across unlinked servers. You should do this by either:
# Using the built-in data transfer system of SQL server, or
# Backup and restore, then following the steps in OPTION 2 below, as your riskmanuser id may already exist on the target recovery server.
Note also that if you are going to use more than one riskman database at once on the same database server, you will have to use the backup and restore (or equivalent duplication) method to install the database, rather than attaching, because the server will think your second attempt to attach a copy of the same database is trying to reuse the datafiles of the first and get difficult about attaching it.
=OPTION 1 (If you performed Step 1 as instructed)=
The relevant scripts can be found in:
 [RMInstallDir]\Database\Scripts\2000\
Steps to reconnect the user IDs for a restored/recovered/attached database:
<ol>
<li> In SQL Server Enterprise Manager, navigate to the database name under the databases folder and select it.
<li> Open MS Query Analyser (or an equivalent SQL query processor) on the database and copy and run the provided scripts: updateLoginRMU.sql and fix_executerights_on_loginRole2000.sql. The first script attempts to connect the database’s version of riskmanuser with the server’s version of the same user id. The second ensures that the RiskManRole has execute access to our stored procedures in the database.
<li> Navigate to the riskmanuser id under the “security” folder and “logins” at the server level, then right click and choose Properties.
<br>
<br>
[[Image:SQLEnt_AssignRMURights0.png]]
<br>
<br>
<li> Select the “DataBase Access” tab and tick the database we just restored.
<li> Verify that the database roles ‘RiskManRole’, db_datareader, and db_datawriter have been allocated to the riskmanuser id at the server level. If not, tick them to grant these roles. Now select “OK”.
<br>
<br>
[[Image:SQLEnt_AssignRMURights1.png]]
<br>
<br>
<li> If you still can’t connect the server’s riskmanuser id to the database, delete it from the database level and follow OPTION 2.
</ol>
=OPTION 2 (If something went wrong)=
Steps to create the user IDs:
# Create the login ‘riskmanuser’ and choose an appropriate password; you will need to remember this for later. (The login should already exist in the database, but you may need to delete it if you try to grant access from the top level security branch; it should then be recreated automatically in the database.)
# Delete the riskmanuser id from the database you just restored (NOT THE SERVER).
# Assign login access to the risk database(s) (at the server level).
# Assign database user membership to the database roles db_datareader and db_datawriter (at the server level).
# Assign the database role ‘RiskManRole’ to the riskmanuser id (at the server level).
If riskmanuser has not been created successfully, the application server will not connect at all when you attempt to connect later. Option 2 should always recover the access.
''In the event that you can connect from the application server, and perhaps even log in from a user account via the client, but not create risks, etc., the problem will most likely be the stored procedure access rights which are held in the RiskManRole. Run “fix_executerights_on_loginRole.sql” to fix this problem.''
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
cbacc313881260c6256ce1edc20f175497ef372b
BPC RiskManager Quick Help With Common Tasks
0
418
613
2019-09-10T15:26:08Z
Bishopj
1
Created page with "=Introduction= This page includes a list of links to pages grouped by common tasks for rapid access. =Adding an Extra Database or Moving A Database From a Different Locati..."
wikitext
text/x-wiki
=Introduction=
This page includes a list of links to pages grouped by common tasks for rapid access.
=Adding an Extra Database or Moving A Database From a Different Location=
BPC RiskManager can access as many databases as you like. Essentially, adding a database is largely a matter of attaching a database or restoring a database backup. The only real issue to which you should pay attention is the access/login id used by the application server or BPC SurveyManager library to access the database.
You can find instructions for setting up a new database here:
Create a new database in your database server:
* [[Instaling BPC RiskManager Database on SQL Server 2005 or SQL Express]]
* [[Instaling BPC RiskManager Database on SQL Server 2000]]
You will probably also need to establish the local configuration options for the risk manager application server and survey manager library to be able to access the database and connect the database to your network environment (such as mail servers, web sites, etc). You can find instructions for these aspects here:
* Add a connection from the Risk Manager Dataserver to the database [[BPC RiskManager - Database Configuration]]
* Connect the database to your email system [[BPC RiskManager - Mail Server Connection Properties]]
* Distribute web pages with connections to the new database [[BPC RiskManager - Distribution of Client Components]]
* Generate and connect a new survey manager library for the database [[BPC RiskManager - Install The SurveyManager]]
=Restoring or Replacing a RiskManager Database=
In this scenario we assume that you are restoring an existing 'in use' database from a backup, or from an upgraded database returned after conversion or upgrade by Bishop Phillips Consulting. The key issue here is that the connection from the risk manager data server already exists, so you do not need to create a new one.
The first step is to restore the database from the backup:
* [[Make the database (BPC RM on SQL2005)]]
In moving the database from one server to another, the login id held in the database that the RiskManager dataserver uses will probably become 'disconnected' from the SQL Server instance, so we need to re-link the login id to the restored database.
* [[Restore the database access IDs (BPC RM on SQL2005)]]
If the database is otherwise the same as the original database you are replacing (i.e. you are restoring from a backup), then you are finished.
Assuming that the database being restored is a replacement database, not originally sourced from the destination database server, we will need to perform another couple of configuration steps:
* If you are using locally managed access control you should
** [[BPC RiskManager - Create the Root Administrator]]
** And either [[Set up the initial user IDs]] or [[BPC RiskManager Client - Add new users| add users using the BPC RiskManager client]]
* Connect the database to your email system [[BPC RiskManager - Mail Server Connection Properties]]
* Generate and connect a new survey manager library for the database [[BPC RiskManager - Install The SurveyManager]]
=Changing the security model=
BPC RiskManager supports multiple access security modes. These can be changed at any time and are interchangeable, but only one access mode can exist on a specific application server (although multiple application servers connecting to one database could each use a different access model if one desired).
The modes currently supported are:
* Fixed (where everyone accesses with a single default role)
* Managed in RiskManager (where the Risk Manager manages roles, passwords and all the access rights itself)
* LDAP (where users are authenticated by an LDAP server and roles are managed in the application)
* AD (where users are authenticated by an MS Active Directory server and roles are managed in the application)
* NT Groups (where users and roles are defined in NT Groups)
For locally managed modes the logins may be trusted or untrusted. Trusted means that the system assumes the login id presented by the client is legitimate and authorised (and so does not check the password), while untrusted means that RiskManager requires a password and authenticates it against the user id.
In addition, user ids can include or exclude the network domain portion. Generally we recommend excluding the network domain. If you include the network domain, then the same user logging in from different domains is treated as two separate users, one of whom may have access while the other may not.
To change the access security mode to "locally managed" or any of the other modes go to:
* [[Security Configuration - Update Installation and Reset]]
<br>
=How to move RiskManager into production after dev/test has been approved=
We have prepared a comprehensive guide to rolling BPC RiskManager into production (or adding an extra application server in production).
* [[Steps For Migrating RiskManager V6.x from Test To Production]]
=What to do after adding a new application server or moving test/dev into production=
Enterprise sites normally have multiple installations: possibly multiple application servers or (more commonly) separate dev/test and production systems. In these sites the database to which the new application server is connecting may have already been configured and installed, and you just want to connect to it from a new application server. The best way to do this is simply to run the installer on the new application server. After you do, however, the client may seem unable to connect. This is because the installer has set you up as if you were a new installation, rather than an existing one.
This page tells you how to switch the application server into the correct security model for your site.
* [[BPC RiskManager Server - After installing in production or adding an application server]]
Also, it might help to check out the guide on setting up production, in particular the section about "after installation":
* [[Steps For Migrating RiskManager V6.x from Test To Production]]
=Managing Resources & Access=
* [[BPC RiskManager Client - Add new users| Add users using the BPC RiskManager client]]
=Updating your user preferences in BPC RiskManager=
Various user details can also be set or updated on a per-user basis by the individual user. These include your password, spell checker, screen colour coding, screen resolution handling, etc.
* [[BPC RiskManager - Updating Your Personal Preferences]]
=Recycle IIS WorkerProcess for the HTTPSrvr.dll=
In the unlikely event that users report that they cannot log on using the HTTP or HTTPS connection mode, and that they get an error message that either questions whether the socket server is running on the server, or states "could not convert variant of type(Dispatch) into type (Integer)", it is likely that your HTTPSrvr.dll worker process on the server has locked up:
[[Image:HttpSrvr Client LoginErrorMsg1.png]]
You can either wait 20-30 minutes for the process to naturally recycle, or do the following:
* [[BPC RiskManager - Recycling the HTTPSrvr Worker Process]]
=Configure IE for the BPC RiskManager Plugin=
The BPC RiskManager is shipped with multiple client versions, one of which is a browser plugin for IE and compatible browsers. The browser plugin is a VeriSign-signed ActiveX control, similar to other plugins supplied with IE such as Flash and Adobe, etc. To allow your browser to download, install and run the plugin you may need to make some browser configuration changes, as detailed on this page:
* [[BPC RiskManager Client - Browser Setup For ActiveX Plugins using IE 7]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
78747d8ee0fb83073b782ae7401a1f2b51d4a47d
BPC RiskManager - Recycling the HTTPSrvr Worker Process
0
419
614
2019-09-10T15:27:31Z
Bishopj
1
Created page with "=Introduction= Occassionally the HTTPSrvr library that provides your HTTP or HTTPS mode connection to your BPC RiskManager client may need to be recycled. One scenario that..."
wikitext
text/x-wiki
=Introduction=
Occasionally the HTTPSrvr library that provides your HTTP or HTTPS mode connection to your BPC RiskManager client may need to be recycled.
One scenario that may induce this is an unexpected and critical connection failure during login.
In the unlikely event that users report that they cannot log on using the HTTP or HTTPS connection mode, and that they get an error message that either questions whether the socket server is running on the server, or states "could not convert variant of type(Dispatch) into type (Integer)", it is likely that your HTTPSrvr.dll worker process on the server has locked up:
[[Image:HttpSrvr Client LoginErrorMsg1.png]]
The HTTPSrvr library provides the HTTP-based connection brokerage and runs under an IIS worker process on the IIS server. In the event you are receiving this message you can:
# Connect using the normal mode (if a network connection to the socket server is available on the server), or
# Wait 20-30 minutes for the process to recycle naturally, or
# Manually force a recycle of the process on the IIS server, as follows.
=Recycling the IIS Worker Process=
<OL>
<li> Open the IIS Manager (or right click on My Computer) and expand the “Internet Information Services”/computer name tree.
<li> In the IIS manager, expand the Application Pools tree and right click on the icon matching the application pool name you created in step 6 during the install. In the example provided the application pool is called "RMSSL". Select "Recycle" from the menu.
<br>
<br>
[[Image:RMSS_HTTPSrvr7.png]]
<br>
<br>
</OL>
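If you prefer to script the recycle rather than use the GUI, the following sketch shows one way to do it, assuming IIS 7 or later where appcmd.exe is available (earlier IIS versions do not ship appcmd). The pool name "RMSSL" is just the example name from the install; substitute your own.

```python
import subprocess
from pathlib import Path

def recycle_app_pool(pool: str = "RMSSL", dry_run: bool = True):
    """Recycle an IIS application pool using appcmd.exe (IIS 7+).

    With dry_run=True the command is returned for inspection
    instead of being executed.
    """
    appcmd = Path(r"C:\Windows\System32\inetsrv\appcmd.exe")
    cmd = [str(appcmd), "recycle", "apppool", f"/apppool.name:{pool}"]
    if dry_run:
        return cmd
    subprocess.run(cmd, check=True)  # raises if appcmd reports an error
    return cmd

cmd = recycle_app_pool()
```

Run with `dry_run=False` on the IIS server itself (from an elevated prompt) to actually force the recycle.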
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
7cbcbbf90e793ab4ceef0b5e8d2dcded50cba4f7
BPC RiskManager and BPC SurveyManager Importer Masks
0
420
615
2019-09-10T15:30:22Z
Bishopj
1
Created page with "=Introduction= Both BPC SurveyManager and BPC RiskManager are shipped with a low level import/export tool. In BPC RiskManager the tool is hosted in the application server co..."
wikitext
text/x-wiki
=Introduction=
Both BPC SurveyManager and BPC RiskManager are shipped with a low level import/export tool. In BPC RiskManager the tool is hosted in the application server configuration screens and protected by a randomly generated code, which you obtain by telephoning or emailing the BPC HelpDesk with the information on the screen when requested and entering the code we provide you. In BPC SurveyManager the import/export tool is built into the client-server client. The tool provides low level access to the database and reads CSV files (such as those produced by MS XL).
The importer has a number of useful capabilities designed to allow manipulation of the data during import and is primarily intended for bulk import tasks. One of those capabilities is the importer masks, which allow rapid conversion and formatting of data during import.
The importer masks, therefore, make modifications to the imported fields.
=About BPCMasks=
The structure of a bpcMask is as follows:
 FORMAT=part1;part2|part3
part1 is required; parts 2 & 3 are optional.
* part1 is a literal string with an optional '*'. The '*' is replaced by the existing contents of the field (after the part2 and part3 mods are applied).
* part2 is a sequence of modifier codes:
** '_' causes all blanks to be replaced with '_'
** '0' causes leading zeros to be stripped (by converting to a number)
** 'Cx' causes the leading x character to be removed
** 'Kx' causes everything to the right of (and including) the first 'x' to be deleted
** 'kx' causes everything to the left of (and including) the first 'x' to be deleted
** 'Fx-n.mf' causes a max of m chars from the source to be placed in an n char field, left ('-') or right (no '-') justified, and padded with x
* part3 is a Delphi format mask (see below)
For example, the mask:
 T#;_0|(000)_000-0000;0;*
makes mods to the imported field in this order:
* Replace blanks with '_' (the '_')
* Strip leading zeros, by converting to a number (the '0')
* Use everything after the '|' as a format mask
* Finally, take the result and put a T in front
So 00001234567899 would become T(123)_456-7899. Similarly, the mask:
 T#;_
would make 00ABC DEF become T00ABC_DEF.
Another example: T*;C0F06.5f|000000;0;* will convert 0000003628 into T003628.
Refer to the FormatMaskText reference below for more info.
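As a rough illustration of how the worked examples above behave, here is a simplified Python interpretation of the mask logic. It is a sketch of the documented behaviour, not the importer's actual implementation: it covers only the '_' and '0' part2 modifiers and the literal-plus-'0' subset of the Delphi format mask, and it assumes that '#', like '*', marks where the transformed field contents are substituted into part1 (which is how the examples read).

```python
def apply_bpc_mask(mask: str, value: str) -> str:
    """Apply a simplified bpcMask of the form part1;part2|part3."""
    part3 = None
    if '|' in mask:
        mask, part3 = mask.split('|', 1)      # everything after | is the format mask
    part1, _, part2 = mask.partition(';')
    if '_' in part2:                          # replace blanks with '_'
        value = value.replace(' ', '_')
    if '0' in part2:                          # strip leading zeros
        value = value.lstrip('0')
    if part3 is not None:
        fmt = part3.split(';')[0]             # ignore the mask's 2nd/3rd fields
        src = iter(value)
        # '0' consumes one source character; everything else is a literal
        value = ''.join(next(src, '') if ch == '0' else ch for ch in fmt)
    # part1 is a literal; '*' (or '#' in the examples) marks where the
    # transformed field contents are substituted.
    for ph in ('*', '#'):
        if ph in part1:
            return part1.replace(ph, value)
    return part1 + value

example1 = apply_bpc_mask("T#;_0|(000)_000-0000;0;*", "00001234567899")  # "T(123)_456-7899"
example2 = apply_bpc_mask("T#;_", "00ABC DEF")                           # "T00ABC_DEF"
```

The Cx, Kx, kx and Fx-n.mf modifiers and the full Delphi mask character set are deliberately omitted to keep the sketch short.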
<pre>
Character Meaning in mask
! If a ! character appears in the mask, optional characters are represented in the returned string as leading blanks. If a ! character is not present, optional characters are represented in the returned string as trailing blanks.
> If a > character appears in the mask, all characters that follow are in uppercase until the end of the mask or until a < character is encountered.
< If a < character appears in the mask, all characters that follow are in lowercase until the end of the mask or until a > character is encountered.
<> If these two characters appear together in a mask, no case checking is done and the data is formatted with the case present in the Value parameter.
\ The character that follows a \ character is a literal character. Use this character to use any of the mask special characters as a literal.
L The L character requires an alphabetic character only in this position. For the US, this is A-Z, a-z.
l The l character permits only an alphabetic character in this position, but doesn't require it.
A The A character requires an alphanumeric character only in this position. For the US, this is A-Z, a-z, 0-9.
a The a character permits an alphanumeric character in this position, but doesn't require it.
C The C character requires an arbitrary character in this position.
c The c character permits an arbitrary character in this position, but doesn't require it.
0 The 0 character requires a numeric character only in this position.
9 The 9 character permits a numeric character in this position, but doesn't require it.
# The # character permits a numeric character or a plus or minus sign in this position, but doesn't require it.
: The : character is used to separate hours, minutes, and seconds in times. If the character that separates hours, minutes, and seconds is different in the regional settings of the Control Panel, that character is substituted in the returned string.
/ The / character is used to separate months, days, and years in dates. If the character that separates months, days, and years is different in the regional settings of the Control Panel, that character is substituted in the returned string.
; The ; character is used to separate the three fields of the mask.
_ The _ character automatically inserts spaces into the returned string.
</pre>
Any character that does not appear in the preceding table can appear in the first part of the mask as a literal character. Literal characters are inserted automatically if the second field of the mask is 0, or matched to characters in the Value parameter if the second field is any other value. The special mask characters can also appear as literal characters if preceded by a backslash character (\).
The second field of the mask is a single character that indicates whether literal characters from the mask are included in the Value parameter. For example, the mask for a telephone number with area code could be the following string:
(000)_000-0000;0;*
The 0 in the second field indicates that the Value parameter should consist of the 10 digits of the phone number, rather than the 14 characters that make up the final formatted string.
A 0 in the second field indicates that literals are inserted into the Value string; any other character indicates that they are already included. The character used to indicate whether literals should be included can be changed by changing the MaskNoSave constant that is declared in the MaskUtils unit.
The third field of the mask is the character that appears in the returned string for blanks (characters that do not appear in Value). By default, this is the same as the character that stands for literal spaces. The two characters appear the same in the returned string.
Note: When working with multibyte character sets, each special mask character represents a single byte. To specify double-byte characters using the L, l, A, a, C, or c specifiers, the mask characters must be doubled as well. For example, LL would represent two single-byte alphabetic characters or a one double-byte character. Only single-byte literal characters are supported.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
7ffd79a9ca36eccd3b0926f76c934c65c0d91722
BPC Surveymanager - Key Features
0
421
616
2019-09-10T15:31:54Z
Bishopj
1
Created page with "=Key Features:= *Optional integration with BPC RiskManager. *Optional integration with the Winnfield & Waisman* Virtual World Learning System in Second Life** for remote trai..."
wikitext
text/x-wiki
=Key Features:=
*Optional integration with BPC RiskManager.
*Optional integration with the Winnfield & Waisman* Virtual World Learning System in Second Life** for remote training virtual campus management.
*Unlimited organisations, survey managers, respondents, with configurable access rights, and unit and group reporting/analysis.
*Surveys work on any HTML 4+ web browser as well as PDAs.
*Distribute surveys across hundreds of organisations with organisation-level customisation but central question list control: survey look can be customised at the lowest reporting unit level, while question content remains centrally controlled.
*Surveys can contain a huge range of input controls for radio buttons, lists, drop lists, menus, buttons, links, login, password fields, text, file upload, surveys within surveys, clickable image maps, pop-up hints, custom defined, and many more.
*WYSIWYG input controls for multi-line text input.
*Change a single property in a question definition to instantly change response lists between lists, drop lists, radio buttons, normal buttons, link lists, menus, etc
*Set responses to collect single or multiple responses per question
*Server side or client side edit checks (range, type, alphanumeric content input validation)
*Embed anything displayable on a web browser in a survey question.
*Change input and response types and survey content/questions after publication (in fact any part of the survey, at any time!)
*Layout and configure the survey web pages and questions in any layout displayable in a web browser
*Automatically collect user advised ‘importance’ or weighting ratings for each question in a survey if desired by checking one flag
*Mandatory or non-mandatory questions
*Display different questions to different users for the same organisation, survey, survey instance, etc.
*Windows client supports a distributed database architecture allowing development, testing and analysis of surveys on a laptop and transfer of data between databases.
*Optionally store documents and files in the surveymanager database for retrieval as part of a web page/survey, etc (saves enabling write privileges to local drives)
*Automatic print friendly versions of each page, or an entire multipage survey.
*Optionally use CSS style sheets for layout and page behaviour, or use the built in layout tags instead
*Share questions across multiple surveys in multiple organisations
*Optionally allow users to revisit and revise survey responses multiple times
*Run the same survey each week to the same people (uses survey instances)
*Each response keyed by organisation, survey, survey instance, question, question fork (used in 360 degree surveys), and user.
*Separation of concepts of survey creation, distribution and publishing
*Define exception ranges for response and track follow up actions.
*Dynamically generates survey pages based on organisation, survey question, survey page, user, user properties, previous responses to current and previous surveys or external application tests, plus more.
*Rules engine allows for rules-based page construction and includes a built-in natural language parser to process unstructured text responses, as well as tests for conventional conditional operators for equal to, greater than, less than, like, contains, etc.
*Multicolumn surveys.
*Build surveys of surveys. Eg. Put one survey in a left-hand panel that is a menu of surveys a user can select, and display in the right-hand panel each survey selected in the left-hand panel.
*Build survey questions with responses from other (or the same) surveys embedded in the question text as lists, statistical analysis, full dynamically built sentences, and more.
*Build surveys with automatic annotations based on responses (allows web pages that automatically provide running advice or commentary based on user responses)
*Build surveys that allow for reviewer’s comments on other user’s responses.
*Build surveys in both structured “question – answer” layouts and unstructured report layouts, with paragraphs and responses embedded in the text. (Allows templated reports with mixed survey data, analysis and management report commentary).
*Build quizzes and tests that allow for automated marking.
*Build ‘homing’ survey pages that continually redisplay the same page or fork to other pages and return to the original homing page on completion.
*Build dynamically generating surveys with unlimited questions per page, while still enabling the use of rules to select the questions displayed - no single-question-per-page surveys needed
*Plug in libraries allow real-time two way interfacing to external systems
*Built in archiving of survey responses
*Instant, real time report production with many options per question including, responses, response counts, responder names, percentage response breakdown, graphical response display, etc.
*Optionally allow unrestricted automatic (anonymous) or restricted (invited) survey access.
*Upload recipients from CSV (comma separated) files; export responses to XL and a database-readable XML format
*Distinguishes between online and paper based survey responses
*Optionally create surveys directly from marked up MS Word documents (Note – dynamic capabilities are not supported with MS Word documents)
*Mouse over sensing controls for hints and help fields
*Built in randomising news server.
*Manages organisations, organisation structures, surveymanagers, respondents, guests, etc
*Generates emailed invitations/reminders (any number, with changing text based on the number of invitations sent to the individual).
*Timestamps survey commencement and completion
*Manages Distributed databases and data replication
*All data stored internally in 2-byte character sets
*Works with http or https.
*Really simple to install.
*Survey engine is completely stateless.
*Connects to as many separate databases as your system can hold.
*….And lots more
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
1f0c46c711432236760f71496e83efc097f047d1
BPC SurveyManager - Introduction
0
422
617
2019-09-10T15:33:54Z
Bishopj
1
Created page with "==BPC SurveyManager - Purpose, Origin and Capability== ===Introduction - The Purpose=== BPC Survey Manager is an exceptionally powerful survey engine. It was originally c..."
wikitext
text/x-wiki
==BPC SurveyManager - Purpose, Origin and Capability==
===Introduction - The Purpose===
BPC Survey Manager is an exceptionally powerful survey engine. It was originally conceived to support control self assessment, but in the intervening years has grown to service a bewildering range of web data collection scenarios. The best way to think of it is that the survey system is a specialized web page design and delivery engine that happens to be oriented to surveys, and therefore stores everything a user enters as if it was a unique survey response.
The current version of the BPC Survey Manager is used for purposes ranging from state-wide tertiary student surveys, performance measurement, compliance tracking, risk data collection, controls analysis, audit data collection, 360 degree staff reviews, automated legal assessment, web site construction, and, oh - surveys. Its original purpose as a control self assessment tool is fundamental to its success as a general purpose survey tool.
Although the system comes with an array of management clients, once a survey has been built you can actually administer a survey using just an email client like Outlook, which was how the very first version worked!
===The Origin===
Control self assessment is an audit concept from the 1980s that was designed to reduce the costs of audit and control by establishing a framework of control compliance checklists and performance records that are then completed by the operators of the various business processes and control systems in an organisation. The control self assessment forms effectively become compliance attestation statements that are completed in line with the relevant control cycle. So, if a supervisor had to do certain things on a weekly basis, he or she would complete a control form that attested to whether those things had been done, and possibly the performance statistics associated with having done them. This would generate a form each week. The result for management was a continuous, up-to-date picture of systems performance, control and compliance.
The efficiency gain appeared when the audit process began. In a control self assessment control model, audit does not audit the underlying transactions; rather, it audits the veracity of the control self assessment system. It does this by testing the honesty of the person completing the control self assessment forms. Instead of sampling the entire transaction base of, say, a transaction system over a period (resulting in potentially large sample sets), audit block-samples the control self assessment forms and tests the transactions that relate to the forms. Essentially a stop-go sampling method can be employed which delivers a pass or fail as to the reliability of the self assessment. Only if the control self assessment forms are found to be in error is a full statistical sample set required of the underlying transactions. Our measurements of the efficiency gain were in the order of a 30% reduction in audit cost.
The weakness in control self assessment (CSA) when it was conceived in the 1980s was that the cost of managing the mountains of paper control forms generated far outweighed the cost advantage achieved from more efficient audit methods. Consequently CSA enjoyed minimal adoption in process design, with a few notable exceptions in Total Quality Control organisations. The advent of internet and intranet technologies provided a potential solution to that problem, but only if the cost of preparing the control forms could be minimised and the automation employed could prove able to handle extreme data loads at low cost.
===BPC Survey Manager Capability===
A control self assessment form is essentially a survey, but in order for a survey system, such as BPC SurveyManager, to handle it, the survey engine must satisfy a few essential criteria:
# It must be able to publish a survey without requiring a web programmer.
# Surveys must be very fast to construct and deploy.
# The surveys must be able to be published to specific people - not simply anonymously.
# Survey responders should be able to receive invitations to take a survey via a simple email message with a clickable link.
# The range of information collected must include text, numbers, dates, selection lists, weights, documents, etc.
# The system must be able to collect data from both people and devices.
# The data collected must be able to be analysed down to each responder - not just in the form of a poll or vote.
# Content should be able to be re-used in other surveys.
# Questions should be uniquely identified across all surveys - not just within the survey - so that cross survey data analysis can be performed.
# The content must be able to interface with the content of other surveys and previous results, and change dynamically depending on the answers given.
# Real-world organisational structures should be supported, including matrix structures as well as divisional trees.
# The survey must be able to be altered after commencement without invalidating the previously collected results.
# The survey must be able to be delivered to every kind of input device - from a PDA to a laptop.
# The survey should behave like paper - you should be able to complete it over days without having to re-key everything you already entered.
# The survey engine and data must be future proofed so that it continues to work as the underlying technology evolves.
# The data collection engine must be able to scale spectacularly.
The issue of scaling is particularly critical. In a CSA environment with 70,000 or more employees completing a weekly CSA survey, they might all decide to complete it at 4:50 PM - ten minutes before leaving work for the weekend. The survey engine must be able to handle such a large load without falling over, or staff will simply not bother and the system will break down.
BPC Survey Manager handles all these requirements, and more.
==BPC Survey Manager - System Components==
===Introduction to the Components===
The versatility of the survey management system means that it is potentially an extremely complex application in terms of how it works and what it can do. To deal with this problem, one of our ongoing tasks is simplifying it for the user, so you don’t need a degree in it to use it. 95% of the time a few simple capabilities are all that is required, but the BPC Survey Manager system is designed to handle a larger array of very obscure scenarios. So there are often multiple ways of accomplishing the one task.
Essentially BPC SurveyManager comes in two broad components:
# The BPC Survey Engine – This is the library that actually delivers the surveys to a responder's screen, and also a range of reports and some administration functions, and
# The BPC SurveyManager Client - This is the component that designs and manages surveys that are responded to by the Survey Engine. It is primarily an administrator's tool.
===The BPC Survey Engine===
The Survey engine is straightforward. There is one of these and it does everything (at least as far as delivering surveys and getting responses). It is a stateless ISAPI dynamic link library (dll) that essentially operates like a web service, without the self-publication component. It is designed to work on IIS servers version 5 and above, and can run in a secured or anonymous user access configuration. It is essentially insensitive to the version of IIS.
It does not care what version of Windows you are running and will work on Windows 98 / Windows 2000 / XP / 2003 / Vista / Windows 7 and Windows 2008. It has an extremely low memory-resident data load and is only a few megabytes in size itself. It can be installed by simply copying it onto a web server, although there are some registry entries that would need to be added.
It connects to an SQL Server database via ADO (built into Windows 2000 and later) - MS SQL 2000, MSDE 2000, MS SQL Server 2005, MS SQL Express 2005, MS SQL Server 2008, MS SQL Express 2008. It can coexist with many instances of itself on the same IIS server, and does not care what it is called. The survey engine can work with as many separate survey databases as you like, and multiple engines can even talk to each other.
A browser that connects to the survey engine never knows the name of the database to which the survey engine is connecting.
It can handle thousands of simultaneous users, can be deployed in a web farm, and can be stopped and started while operating with almost no impact. It should operate as an IIS worker process, and likes having the cycle time set - so it runs virtually unattended for years. Further, you can run database backups without stopping the survey engine itself.
It has a built-in debug mode so you can get a dump of everything it is doing to build a page, on a per-survey basis, when you are designing complex surveys.
The BPC Survey Engine delivers HTML 4 code (but essentially using mainly the HTML 3.2 subset), and optionally supports both DHTML / CSS extensions and Javascript. This means that the surveys are essentially insensitive to the browser accessing them. Further, it can optionally utilise templates and plugin libraries that use the Survey Engine plugin extension API.
===The BPC Survey Manager Client===
While the Survey Engine is really the heart of BPC Survey Manager, most people think of one of the clients as the "Survey Manager" system. This is understandable as this is the way most survey administrators see the system. Of course, survey responders never see the SurveyManager client as they just access the system via a browser, usually from a link sent in an emailed invitation.
The versatility of the survey engine can rapidly create very complex management clients if all capabilities are surfaced at once. This has led to an array of BPC SurveyManager clients that are delivered through a variety of mechanisms (Browser, Desktop Executable, and RiskManager) and serve specific requirements. There are a number of these; they work in different ways and deliver different combinations of the core engine's capabilities – to give some semblance of simplicity to users.
These are the options:
* BPC RiskManager – RM has a simplified SM client that thinks the world consists of only the “default” organization. It ignores filters, but surfaces properties and allows the creation of moderately complex surveys. Most importantly, it knows that there are RM tables in the database and can update those tables as well as the SM tables. It is an application server client, so it talks to an intermediary layer which then talks to the database. It also has the advantage that it can make use of the RM script engine for working with the results, assign and track actions arising from responses, note exceptions, and more.
* BPC SurveyManager DeskTop – This can do everything, and it is complex as a result. Some of its screens are a bit brutal. The desktop client is the only way to make use of the distributed database capabilities of the survey engine. Notionally the SM Desktop can be set up to talk to a local copy of the survey database on which you design a survey; you then tell it to distribute the survey to a particular range of survey engine databases, and later to get the results down from those databases. On the downside, it knows nothing about the fact that there are RM tables in the database, so it ignores them.
* BPC SurveyManager WebClient – the web client is designed for pure browser based management of surveys. It is intended for large scale survey databases with large numbers of organizations or organization units, supports bulk actions well (like importing large numbers of responders/users), and allows separate administration users for each organization and region (group of organizations) as well as whole-of-database “super users”. It has a simple survey creation model and very good multi-org deployment capabilities. It knows about templates, and can prevent administrators from changing questions in a survey that is being centrally deployed (whereas the Desktop assumes the only users using it have god-like status). On the downside, it knows nothing about the existence of the RM tables and therefore ignores things like exceptions and action tracking. It also restricts the appearance to a pretty standard look, whereas both the RM and Desktop clients allow you to completely manipulate the look of a survey. The web client also has a very good manual which is set at the idiot level. We use the web client to handle the Victorian (Australia) tertiary education student survey which spans 400+ organizations and thousands of students annually, with the 400 orgs each having their own administration area in the one database.
All these clients can be used simultaneously on the one survey database and the one survey. So a survey might be designed in one client and managed in another. They are not mutually exclusive.
The survey engine in the BPC RiskManager database is the full survey engine, and to make it work with BPC Risk Manager, the full survey engine database is merged into the risk manager database. At this time, the survey manager client in the risk manager system assumes a single organisation for all surveys, relying on the risk manager application to distribute results where needed to risks and then – indirectly – across the risk manager's view of the organisation structure held in the risk manager part of the database. The BPC SurveyManager desktop and web clients do not know the BPC RiskManager system exists; they assume they are responsible for everything and therefore have a bit more power to them with respect to organization control.
==BPC Survey Manager - Survey Components==
The survey engine never actually stores a displayed page per se; instead it dynamically builds every page line by line as required. Because of its original purpose, we call every line in the page a question, but this is a little misleading as a line may actually be a picture, or a heading, or a hidden screen region, etc. In its simplest form a survey has:
<table border=1 >
<tr>
<td>
Survey Header
</td>
<td>
This contains the administration control and general layout information for a survey. It acts as the hub for all the other components of a survey. The layout section allows you to provide any html layout you desire and wraps all the other parts of the survey. Everything that comes from the survey engine is referenced in this layout by special markup tags - including the entire survey body itself.
</td>
</tr>
<tr>
<td>
Survey Reminders
</td>
<td>
A survey can have any number of reminders that are plain text or HTML rich emailable messages with their own markup tags allowing the embedding of large amounts of custom information, including responses and reports from this or other surveys.
</td>
</tr>
<tr>
<td>
Survey Pages
</td>
<td>
The survey pages are dynamically generated as required in groups of questions we call "question groups". In most situations one or more question groups match a conventional survey page, but this is not required.
</td>
</tr>
<tr>
<td>
Survey Page Header
</td>
<td>
You can define custom headers that appear on survey pages, or, as is more common you just let the survey layout definition deliver the header and footer.
</td>
</tr>
<tr>
<td>
Survey Page Footer
</td>
<td>
You can define custom footers that appear on survey pages, or, as is more common you just let the survey layout definition deliver the header and footer.
</td>
</tr>
<tr>
<td>
Survey Questions
</td>
<td>
Survey questions represent a pool of questions that are dynamically added to each page as required. Survey questions are actually stored centrally so the same question can be re-used in each survey, and therefore its unique identifier will be the same in each survey in which it is used. This allows responses to be analysed across different surveys. Every survey question has an optional weight or performance rating that is stored as a floating point number with the response. This can provide a valuable insight into the reason for certain responses. The field can also be used for any other purpose where a weight is desired.
<br>
<br>
A question has both a layout and a content portion allowing complex layouts in the question itself. E.g. in one of the example surveys we show a survey with an entire other survey embedded in one of the questions. You can even put the same survey into a question in a survey - effectively creating a recursive survey - like the effect of having two mirrors reflecting each other.
<br>
<br>
Question layouts can range from conventional question-response structures through to report style layouts where the response is embedded as part of a body of text. This is ideal for generating management reports and other templated structures directly from the survey engine.
</td>
</tr>
<tr>
<td>
Distribution list
</td>
<td>
The distribution list is the list of people responding to the survey. We refer to the act of connecting a survey to its distribution list as "publishing" the survey.
</td>
</tr>
<tr>
<td>
Event messages
</td>
<td>
Event messages cover the variety of events that result in information being communicated to the user - such as the survey ending, or survey access not being allowed, etc. These can all be customised if desired.
</td>
</tr>
<tr>
<td>
Rules
</td>
<td>
Because the questions on each page are actually dynamically generated, there is a rules engine in the system. Every question can have its own set of rules, which may connect to an external plugin to which it might pass the response, or from which it might collect some additional information (or even a replacement response). More commonly the rules will determine which questions to display to the user on the next page. The rules might cause the survey to loop back on itself, or reject the answer and request a different response.
</td>
</tr>
<tr>
<td>
Survey Responses
</td>
<td>
Ultimately the survey is about getting and recording responses. The responses are recorded uniquely by organisation, survey, instance, person, question, and realm. Where the option is turned on, the importance rating of each question is stored with the person's response.
</td>
</tr>
</table>
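As an illustration only, the survey components in the table above can be sketched as a minimal data model. The class and field names below are our assumptions for the sketch, not the actual SurveyManager schema:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of the survey components described above.
@dataclass
class Question:
    qid: str             # unique across all surveys, enabling cross-survey analysis
    text: str
    weight: float = 0.0  # optional weight/performance rating stored with the response

@dataclass
class QuestionGroup:     # roughly corresponds to a conventional survey page
    name: str
    questions: list = field(default_factory=list)

@dataclass
class Survey:
    header: str          # layout that wraps all other parts of the survey
    groups: list = field(default_factory=list)
    distribution: list = field(default_factory=list)  # responders ("publishing")

q = Question("Q-SAFETY-01", "Rate site safety", weight=0.8)
s = Survey(header="<html>...survey body markup tag goes here...</html>",
           groups=[QuestionGroup("Page 1", [q])],
           distribution=["alice@example.com"])
```

The point of the sketch is the nesting: a survey wraps question groups, which hold questions, while the distribution list is attached to the survey itself.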
==BPC SurveyManager - The Database Structure==
===A waterfall diagram of the Survey Manager database===
The grossly oversimplified structure of the survey engine database is (object names repeat because they are indexed in multiple ways and therefore the same object can be accessed with multiple sets of indexes):
Database has:
<table>
<tr>
<td>Server Configurations</td><td></td>
</tr>
<tr>
<td>Instances</td><td></td>
</tr>
<tr>
<td>People</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>Access Rights</td><td></td>
</tr>
<tr>
<td>.</td><td>Instances</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>.</td><td>Properties</td><td></td>
</tr>
<tr>
<td>.</td><td>.</td><td>Filters</td><td></td>
</tr>
<tr>
<td>.</td><td>.</td><td>Surveys</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>.</td><td>.</td><td>Instances</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>.</td><td>.</td><td>.</td><td>Responses</td><td></td>
</tr>
<tr>
<td>Data Folders</td><td></td>
</tr>
<tr>
<td>Archive</td><td></td>
</tr>
<tr>
<td>Publishing Servers</td><td></td>
</tr>
<tr>
<td>Reports</td><td></td>
</tr>
<tr>
<td>Organisation regions</td><td>(have)</td><td>.</td>
</tr>
<tr>
<td>.</td><td>Organisation units</td><td></td>
</tr>
<tr>
<td>.</td><td>Responses</td><td>[View]</td>
</tr>
<tr>
<td>Global Organisation</td><td></td>
</tr>
<tr>
<td>Organisation units</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>Data Folders</td><td></td>
</tr>
<tr>
<td>.</td><td>Responses</td><td>[View]</td>
</tr>
<tr>
<td>.</td><td>Properties</td>
</tr>
<tr>
<td>.</td><td>Questions</td>
</tr>
<tr>
<td>.</td><td>People</td>
</tr>
<tr>
<td>.</td><td>Surveys</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>.</td><td>Properties</td>
</tr>
<tr>
<td>.</td><td>.</td><td>Responses</td><td>[View]</td>
</tr>
<tr>
<td>.</td><td>.</td><td>Instances (like January, February, etc)</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>.</td><td>.</td><td>Responses</td><td>[View]</td>
</tr>
<tr>
<td>.</td><td>.</td><td>.</td><td>Question Groups (like pages)</td><td>(have)</td>
</tr>
<tr>
<td>.</td><td>.</td><td>.</td><td>.</td><td>Properties</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Questions
</td>
<td>
(have)
</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Responses
</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Properties
</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Filters
</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Rules
</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Select Ops
</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Numeric Ops
</td>
</tr>
<tr>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
.
</td>
<td>
Date Ops
</td>
</tr>
</table>
Some notes on the above tree diagram:
*This tree layout is necessarily simplified to provide a feel for the overall database structure using primary relationships. In actual implementation it is considerably more complex than this and there are many more cross relationships than shown.
*Object names are repeated to show indexing relationships.
*Some objects have been omitted for clarity.
*The object names do not correspond to the underlying table names - rather the object names correspond to the 'purpose' of the object.
*There is one response table, but it has multiple views built on top of it - hence the use of the [view] marker to indicate this.
*Question groups are not so much a grouping mechanism as an indexing relationship. Questions can be directly indexed off the surveys, rather than requiring the question group to be found as the diagram relationship implies.
*Relationships are complicated by an implied global id for most objects called "default". The global organisation is really an organisation unit called "default", but things placed in the global organisation are accessible by all organisations regardless of any regional organisation grouping. Variations of this concept exist for other objects, such as the instances. A property defined with the default instance is visible in all instances, etc.
*Realms are not illustrated. A realm is not a table so much as an index level that exists only in the response table to allow forking of response sets (see below).
Almost all of these structural parts have separate property tables (as shown) which are discussed later. The property tables provide a key additional dimension to the self modifying nature of the survey engine. In the structure above, some tables repeat. That is because logically they appear at both parent and child levels.
Organisation regions are actually a tree with the organizations as leaves (like a file directory tree), and people exist at the database level and are attached to organizations, surveys and survey instances. In each case there is a built-in object called "default": if you don't want any organisations, use the "default" organisation; if you want only one instance, again use the "default" instance. If you want a survey that is automatically selected when the user does not specify a survey, build a survey called "default".
=====Storing Responses=====
When a response is stored it is unique to, and indexed by, organization-survey-survey instance-person-question and realm. Realms are not illustrated in the above tree diagram and are rarely used. A number of views are present which join the various tables back to the response table so that the response view can present a single point of report access (no need to join tables to see everything about a response, including the question text). Views of the response table are provided which group the responses by the various indexes and provide user and response counts appropriate to the grouping.
A Realm is a special kind of beast that allows the same instance of a survey to be forked, for example in response to a question. The default realm is '\'. In the absence of realms being used (the normal situation) all responses are stored with the assumed realm '\'. A typical example of the use of a realm is a 360-degree survey where a review of staff is completed many times for the same instance by the same person, but about different people. In this case the realm can be used to separate the responses about each person while preserving the same instance. (Note: There are other ways to do this using instances and/or properties.)
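As an illustration of the keying scheme only (not the actual table layout), the following sketch stores responses under the full unique key, with the realm forking a 360-degree review. The names and values are made up for the example:

```python
# Hypothetical sketch: responses are unique to organisation, survey, instance,
# person, question and realm, with '\' as the default realm.
DEFAULT_REALM = "\\"

responses = {}

def store_response(org, survey, instance, person, question, value,
                   realm=DEFAULT_REALM, rating=None):
    # The optional importance rating is stored alongside the response value.
    responses[(org, survey, instance, person, question, realm)] = (value, rating)

# Normal case: the realm defaults to '\'
store_response("default", "staff-review", "January", "alice", "Q1", "Agree")

# 360-degree case: the same person answers the same question in the same
# instance about different subjects - the realm forks the response set.
store_response("default", "staff-review", "January", "alice", "Q1", "Often",
               realm="about:bob")
store_response("default", "staff-review", "January", "alice", "Q1", "Rarely",
               realm="about:carol")
```

All three responses coexist because the realm component of the key differs, even though person, survey and instance are identical.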
====Input and Responses====
=====Data Types=====
Almost anything can be a valid input (response) to a question in a survey. Where there is not a built-in facility for the desired input, customisable extensions are supported.
Input types include:
*Numeric (integer and floating point) range restricted responses
*Date range restricted responses
*Date in multiple formats - text, numeric, pick box, drop list, etc.
*Selectable lists (in many formats: radio buttons, drop lists, lists, link sets, buttons, etc)
*Text - single line, multi line text, multi-line WYSIWYG editor
*File upload (any data type, any size up to 4GB)
=====Range Checks, Edit Checks and Exceptions=====
A number of methods for range and edit checks are supported. Range checks can be set to be validated on the client, or on the server. On the server, out-of-range value handling can be customised through rules.
For compliance purposes, there are optional exception values held in each question so that exception reports can be easily generated when a response falls into the exception range. The rules engine can be used to define specialised exception handling.
====The Role of Instances====
A survey must always have an instance in order to be available to a user (responder). Responses are recorded by instance of the survey. A survey can have one or more instances attached to it.
An instance can be identified by any string you like and can mean anything you like, but typically instances are things like January, February, Monday, Tuesday, Week01, Week02. Instances are grouped into instance groups like adhoc, months, years, quarters, days or weeks, etc. Thus one survey can be created and then attached to many user defined instances.
A user completes an instance of a survey. If multiple instances of a survey have been published to the user, the instances are made available in the order in which they are included in their instance group. Completing a survey instance does not automatically kick the user onto the next instance unless the auto-lock property is set to True. A user will move onto the next instance when the survey manager locks the preceding instance, unless one of the auto-locking options is chosen.
Instances allow different questions to be asked depending on a survey instance. Each question can be attached to specific instances or instance group, so you can have a question that only appears in January or only for instances that are quarters, etc.
Where you just want a single instance of a survey, and you don't want any specific instance control, just publish the survey to the "default" instance.
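The instance-level question selection described above can be sketched in a few lines. This is an illustration of the concept only; the identifiers are invented for the example:

```python
# Hypothetical sketch: a question attached to the "default" instance appears in
# every instance; otherwise it appears only where it is attached.
question_instances = {
    "Q-JAN-ONLY": {"January"},               # appears only in the January instance
    "Q-QUARTERLY": {"Q1", "Q2", "Q3", "Q4"}, # appears only in quarterly instances
    "Q-ALWAYS": {"default"},                 # appears in every instance
}

def visible_in(question, instance):
    allowed = question_instances[question]
    return "default" in allowed or instance in allowed
```

So `visible_in("Q-JAN-ONLY", "February")` is false, while a "default" question is visible everywhere.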
====People, Survey Deployment and Survey Publishing====
People can only access the system via the orgs to which they are attached. So a person must both exist in the database and be attached to an organization before they can do anything in that organisation.
Before a person can respond to a survey, the survey must be published to them AND one or more instances of the survey must also be published to them. The concept of attaching a person to a survey is called publishing, and the concept of attaching an instance of a survey to a person is called publishing the instance. An instance is a user defined, arbitrary identifier that allows the same survey to be responded to by a person uniquely on one or more occasions. Instances might be the names of months or week numbers, or years, or anything else you wish. You can have as many instances of a survey as you like.
In databases with more than one organisation an additional option arises. When a survey is created it is created in an organization; it can then be deployed to any other organizations in the database so every organisation has the same survey. The act of duplicating the same survey to multiple organisations is called deployment.
It can be deployed with or without the instances attached. Let's assume it is deployed with the instances. After deployment the survey must be published to a list of users attached to each organisation, AND each user is granted access to all or some of the instances. If the same user is attached to multiple organisations, they can then get multiple instances of the same survey to complete, but the responses are unique to each organisation and held with respect to each organisation.
When a survey is deployed, the survey questions do not have to be deployed with the survey. If they are not, the original organisation's questions are used (and the local organisation administrator cannot alter them) while the header can be customised in each organisation. This way each organisation can distribute the same survey to its users, but with customised layout and livery. Reports can then deliver database, region and organisation specific views of the survey.
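The deployment/publishing distinction above can be sketched as follows. This is an illustration only; the data structures and names are assumptions made for the example, not the real schema:

```python
# Hypothetical sketch: "deployment" copies a survey to other organisations,
# while "publishing" attaches the survey and its instances to individual
# responders within an organisation.
surveys_by_org = {"head-office": {"csa-weekly"}}
published = set()  # (org, survey, person, instance) grants

def deploy(survey, to_orgs):
    for org in to_orgs:
        surveys_by_org.setdefault(org, set()).add(survey)

def publish(org, survey, person, instances):
    for inst in instances:
        published.add((org, survey, person, inst))

def can_respond(org, survey, person, instance):
    # Both the survey and the specific instance must be published to the person.
    return (survey in surveys_by_org.get(org, set())
            and (org, survey, person, instance) in published)

deploy("csa-weekly", ["branch-a"])
publish("branch-a", "csa-weekly", "alice", ["Week01"])
```

After this, alice can respond to Week01 in branch-a, but not to an instance that has not been published to her.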
====Properties and Filters====
Almost every object (listed above) has “properties”. A property is a user (survey designer) defined storage location that can store any value less than 2000 characters long. Some properties are built-in (ie. they have reserved names with special meaning), but as there is no limit on the number of properties per object there is plenty of scope for user defined properties. Properties can be displayed by inserting a tag that matches the property name in surveys and questions.
The property tables form a cascading tree that sits alongside the users, questions, question groups, surveys and organizations. Each property has a user defined name with an instance and a value (which can also be changed by certain questions). So for the same property name a user may have a different value in different instances (eg in January versus February), and the property may have a value in a survey that is overwritten by a value in a specific question, etc. An example of such a property is the “Show Last Answer” property, which shows the response the user entered to a question last time they answered it, and which might be false for the survey as a whole, but true for a specific question. When accessed in a question text, the property for the question will take precedence over the same property defined at the survey level.
In addition to properties, surveys, users and questions can have “Filters”. The filter tables are like a lock and key. A survey or question with a filter will only display to users that have the corresponding filter in their filter list – so the same instance of a survey can be delivered to both a manager user and a general staff user and they might see different questions. Filters can also be instance specific.
Properties and filters can be applied to all instances of a survey by setting the instance value of the property to "default" which means the property or filter applies to all instances.
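The property precedence described above (question level over survey level, with "default" applying to all instances) can be sketched as a lookup. The table shapes here are illustrative assumptions, not the actual property tables:

```python
# Hypothetical sketch of property resolution: a value on the question overrides
# the same property on the survey, and a value stored under the "default"
# instance applies to every instance.
survey_props = {("Show Last Answer", "default"): "false"}
question_props = {("Q7", "Show Last Answer", "default"): "true"}

def resolve(question, name, instance):
    # Question level wins over survey level; at each level, the specific
    # instance wins over the "default" instance.
    for inst in (instance, "default"):
        if (question, name, inst) in question_props:
            return question_props[(question, name, inst)]
    for inst in (instance, "default"):
        if (name, inst) in survey_props:
            return survey_props[(name, inst)]
    return None
```

So "Show Last Answer" resolves to true for Q7 (the question overrides the survey) but false for every other question (falling back to the survey-level value).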
====Manual Publishing Versus Auto Publishing ====
Generally the concept is that unless you define a list of responders and publish your survey to that list, nobody will be able to access and respond to any instance of the survey. There are situations, however, where you do not know who will respond. Sometimes the database does not even know them yet; at other times they are known and members of your organisation, but you do not know that they will be responding to the survey ahead of time. In these cases you need a way for the survey to automatically publish itself to them when they try to access it, so they can answer it.
A survey, once created, can be set to AutoPublish itself (by setting the organization's or the survey’s AutoPublish property to true). In this case the survey does not need to be pre-published to a user, but will be automatically published to the user when they first attempt to respond to it. Similarly there are “autos” for other things like creating users, creating anonymous users, creating automated instances of a survey for pre-existing users, etc – and all sorts of combinations of these things.
===Question Display Selection===
====Introduction====
Questions displayed in a survey are selected through a number of mechanisms, some of which we have already discussed. All mechanisms operate concurrently:
*The question must belong to the current survey
*The user must have the relevant survey instance available to them
*The question must belong to the current instance group (or the "default" instance)
*The question must belong to one of the current question groups being displayed on the current page (think: page number)
*The survey filter (if defined) must match one of the user's filters.
*The question filter (if defined) must match one of the user's filters.
*The organisation, survey, questiongroup, question properties must not include a property that hides the question (eg "Invisible")
*The Rules Script (if defined) must have selected the question for display. (See "The Rules Engine" below).
There are a few other ways that question content may be hidden - such as using content from a property in the question text that is not available to the current survey instance.
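The selection criteria listed above can be combined into a single predicate. This is an illustrative sketch only; the field names are our assumptions, not the real schema, and the rules-script check is omitted:

```python
# Hypothetical sketch: a question is displayed only if every check passes.
def question_visible(q, user, survey, page_groups):
    checks = [
        q["survey"] == survey["name"],                     # belongs to this survey
        survey["instance"] in user["instances"],           # instance published to user
        q["instance_group"] in ("default", survey["instance_group"]),
        q["group"] in page_groups,                         # on the current page
        not survey.get("filter") or survey["filter"] in user["filters"],
        not q.get("filter") or q["filter"] in user["filters"],
        not q.get("props", {}).get("Invisible"),           # no hiding property
    ]
    return all(checks)

q = {"survey": "csa", "instance_group": "default", "group": "page-1",
     "filter": "managers", "props": {}}
survey = {"name": "csa", "instance": "Week01", "instance_group": "weeks"}
manager = {"instances": {"Week01"}, "filters": {"managers"}}
staff = {"instances": {"Week01"}, "filters": {"staff"}}
```

With this data the question is shown to the manager but hidden from general staff, illustrating the lock-and-key behaviour of filters within the wider selection process.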
====The Rules Engine====
We won’t go into the rules engine part of the survey manager yet, except to note that it can analyse the responses received per question for the current or any other question in this or any other survey in this or any other organisation, that it has a natural language/pattern matching parser in it, and that the rules engine can interact with plug-in libraries at the backend to send and receive responses to other systems. Any number of rules can be defined on a per-question basis, and the effect of executing the rules can be to modify the response, update another system, and decide which questions are displayed on the next "page" a user sees. The rules work on the responses received in the current survey and other surveys.
Lastly, because of the property structure, filters, question level instances, question level exceptions and variety of input/response question types and other capabilities, you can get a dynamically structured survey running easily without ever actually writing a rule. So rules are completely optional.
==Interfacing and Distribution==
===Distribution===
The SurveyEngine is designed as a distributed database, so it can talk with other SurveyEngines. In fact the desktop client works by using the distribution capability of the survey engine. You therefore can have a test database on a PC in which you design surveys and then distribute the designed and tested survey to one or more publishing servers, and then use the distribution mechanism to retrieve the results, publish the surveys and update the surveys.
===Interfacing===
The survey engine itself has an API that can be called to perform a large number of functions, but further the engine supports a plugin API definition that is accessible via commands in the rules engine script that allows a library that matches the API to be dynamically loaded and accessed. The plugin architecture allows response data to be passed to the dynamically loaded library and results retrieved from it. Depending on the rules engine command used, the returned values may be written to the response table or simply tested against some value, and decisions about what questions to display on the next page made thereon.
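The plugin idea can be sketched as a simple registry. This is purely illustrative; the registry and function names below are invented for the example and are not the real plugin extension API:

```python
# Hypothetical sketch: the rules engine hands a response to a dynamically
# loaded library and can either write the returned value back to the response
# table or merely test it against some value.
plugins = {}

def register(name, fn):
    plugins[name] = fn

def call_plugin(name, response):
    return plugins[name](response)

# A trivial plugin that normalises a response before it is stored.
register("normalise", lambda r: r.strip().lower())
```

A rules-engine command would then invoke the named plugin with the response and decide, from the returned value, what to store or which questions to display next.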
==Survey Layout==
When we talk about a survey we can mean both what you would expect as a survey, and also just about any kind of web page you can dream up – whether it is notionally a survey or a blog, or even a menu screen that simply allows the user to select a survey from a list of surveys they wish to answer. One layout method for a survey actually allows you to lay up a report style layout and select the appropriate word in a sentence such as “I do/do not think this sounds simple.”
The default layout is a simple question-response table layout. There are also a number of built-in layouts that arrange the questions and responses in either a table, a grid or a custom format. Multiple layouts can be used in the one survey, and indeed on the same display page.
There is also support for a fixed survey that is essentially an MS Word document saved as an input form with the inputs tied into the survey engine response tables, but you lose a lot of the capability of the survey engine in this form (as the layout is fixed by the Word document format), so it is not encouraged.
A particularly interesting layout is what we call the "management report" layout. In this form the question text is a statement with selectable words/values/etc embedded in the text - rather than a question. You embed the response part of the question in the statement using special tags, and the survey appears to the user as a series of paragraphs, each with multiple statements and each statement with one or more selectable responses that effectively construct a sentence. Since a survey question can contain both responses and response analysis from other surveys, it is possible to essentially template a management report by constructing a survey in this way: the survey is the report, presenting the results of other surveys in the survey text and inviting the user to "complete the sentence" or "cross out the word not applicable".
==Reporting==
===Standard Reports===
You do not have to know anything about the database to get a report. Every survey automatically has a number of reports and groupings available without you doing anything. The reports use the survey layout to deliver their output. These reports are:
*Individual responses by question and person
*Response count by question
*Responses by question
*Responder's name by question
*Count breakdown of responses by question
*Percentage breakdown of responses by question
*Percentage breakdown pie-chart by question
There are a number of predefined views that feed these reports and provide various groupings by survey:
*By user
*By Organisation
*By Region
*By Database
===Special Reports===
====Individual Question Reports====
In addition to the standard reports available for the survey as a whole, each type of report can be extracted for an individual question in any survey in an organisation by embedding special report tags into the content of a question text, or a script command in the rules engine.
====Data dump====
In the event that a user wishes to extract the responses from the engine for analysis in another system, the data can be extracted using one of the views into a CSV file.
==Archiving==
A timestamped archive table is available so that responses can be shifted out of the main response table into an archive.
==User Access Control==
User IDs are shared across all organisations in the database, but a user must have been granted access to an organisation before they can access anything in that organisation. Further, before a user can answer a survey, they must have been granted access to both the survey and at least one instance of the survey.
Users have rights. The user rights are defined in terms of roles. A user has a global role and an organisation role. So a user may be a survey administrator in an organisation but only a responder at the global level.
The default built-in rights are:
*Super administrator - database level administration rights
*Region coordinator - rights to administer a group of organisations
*Survey administrator - organisation level administration rights - including the right to create a survey.
*Data entry - survey specific rights to enter data in surveys on behalf of responders
*User - rights to enter data in a survey to which they have been granted access.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
672f559ee9d9ce8420e0ef50e6e56780ae0708f1
BPC SurveyManager - Creating Surveys - Layout and Markup Tags
0
423
618
2019-09-10T15:35:53Z
Bishopj
1
Created page with "=The BPC SurveyManager SMHTML TAG Library= ==Introduction== If you are using one of the advanced BPC survey manager clients, such as BPc RiskManager or BPC SMWebuilder, you..."
wikitext
text/x-wiki
=The BPC SurveyManager SMHTML TAG Library=
==Introduction==
If you are using one of the advanced BPC survey manager clients, such as BPC RiskManager or BPC SMWebuilder, you can create a survey from scratch without ever using a tag. This is because the survey manager engine and its clients have default values and common tag markup stored in them, which are automatically used when you do not define anything yourself. If, however, you are going to customise an autogenerated survey, create a non-standard reminder (survey invitation), perform advanced question layout, or really harness the power of BPC SurveyManager, you are going to need an understanding of the BPC SurveyManager tags and properties. This section provides a summary of the BPC SurveyManager tags.
BPC SurveyManager uses a large range of custom tags of the form:
<pre>
<#tagname tagproperties >
Where:
tagname = One of the tags listed on this page.
tagproperties = { tagproperty { tagproperties }}
tagproperty = tagpropertyname "=" tagpropertyvalue
tagpropertyname = A string of alpha-numeric characters without punctuation or spaces
tagpropertyvalue = A string of the form described for the appropriate tagname and tagpropertyname on this page.
</pre>
For example:
<#surveybody SID=MySurveyID >
represents the surveybody tag with the "SID" tagproperty assigned the tagpropertyvalue "MySurveyID".
SYNTAX NOTE:
In the grammar above, yyy = { xxx { yyy } } reads: yyy is replaced by nothing, or by xxx optionally followed by another yyy, which in turn is replaced by nothing, or by xxx optionally followed by another yyy, and so on.
So for example:
<#surveybody SID=MySurveyID OID=MyOrgID >
Is also a valid tag, which represents the surveybody tag with the "SID" tagproperty assigned the tagpropertyvalue "MySurveyID" and the "OID" tagproperty assigned the tagpropertyvalue "MyOrgID".
BPC SurveyManager tags can be inserted in the text of the survey layout field, the reminder layout field, and the question and question HTMLLayout fields, mixed in with normal DHTML tags. The BPC SurveyManager tags are processed and expanded by the SurveyManager engine before a page is served to a web browser, and cause the insertion of data in various forms. If a tag is not recognised it is ignored. Just as HTML extended with the tags and syntax of Dynamic HTML is called DHTML, we call the extensions made to DHTML by the BPC SurveyManager tags "SMHTML".
There are three places where BPC SurveyManager tags can be used:
# The Survey Layout. This is stored in the survey header and determines the overall appearance of each survey page.
# The Survey Reminder/Invitation(s). These are stored in a table attached to the survey header and are emailed to respondents when required. Any number of invitations can be created; they are used sequentially and automatically until the responder commences the survey.
# The Survey Question(s). The survey body is made up of lines (called Questions). Each "line" in the body of the survey that is not part of a header or a footer is stored in the Question table. Although we call them questions for historic reasons, a question can in fact be a section heading, a normal heading, a block of text, a frame into another web page, a picture, or even a question! Every one of these lines can have its own markup string comprising text and layout tags. Questions have two fields that take tags, one is the question text and the other is the question layout. The layout field is treated as a wrapper for the question text field - but both can have HTML and SMHTML.
In BPC SurveyManager you can use DHTML anywhere you can use the special BPC SurveyManager tags (SMHTML). The BPC SurveyManager tags are therefore just an extension to normal DHTML. An SMHTML page can be read by any standard browser, but the SMHTML tags will only be properly expanded and replaced if the page is served to the browser by the BPC SurveyManager engine.
==Survey Layout Tags==
===Survey Tags (Survey Header Layout Field)===
Each survey header record stores a layout field which represents the envelope of every page. If you do not define the layout a default layout is automatically created and stored when you create a new survey. The description of the default layout is a configurable item for the database. In its simplest form the layout field could be:
<pre>
<HTML>
<HEAD>
<#JVScriptLib1>
</HEAD>
<BODY id="pagecontainer" >
<#SurveyName >
<br>
<#surveybody >
</BODY>
</HTML>
</pre>
Now this would be a pretty plain page. The default blue web page layout of version 5 and above is achieved with this layout:
<pre>
<HTML>
<HEAD>
<#JVScriptLib1><#CSSSheet ><#csslib1 >
<style type="text/css">
body {
scrollbar-3d-light-color : #AF75EA;
scrollbar-arrow-color : #0033cc;
scrollbar-base-color : #9999CC;
scrollbar-dark-shadow-color : #000000;
scrollbar-face-color : #9999CC;
scrollbar-highlight-color : #ffffff;
scrollbar-shadow-color : #1B0037;
background-color: #CCCCFF;
}
.style22 {color: #FFFFFF; font-weight: bold; font-size: 14px; font-family: Arial, Helvetica, sans-serif; }
.style24 {color: darkblue; font-weight: bold; font-size: 40px; font-family: Arial, Helvetica, sans-serif; }
</style>
</HEAD>
<BODY id="pagecontainer" >
<table width=100% >
<tr><td>
<table>
<tr>
<td></td>
</tr>
</table>
</td>
<td align=center >
<br >
<span class=style24 ><#SurveyName></span >
</td>
<td align=right ><#fpropimage FPROP="SIDLogo" width="110" height="120" >
</td>
</tr>
</table>
<hr>
<BPCDEBUG >
<table width=100%>
<tr ><td ><font color="red" ><i><#errormessage ></i ></font ></td >
</tr >
</table >
<span class="SurveyIntro" ><#SurveyIntro ></span >
<#SurveyBody >
<hr>
<p><span class="PageProgress"><#PageProgress ></span></p>
<p><input type=button name="cmdPring" value="Print Page" onClick="PrintMePage('pagecontainer')" ></p>
<p align=center ><i>Note: You may use your back or forward browser buttons and reenter information. - Just remember to press the continue button at the end of each page for which you have changed your response or the changes will not be recorded.</i></p>
<span class="SurveyHelp" ><#SurveyHelp></span ><#HelpLink ><br>
<span class="SurveyPrivacy" ><#SurveyPrivacy></span ><#PrivacyLink ><br >
<BPCDEBUG >
</BODY>
</HTML>
</pre>
The tags here are a mixture of predefined SMHTML tags and SMHTML tags that reference property values of the survey (see BPC SurveyManager Properties for a list of the predefined properties), so not everything that looks like an SMHTML tag is going to be in the list of built-in tags that follows below. An important observation to make here is that the SMHTML language is inherently extensible. If an SMHTML tag is not recognised as a built-in "reserved" tag name, the property lists are automatically searched. Depending on which part of the survey the engine is processing at the time of the tag expansion, the tag could be replaced by the value of a property held by a person, a question, a question group, a survey instance, a survey, an organisation or the database (in that order). If the tag is not found in either the built-in or cascading property lists, it is simply ignored, so it is entirely safe to use a layout page with tags for which you have no definition.
Since you can design a survey to include questions whose responses are capable of modifying a property, you can make the layout and content of the current survey (or in fact another survey) change via the properties and the tags that match those property names. Obviously, this is very advanced BPC SurveyManager design.
Now, an obvious question is: "Where is the actual survey?" The survey questions for each page are all inserted where the <#surveybody > tag appears.
These tags, in addition to normal DHTML, are allowed in a survey layout field.
*<#surveybody > - Insert the main survey body here. Virtually all survey layouts will have one of these.
*<#surveybody SID QGroup QScript *SurveyBodyStandardTags* > - surveybody with (at least one of..) defined SID source, Question group name, Question selector script (see below) and (optionally ) SurveyBodyStandardTags - see below
*<#sitelogo ALTMESS *imagetags* > - Retrieve the site logo, using ALTMESS message if the image is not found and include all standard HTML image tags.
*<#orglogo ALTMESS *imagetags* > - Retrieve the image held in the current org folder from OrgLogo.jpg, using ALTMESS message if the image is not found and include all standard HTML image tags.
*<#fimage FSRC OID ALTMESS *imagetags* > - Retrieve an image from the folder FSRC held by OID (default - current org id), using ALTMESS message if the image is not found and include all standard HTML image tags.
*<#scriptlib1 > - insert the old disk addressed javascript library (DEPRECATED - DO NOT USE)
*<#jvscriptlib1 > - insert the standard SM javascript library
*<#csslib1 > - insert the standard CSS library
*<#csslib2 URL FSRC > - insert a user defined CSS sheet specified in URL or if URL is not defined use a folder referenced in FSRC or if FSRC is not defined, look for a property called csslib2 for the current user/survey/org and use the url defined in that.
*<#csslibprint URL FSRC > - insert a user defined print only CSS sheet specified in URL or if URL is not defined use a folder referenced in FSRC or if FSRC is not defined, look for a property called csslibprint for the current user/survey/org and use the url defined in that. The style sheet will only be used if "PrintMeValue=True"(ie. a printable survey version has been requested.)
*<#userscriptlib LANG URL FSRC OID PID > - User defined LANG (default javascript) script library from URL 'source' (normal web reference) or FSRC (SM Folder reference) and OrgID OID (default - current orgid) and PID (default - current PID)
*<#errormessage > - The place where page level error messages appear on the page. There are property settings to make error messages appear near the question where a question level error has occurred.
*<#bpcdebug > - Insert debug data flag. When used this tag is generally used twice in the layout. Once before the surveybody tag and once after. Different information is displayed in each case.
*<#orgdescription > - Insert the orgname
*<#surveyhead OID URL SRC From=Disk NOTAG=True > - Insert the tag expanded header text stream or non tag expanded (if NOTAG=True) contained in the file located at the URL/SRC from the SMFolder for OID (or current org, or default org, if not found in either) OR from disk (if From=Disk)
*<#surveyfoot OID URL SRC From=Disk NOTAG=True > - Insert the tag expanded footer text stream or non tag expanded (if NOTAG=True) contained in the file located at the URL/SRC from the SMFolder for OID (or current org, or default org, if not found in either) OR from disk (if From=Disk)
*<#moduleid > - Return the module ID. This is the current SurveyManager library name. This would normally be used in constructing a clickable link.
*<#insfile OID URL SRC From=Disk NOTAG=True > - Insert the tag expanded text stream or non tag expanded (if NOTAG=True) contained in the file located at the URL/SRC from the SMFolder for OID (or current org, or default org, if not found in either) OR from disk (if From=Disk)
*<#hfile OID URL TXT PID > - anchored link <a....>txt</a> referencing an smfolder file found in OID (or the current OrgID if OID is not defined, or 'default' OrgID if not present in the current OrgID)
*<#srcfile OID URL PID > - Create and insert the string SRC="xxx" using smfolder reference found in OID (or the current OrgID if OID is not defined, or 'default' OrgID if not present in the current OrgID) and PID (default - current PID). Used for creating HTML references for iframes, or images etc that reference the smfolder.
*<#file OID URL PID > - straight printed url reference for an smfolder reference found in OID (or the current OrgID if OID is not defined, or 'default' OrgID if not present in the current OrgID) and PID (default - current PID).
*<#fsidimage OID FSRC ALTMESS *imagetags* > - Insert an image assuming the folder root is the SID and FSRC contains the logo path(excluding the SID root folder portion) and Insert '' if FSRC param is empty, include any other standard HTML display tags listed, display ALTMESS message if the image is not found
*<#fpropimage OID FPROP ALTMESS *imagetags* > - Insert an image assuming the FSRC for the logo path is held in a property as named in FPROP or if FPROP='' look for a property called SIDLogo instead and Insert '' if nothing is found, include any other standard HTML display tags listed, display ALTMESS message if the image is not found.
*<#qry name field> - Insert the value of any current dataset name / field value. Current means the current active record as determined by the org (OID), survey (SID), person (PID), instance (IID) index settings. The NOQRY property can be used to disable the use of this facility for a person, survey, organisation, database by setting it to "True".
*<#smq SID QUES QRL NoGen AsHeading *tagparams* > - Manual insertion of a question response field into the page and question list for the page. SID is the survey ID; if not defined, the current survey is used. QUES is the fake QID; if not defined, the QRL is used. A QRL is a question resource locator (see Question Resource Locator). If NoGen is NOT defined, an associated input control will be generated using all the remaining *tagparams*. If AsHeading=True the input control will be inserted within its own table. SMQ is typically used where the survey form is not generated by SurveyManager but loaded into the SurveyManager layout field in its entirety (or referenced there). Essentially this allows a question that does not exist in the current survey to be manually created and inserted into the page and response stream as if it were a pre-defined question. Any response received will be stored in the response table (but will not be able to be retrieved with question text - obviously), unless that text has been separately created in the question pool with the same QUES id supplied in the smq tag. The NoGen param is also a property of the current survey instance and can be used to disable the insertion of the associated input controls by setting it to "True", although the question will still be constructed in the question list and stored as a response. The classic use of this facility is in a survey that is an MS Word document, where the question text is stored as a block in the survey layout field and only exists as a response as far as the survey engine is concerned. Blocking the control generation is essentially saying that the input control for this response is already in the layout document. In this case the input control in the document must have the ID and Name set to the same value as "ques".
*<#smformstart DBI SID SIDO QGRPO OID QOID PID RPID RK IID RID RIDO PostingType PostingMethod PostingAction PrintMeValue EOSAction PageTimeStamp > - Manual insertion of the surveymanager form start. This is normally constructed automatically by the engine, but can be manually created using this tag in a survey layout. If tag params are not defined the defaults will be applied. Generally you would need at least the OID and SID, and possibly the PID/RK depending on from where you were invoking the survey. This is advanced usage, and if you do not already understand the params you should not be using this tag. It must be paired with an <#smformend > tag.
*<#smformend NoContinue ContinueButton > - The closing tag of a manually inserted smformstart. The NoContinue ContinueButton tag params are also properties of the current survey instance and if not defined the property values will be used instead. See properties.
*<#news PID OID ITM width height > - Survey manager has a built in news engine. This tag will cause a news item to be displayed in a scrollable frame. The ITM param may be "any", "hdln" or "all". If not defined, "any" is used. The "any" option causes a randomly selected news item to be displayed with each page refresh. "hdln" causes all the news headlines to be listed, while "all" causes all headlines and content to be listed.
*<#surveyintro > - Display the content of the person, survey or organisation's "surveyintro" property. Will ONLY display on the first page of the survey, otherwise ignored.
*<#privacylink > - Display the content of the person, survey or organisation's "privacylink" property on every page, in a link tag with the class set to "privacylink".
*<#helplink > - Display the content of the survey or organisation's "helplink" property, or if not defined generate a clickable link using the survey owner email address. The link is generated as a "mailto" reference providing a clickable emailing link, in a link tag with the class set to "SurveyContactText". If the helplink property is defined, the text is used exactly as stored.
*<#xhlp name=prop|help item=rowval tprm="border=1" rprm hprm cprm cstrt cend htmlsafe=true hdng=true htmlnl=true > - Display the content of the prop (Properties Help) or help (general help) tables in a table format with hdng (headings), htmlnl (CRLF converted to HTML new line) and htmlsafe (HTML characters converted to safe HTML characters). item optionally selects only a specific row based on the string in the first column (omitting it will deliver the entire table), and tprm is the standard HTML attributes for a table tag eg: "border=1". The other fields are similar to tprm but for the other parts of a table and can be ignored: rprm (row params), hprm (heading cell params), cprm (normal cell params), cstrt (cell start tag string eg: <nowiki><b></nowiki>), cend (cell end tag string eg: <nowiki></b></nowiki>). The cell start and end are not applied to heading cells.
*<#pageprogress > - Attempt to calculate and display the survey progress based on the current page. This is subject to the number of "pages" on a single page (remember quesgroups are not necessarily a page at a time), and the rules in the rules engine, so it is not necessarily correct. Another way to achieve this is to define an info_op (heading/text only display question) in each question group that displays the progress, or use the "pagemessage" tag.
*<#pagemessage > - Look up the current ques group (last on page if after surveybody, or first on page if before surveybody) for a property called "pagemessage" containing an end of page message - eg "You have 5 minutes left..". Intended normally as a ques group progress message, but anything could be put here.
*<#..survey field..> - survey table fieldname. Survey field names are searched before (and therefore take precedence over) property names.
*<#..property..> - survey property reference. See Property Help
SYNTAX NOTE:
In the list above and elsewhere, <#..xxxx..> means ..xxxx.. represents an alphanumeric string like "SurveyName" (without the quotes).
In the list above and elsewhere the tagpropertyname (eg. OID, URL, PID, etc) means OID=xxxx URL=yyyy PID=pppp, etc. The TagPropertyName is listed without the "=xxxx" part for clarity and simplicity.
====Tag SurveyBody Tag Property Notes====
=====TagProperty QScript: Question selector script=====
The QScript tagproperty is a property of the surveybody tag. It defines the script used to load the question list for a survey page. The survey engine works by loading an internal question list as it processes each page. The initial survey page is loaded by reading the question script (provided in either the QScript tagproperty or, if that is empty, the question script field of the survey header); if that is also empty, the default page size field of the header is used. The script is then executed and the result is a list of questions for the next page. That list is then passed to the page assembler, which loads and preprocesses each question in the list. On subsequent pages the question list is first assembled from the rules attached to each question of the previously posted question list; if that is empty, the QScript tagproperty is used; if that is empty, the question script field in the header; and if that is empty, the default page size defined in the survey header.
The QScript tagproperty uses the same script syntax as the QScript survey header field.
Format: QScript="value" where value is:
*@(.[QuesGroupName]) - List all questions with this question group id
*@(.[*]) - List the questions in the question group (alphabetically) after the last question. QuestionGroups are sorted alphabetically.
*@(.[. for num]) - List the next 'num' number of questions from the current question (the last question served to the browser - questions are sorted numerically by the order field, then alphabetically by the QID field)
*@([!P]) - get the string from a property of the person called SID + '_SCRIPT'
*@([!]) - get the string from a property with the same name as the SID
*[!D] - get the string from the default script for the survey (WARNING: Do not use this in a surveyheader default script field)
Note that in some circumstances the following would also work:
*@(SID.[*]) - List the questions in the question group after the last question for survey SID
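For example, a hypothetical surveybody tag combining a QScript with an invented survey ID and question group name (following the grammar above) might be:
<pre>
<#surveybody SID=MySurveyID QScript="@(.[Page01])" >
</pre>
This would load every question in the question group "Page01" of survey "MySurveyID" onto the current page, taking precedence over the question script field in the survey header.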
=====SurveyBodyStandardTags (May be left undefined)=====
Format: SBStandardTag="value" where SBStandardTag is:
*SIDO= Return to this SID, at end of this survey
*QGRPO= Return to this Question group, at end of this survey
*OID= Organisation Identifier
*QOID= Return to this OrgId, at end of this survey
*PID= Person identifier
*PIDO= Return to this Person identifier
*RK= Person identifier matching key
*RKO = Return to this key, at end of this survey
*IID= Instance identifier
*IIDO = Return to this Instance, at end of this survey
*REALMID = The Realm ID (Survey Fork). In addition to instances, survey responses can be distinguished for the same instance by a Realm value (default="\"). It is generally wiser to leave this undefined unless you really understand how realms work. The classic use is in a 360-degree survey.
*REALMIDO = The return Realm ID.
*DEM = Data entry mode: online (default), mail, phone
*PageTimeStamp = PageTimeStamp (default is 'now')
*PrintMeValue = 'True' if a printable version of the survey is requested.
*EOSAction = The posting URL for the form's action - allows a BPC SurveyManager form to be posted to any web application (URL)
*AllOnOne = Script Action to perform at survey end
=====SpecialSurveyBodyStandardTags (May be left undefined)=====
Format: SBStandardTag="value" where SBStandard is:
*PostingType= Either 'multipart' or any valid HTML ENCTYPE (Default - look up the 'PostType' property for the current SID, else ignored)
*PostingMethod= Either 'post' or 'get' (Default - A. look up 'PostMethod' property for current SID, else B. 'post' )
*PostingAction= Either a valid HTML form action (URL), or if blank then if Posting Type is blank, lookup the 'PostAction' property for current SID, else assume the URL of the current survey.
*DBID= Database identifier. Very rarely, if ever, used. This allows chaining of BPC SurveyManager databases and is for extremely advanced use. Contact BPC directly for instructions on how to use this tagproperty. Primarily intended for sensor system integration and complex distributed database applications. Requires additional configuration within the database to operate.
=====SurveyBodyTags that are not allowed as TagProperties=====
*QuesList = List of questionIDs on this page
*LastQuesID = Last Question on this page
*LastQuesGrp = Last Question Group on this page
*FirstQuesID = First Question on this page
*smFNField
===Word Document Survey Specific Tags===
Although BPC SurveyManager is primarily designed to be used where surveys are dynamically generated by the engine, it is also possible to use it where the survey is created in an MS Word document and saved as HTML. In this case you lose the dynamic question selection capabilities of survey manager, but not the ability to save the responses into the BPC SurveyManager database. There are some special tags designed to link the Word format document back into the survey manager question database.
The following survey tags are ONLY used with HTML survey pages where the question page is NOT generated by SurveyManager (this allows a Word document saved in HTML format to be quickly converted into a survey).
In the external HTML editor (eg MS Word), the tags should be written as: [#...#] where "..." is smformstart, smq or smformend. Question lists, and first and last question markers, are automatically generated. Rules are allowed in these surveys, but the questions they generate are only inserted if a <#surveybody > tag is included; otherwise questions generated by rules will be ignored. The [# ... #] tag format is automatically converted by the BPC SurveyManager Desktop document importer into BPC SurveyManager tags <#...>, so obviously you *must* use the BPC SurveyManager Desktop client to create these types of surveys.
*<#smformstart SID *SurveyBodyStandardTags* *SpecialSurveyBodyStandardTags* > - Insert an smform header. Use ONLY with HTML pages NOT generated dynamically in survey manager (Also requires *<#smq > and <#smformend > tags ) .
*<#smq sid ques qrl NoGen=True AsHeading=True *QuesStandardProperties* > - Insert a question & response control from SID & QUES (OR QRL - of the form sid.quesscript or sid.ques). If 'NoGen=True do NOT generate the control, just record the quesID for page control. If AsHeading=True insert the quesgroup heading (use the TH property format). (Also requires <#smformstart > and <#smformend > tags ) .
*<#smformend NoContinue ContinueButton > - Insert the form end marker excluding a Continue button (if NoContinue=True) or using a Continue button of the form defined in ContinueButton, or, if blank use the values set for the SID properties of that name. (Also requires <#smq > and <#smformstart > tags ) .
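A minimal sketch of a Word-sourced survey body, using invented survey and question IDs, might look like this before import by the BPC SurveyManager Desktop client:
<pre>
[#smformstart SID=MySurveyID#]
...the body of the document, with each input control's ID and Name set to its question ID...
[#smq SID=MySurveyID QUES=Q01 NoGen=True#]
[#smformend#]
</pre>
Here NoGen=True tells the engine the input control for Q01 already exists in the document body, so only the question/response bookkeeping is performed.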
===Question Layout Tags (QuesHTML Field)===
In addition to normal DHTML tags, these Tags are allowed in a QuestTable QuesHTML Layout field. They combine with the Question Tags to display HTML features of the question/info display
*<#question > - Insert the text of the question here (i.e. the Question field of the current row of the surveyquestion table)
*<#..questable field..> - Insert the value of an arbitrary field of the current row of the question table
===Question Tags (Question Field)===
These Tags are allowed in a QuestTable Question Field. They control features of the question/info part of the question/response display
*<#answer QRL SID QID PID IID VAL=val/text/value/type/gid/id > - Embed a question report into the current question text. Refer to [[BPC SurveyManager - The Built In Reports#Answer Tag Definition and Syntax|Answer Tag Definition and Syntax]] for a detailed explanation, syntax and examples of the Answer tag.
*<#jsmbutton MOHint=True/False MOHintTXT="My Hint" JMPTXT="Click Here" JMPACTN="MySID&QGRPO=Page01" > - Insert a button that can jump to an SM survey and return to this survey at end.
*<#input > - Automatically embeds the input response control into the question text (rather than the normal handling of placing it into a separate column). Ideal for report or unstructured surveys. List/selector inputs are better set as drop lists rather than radio buttons in this style of survey.
*<#optype > - identical to the <#input > tag.
*<#user > - User name
*<#ruser RPID > - Normally the same as the user name, except where the survey is being completed by a "reviewer", in which case it is the reviewer's name. This kind of survey is one where two or more people will progressively complete the survey, possibly seeing different questions. An example would be where a survey is initially completed by a board member and their answers reviewed by legal counsel and comments added. If the RPID attribute value is not assigned, the current RPID for the survey will be used (the normal situation).
*<#orgdescription > - orgname
*<#qry name field > - any current dataset name / field value
*<#result name field > - same as <#qry >.
*<#rmndr OID SID RNUM XPND PLAIN > - insert the text of a reminder with LF mapped to <br >, or PLAIN (if =true) (populate if XPND=True ..Not implemented yet); defaults to the current SID, OID & 1
*<#insfile OID URL > - inserted file as text stream
*<#hfile OID URL TXT > - anchored link <a....>txt</a>
*<#srcfile OID URL > - inserts the string SRC="xxx" using smfolder reference
*<#file OID URL> - straight printed url reference
*<#hint > - insert a question button that causes the value defined in the questions hint property to be displayed when clicked.
*<#exe plg="myplugin" action="LookUpContact" param="a,b,c" > - Insert the result (return value) of a call to a plugin with the action (command in the plugin dll) and the param (parameter list). Params should be valid rule parameters for the rules scripting language.
*<#pexe PNAME > - Similar to <#exe >. Insert the result of a call to a plugin defined in a property of the user/ques/quesgroup/survey/org. If PNAME is omitted the property should be called 'pexe' and contain a string of the form 'exe( "myplugin", "myAction", a,b,c ). If PNAME has a value, that value is the name of a property containing the text eg: 'exe( "myplugin", "myAction", a,b,c )'
*<#..property..> - person/question/questiongroup/survey/organisation/database property
==Email tags==
Reminders are email messages. A reminder is parsed by the survey engine prior to sending and can also contain tags. If the email format is HTML then DHTML tags may be used as well as these. Whether the reminder format is text or HTML, the following additional special tags are allowed in an email reminder letter:
*<#surveyname OID POID SID PID XTRA > - survey visible address (see below)
*<#surveyhlink TXT OID POID SID PID XTRA > - survey anchored link <a....>txt</a> (with overriding fields; POID replaces OID with a property value, XTRA allows any other tag string to be attached)
*<#surveytitle > - survey title
*<#insfile OID URL > - inserted file as text stream
*<#insfilewexp OID URL > - insert file as text stream then parse as an html message (ie recurse)
*<#hfile OID URL TXT > - anchored link <a....>txt</a>
*<#file OID URL> - straight printed url reference
*<#answer QRL OID SID PID QID IID > - insert the value of a survey question response
*<#owner > - survey owner
*<#owneremail > - survey owneremail
*<#sender > - survey sender name
*<#senderpid > - survey sender pid
*<#senderemail > - survey senderemail
*<#addressee > - survey target PID name
*<#pid PID > - survey responder id
*<#pwd PID PWD OPT=P,D, > - survey responder password
*<#..property..> - person or survey property
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
e526066eb0a5cf903d9d8854d2851aae1baaa44d
BPC SurveyManager - Creating Surveys - The Page Script
0
424
619
2019-09-10T15:37:16Z
Bishopj
1
Created page with "=Introduction - Purpose and Usage= Pages in BPC SurveyManager are not pre-stored the content of pages are dynamically selected and displayed according to the default questio..."
wikitext
text/x-wiki
=Introduction - Purpose and Usage=
Pages in BPC SurveyManager are not pre-stored; the content of each page is dynamically selected and displayed according to the default question script or question-level rules scripts. Questions to be displayed on a page are selected and loaded into an internal Question List using (in order):
# The rules stored with the question of the previous page submitted to the survey engine, if no questions selected (or no previous page)
# A QScript tag value passed to the survey in the initial survey call, if no tag value
# A QScript property held in the survey property list, if no property value
# A property for the current user named as SID + _SCRIPT, if no property value
# The "default page script" which is a field stored in the survey header, if no script defined
# An auto generated script that returns the default number of questions in a page as defined in the survey header, or if no default page value
# The next single question in the question list after the last one answered.
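The fallback chain above can be sketched as a simple resolution function. This is a hedged illustration only: the dict keys and the final one-question script representation are hypothetical names standing in for engine state, not SurveyManager API.

```python
def resolve_question_script(page):
    """Return the first non-empty question-script source, in priority order.

    `page` is a hypothetical dict standing in for the survey engine's state;
    the keys mirror the fallback list above and are not real SurveyManager names.
    """
    sources = [
        page.get("previous_page_rules"),   # 1. rules of the previously posted page
        page.get("qscript_tag"),           # 2. QScript tag in the initial survey call
        page.get("qscript_property"),      # 3. QScript property of the survey
        page.get("user_sid_script"),       # 4. user property named SID + "_SCRIPT"
        page.get("default_page_script"),   # 5. default page script in the survey header
        page.get("auto_page_script"),      # 6. auto-generated script from default page size
    ]
    for script in sources:
        if script:
            return script
    # 7. final fallback: the next single question (represented here as a script)
    return "@(.[. for 1])"


# Example: only the survey-level property is set, so it wins.
print(resolve_question_script({"qscript_property": "@(.[*])"}))  # → @(.[*])
```

The point of the sketch is that each source is consulted only when every source above it is empty.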
The QScript tagproperty is a property of the surveybody tag. It defines the script to be used to load the question list for a survey page. The QScript tagproperty (QScript="value") uses the same script syntax as the QScript survey header field, and properties that store QScripts.
When you script the rules engine you define the list of questions using the QScript syntax explained here.
=The QScript Engine=
The survey engine works by loading an internal question list as it processes each page. The initial survey page is loaded by reading the question script (provided in either a QScript tagproperty or, if that is empty, question script field of the survey header, and if that is empty the default page size field of the header is used). The script is then executed and the result is a list of questions for the next page. That list is then passed to the page assembler which loads and preprocesses each question in the list. On subsequent pages the question list is first assembled from the rules attached to each question of the previously posted question list, and if empty, then the QScript tagproperty and if that is empty the question script field in the header, and if that is empty, the default page size defined in the survey header is used.
=The QScript Syntax=
A QScript is a list of display scripts, separated by commas, and is of the form:
command(ssss,ssss,...), command(ssss,ssss,...), ...
* where ... means 'repeat indefinitely' (don't put it in your script - it is a grammar symbol for use here only)
* where command is @ (means ask), ask, show, showpage -- all of which mean ask the things in the brackets. @, ask, show are all the same and work for any QRL list, showpage assumes the QRL list is a list of question groups.
* where ssss might contain (one or more of):
*.[QuesGroupName] - List all questions with this question group id
*QRL1,QRL2,QRL3,...QRLn - List of Question Resource Locators separated by commas
*.[*] - List the questions in the question group (alphabetically) after the last question. QuestionGroups are sorted alphabetically.
*.[. for num] - List the next 'num' number of questions from the current question (the last question served to the browser - questions are sorted numerically by the order field, then alphabetically by the QID field)
*[!P] - get the string from a property of the person called SID + '_SCRIPT'
*[!] - get the string from a property with the same name as the SID
*[!D] - get the string from the default script for the survey (WARNING: Do not use this in a surveyheader default script field)
=Standard and Typical QScripts=
As the only command currently supported is "@" (ask and show are also accepted, but mean the same thing), the allowed QScript formats are therefore:
*@(.[QuesGroupName]) - List all questions with this question group id
*@(QRL1,QRL2,QRL3,...QRLn) - List of Question Resource Locators separated by commas
*@(.[*]) - List the questions in the question group (alphabetically) after the last question. QuestionGroups are sorted alphabetically.
*@(.[. for num]) - List the next 'num' number of questions from the current question (the last question served to the browser - questions are sorted numerically by the order field, then alphabetically by the QID field)
*@([!P]) - get the string from a property of the person called SID + '_SCRIPT'
*@([!]) - get the string from a property with the same name as the SID. This is commonly used with the rule jumpfunctions PADD/PSET/PCLR in other surveys to define a list of questions to display for a specific user. See: [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
*[!D] - get the string from the default QScript field of the survey (WARNING: Do not use this in a surveyheader default QScript field)
Note that in some circumstances the following would also work:
*@(SID.[*]) - List the questions in the question group after the last question for survey SID
In all cases the script is processed for [!P] and [!D]. [!] is a special form used for annotation surveys in the default script field of the survey header. It allows a property of the user to contain a list of QRL's (usually questions or info blocks) to be displayed (such as in a second survey embedded in a left hand column, representing the list of things selected by rules of the primary survey in the right hand column). See: [[BPC SurveyManager - Creating Surveys - Rules Scripting]]
Most of the BPC SurveyManager clients will automatically insert the default script "@(.[*])" into the default script field of the surveyheader when you create a survey. This is the best general script for a conventional survey, as it essentially tells the survey engine to retrieve the next quesgroup and display all of its questions. This orients the survey to treating quesgroups as pages. Obviously you can override it.
=About Question Resource Locators (QRL)=
A QRL (Question Resource Locator) is of the form:
SID.QID:RID which means SurveyID.QuestionID:RuleID
Where the RuleID MUST be a number.
In QScripts the RuleID portion does not make sense, so it should be omitted, thus:
SID.QID
QRL's are designed so that if a portion is omitted, it is automatically expanded with the current value for the omitted portion. So if the current survey is XYZ001, the QRL ".Q001" would be expanded to "XYZ001.Q001"
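The SID expansion just described can be sketched in a few lines. This is an illustrative assumption about the expansion rule for the SID.QID form only; the real engine also handles the :RID portion and other context defaults.

```python
def expand_qrl(qrl, current_sid):
    """Expand a partial QRL of the form SID.QID with the current survey ID.

    A sketch of the expansion rule described above: if the SID portion before
    the dot is empty, substitute the current survey's SID.
    """
    sid, _, qid = qrl.partition(".")
    return f"{sid or current_sid}.{qid}"


print(expand_qrl(".Q001", "XYZ001"))    # → XYZ001.Q001
print(expand_qrl("ABC9.Q7", "XYZ001"))  # → ABC9.Q7 (already fully qualified)
```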
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
b1d3e5680a5ca83b76934a1b90a3ada85b55365b
BPC SurveyManager - Creating Surveys - Rules Scripting
0
425
620
2019-09-10T15:38:40Z
Bishopj
1
Created page with "=BPC SurveyManager Rules Scripting Language= ==Introduction== BPC SurveyManager builds every page displayed on your browser dynamically. In most cases a basic assumed rule..."
wikitext
text/x-wiki
=BPC SurveyManager Rules Scripting Language=
==Introduction==
BPC SurveyManager builds every page displayed on your browser dynamically. In most cases a basic assumed rule is used each time you post a page of answers, which essentially says "display the set of questions listed for the next page group". So many users will never need to use the rules engine and scripting capabilities, but in reality SurveyManager wants to build each page by selecting the questions to display based on the answers given to the questions so far. To do this it has a small but powerful scripting language for rules.
The Rules Scripting language defines how a rule executes. Each question can have (optionally) an indefinite number of rules, or none at all. When a Survey form is submitted (posted), the question responses for the entire page (form) are first written to the response table. Then the rules engine is run for each question in the current (submitted) form's question list.
This behaviour means that all the responses for every question on the current page are available to every rule of every question on the page when rules are evaluated. So while rules are notionally attached to a question, you could in fact design your rules so that one question on the page handled all rules for every question, and the question with the rule list could be the first question on the page.
Further, rules can access the responses from every question answered to date on the survey, and indeed any response from any question received for any survey in any organisation by the current user to date.
Rules provide a number of facilities, from validation that has not already been performed on the browser, to selecting follow-on questions to ask based on the response to a question (either the current question or any previous question in this or any other survey), to calling plugin libraries, etc. The rules provide the flow of control for a survey.
As the rules engine processes the form's rules, the question list for the next page is progressively populated by the rules. If, after all the rules have been run, there are no questions in the question list, the Question Script for the QScript surveybody tag is used, or, if none, the QScript field value of the surveyheader, or, if none, the default page size is used to determine the next questions in the question list.
It is quite possible to make both recursive and looping survey question lists with the rules engine, so some thought is required when designing the rules. Simple surveys can be built without ever considering the rules engine (or defining any rules).
Note that a question included in the question list is still not guaranteed to be displayed to a given user. The question list is further filtered by the current instance and any filters attached to the question that must also be held by the current user in order to display.
*NOTE FOR BPC SurveyManager V7 library users - You must list only 1 JumpFunction + followon command for each of the true/false parts of a rule.
*NOTE FOR BPC SurveyManager V8 library users - You may list many JumpFunctions + 1 followon command for each of the true/false parts of a rule, separated by commas.
==Defining Question Rules==
Question rules are stored in a dedicated rules table and linked back to the individual questions by OrgID , SurveyID and Question ID. Thus while questions can be shared across multiple surveys, the question rules are specific to the instance of a question in a specific survey.
The BPC Survey Manager Rules Script is similar to prolog or expert systems syntax, where each rule is essentially a boolean test with a part to execute if the boolean test is true and a part to execute if the test is false. The True and False parts can either identify other rules to execute, add questions to the question list, or invoke a plugin library.
Central to the operation of rules is the concept of the QRL (Question Resource Locator). This is analogous to a URL, only for questions. It uniquely identifies a survey and question or group of questions, and a rule or all rules for a question.
'''''The basic form of a rule is:'''''
"RuleQRL = If Boolean_condition then { If_true_actionlist } else { If_false-actionlist }"
'''''A RuleQRL is a QRL (Question Resource Locator) of the form:'''''
SID.QID:RID which means SurveyID.QuestionID:RuleID
Where the RuleID MUST be a number. For a given question, rules are executed in RuleID order (think line number in a program file).
Rules for an SID.QID are executed in numeric order according to the RID.
When entering rules into the BPC SMDeskTop V7, BPC RiskManager Survey Centre or BPC SurveyManager Web Client the 'If' 'then' 'else' '{' '}' symbols are assumed and should not be entered into the fields. In this document we will use the shorthand:
"RuleQRL Boolean_condition { If_true_actionlist } , { If_false-actionlist }"
'''''Example Rules Are:'''''
* Simple always True test:
S0001.QID1:1 True { next } , { end }
Read this as: For Survey S0001, question ID QID1 rule 1, if True do the next rule, else stop processing rules for this question. Since the value of True is always True, this rule will NEVER execute the false part, and will always execute the next rule in the question if one exists.
* Test whether the value provided by the user is greater than 10
S0001.QID2:1 gt(value,10) { .QID4, end }, { .QID3, end }
Read this as: For Survey S0001, question ID QID2 rule 1, if the response value for this question is greater than 10, do the rules in Question ID QID4 of this survey, else do the rules for Question ID QID3. In either case stop processing rules for this question.
* Is the value provided by the user outside a range of numbers
S0001.QID3:1 or(lt([.],0), gt([.], 11)) { @(.QID4), next }, { .QID3.2, end }
Read this as: For Survey S0001, question ID QID3 rule 1, if the response value (note the alternate way of referencing the current question response) for this question is less than 0, or the current response is greater than 11, then ask Question QID4 from the current survey and process the next rule, else do the rules starting from Question ID QID3 rule 2 of this survey, and end rules processing for the current question.
* Check whether the value provided by the user is between two numbers or equal to a number, or whether the value of a different question is less than a number, and select some questions to display
S0001.QID3:2 or(and( gteq([.], 11), lteq(value, 20)), eq([.], 1), lt([.QID6],0)) { @( .QID24, .[PAGE2]), next }, { @(S0002.QID3), next }
Read this as: For Survey S0001, question ID QID3 rule 2, if the response value for this question is both greater than or equal to 11 and less than or equal to 20, OR equal to 1, OR the response to question QID6 of the current survey is less than 0, then ask Question QID24 and all questions in QuestionGroup PAGE2 from the current survey and process the next rule for this question, else ask the question in S0002 Question ID QID3, and process the next rule in the current question.
Note: .QID24 and .[PAGE2] are QRL's. QRL's are the universal addressing mechanism used throughout SurveyManager to locate a survey, question, question list or rule.
See the section on QRL's (below) for a complete breakdown of the format of a QRL.
NOTE ALSO: While gt, lt, gteq, lteq, eq, set all take TWO arguments, 'and' and 'or' can take an indefinite list of arguments.
NOTE ALSO: While the rules engine, and the QList generator can generate a list of questions from mixed surveys, the page builder does not yet handle this, so in this case
the @(S0002.QID3) would actually present QID3 from S0001 to the survey user.
* Override the response value for a question
S0001.QID4:3 set(.QID3, 1) { prev }, { end }
Read this as: For Survey S0001, question QID4 rule 3, set the response value for question QID3 to the value 1 and re-process the previous rule.
* Select a list of questions to ask
S0001.QID4:1 gt(value, 10) {next }, {@(.QID6, .QID8, .QID9), end }
Read this as: Here we ask a simple list of questions if the current response was not greater than 10 (the false part); otherwise we move on to the next rule.
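The shorthand rule form used in the examples above can be split mechanically into its four parts. The following is a hedged sketch, not the engine's parser: it assumes the shorthand layout "RuleQRL condition { true-list }, { false-list }" with no nested braces inside the action lists.

```python
import re

# One capture per part: RuleQRL, boolean condition, true actions, false actions.
# Non-greedy groups assume the action lists themselves contain no braces.
RULE_RE = re.compile(r"^(\S+)\s+(.*?)\s*\{(.*?)\}\s*,\s*\{(.*?)\}\s*$")


def parse_rule(text):
    """Split a shorthand rule into its QRL, condition and two action lists."""
    m = RULE_RE.match(text)
    if not m:
        raise ValueError(f"not a shorthand rule: {text!r}")
    qrl, condition, if_true, if_false = m.groups()
    return {
        "qrl": qrl,
        "condition": condition,
        "true_actions": [a.strip() for a in if_true.split(",")],
        "false_actions": [a.strip() for a in if_false.split(",")],
    }


rule = parse_rule("S0001.QID2:1 gt(value,10) { .QID4, end }, { .QID3, end }")
print(rule["condition"])     # → gt(value,10)
print(rule["true_actions"])  # → ['.QID4', 'end']
```

Splitting action lists on bare commas is itself a simplification: a real list can contain commas inside @(...) argument lists, which this sketch does not handle.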
==About Boolean Conditions==
The Boolean Condition expression is a test of true or false. Boolean tests can be nested with brackets to an indefinite number of levels. If the result is 'true' the true part of the rule is executed (the if_true_actionlist), while if the result is 'false', the false part is executed (the if_false_actionlist).
A boolean expression is a boolean function followed by the argument(s) (if any) that the function uses in its test. So the simple boolean expression Grammar is:
BooleanExpression = BooleanArg | BooleanConstant | UnaryBooleanFunction( BooleanExpression ) | BinaryBooleanFunction( LeftHandBooleanArg, RightHandBooleanArg ) | ListBooleanFunction( BooleanExpressionList )
BooleanExpressionList = BooleanExpression, BooleanExpression, ...
Where = means 'must be one of' and '|' means 'or' and ... means 'repeat as needed'.
Therefore a boolean expression can be a single Boolean argument, single boolean constant with no arguments (True or False) or a Unary function with one argument, or a binary test with two arguments or a boolean list function of other boolean expressions. In all cases, a boolean expression must ultimately evaluate to True or False
We will consider each in turn:
1. BooleanArg
* BooleanArg : value - The value of the current question's response for this user
* BooleanArg : [.] - The value of the current question's response for this user
* BooleanArg : [QRL] - The value of the QRL defined question's response for this user of the form 'SID.QID'
* BooleanArg : pid - The PID (Person ID) of the current user
* BooleanArg : pname - The Name matching the PID (Person ID) of the current user
* BooleanArg : pemail - The email address matching the PID (Person ID) of the current user
* BooleanArg : prole - The role (User/Guest/Admin) matching the PID (Person ID) of the current user in this org
* BooleanArg : pgrole - The role (User/Guest/Admin) matching the PID (Person ID) of the current user in this database
* BooleanArg : oid - The OrgID of the current survey
* BooleanArg : sid - The SID (Survey ID) of the current survey
* BooleanArg : qid - The QID (Question ID) of the current question
* BooleanArg : qgrp - The question group ID of the current question
* BooleanArg : iid - The IID (InstanceID) of the current survey instance
* BooleanArg : "***" - where *** is some quoted string.
* BooleanArg : prp( "propertyname" ) - Returns the Current value of a Temporary, Person, Survey, QuestionGroup or Question property (in that order).
* BooleanArg : xfl( [QRL] ) - Returns the current exception trap value.
* BooleanArg : qry( tablename, field ) - Returns the current value of a field in a dataset (for the current OID, SID, QID, RID, PID).
2. BooleanConstant
* booleanfunction : True - Always true
* booleanfunction : False - Always false
3. UnaryBooleanFunction( BooleanExpression )
* booleanfunction : Not( BooleanExpression ) - True if ( BooleanExpression ) is false.
* booleanfunction : Do( QRL ) - True if the boolean expression in the rule identified in the QRL is true. A QRL in this case is ALWAYS interpreted as a rule reference. If the rule number is omitted, rule 1 is assumed.
4. BinaryBooleanFunction( LeftHandBooleanArg, RightHandBooleanArg )
* booleanfunction : lt(BooleanArg,10) - True if value is less than 10
* booleanfunction : lteq(BooleanArg,10) - True if value is less than or equal to 10
* booleanfunction : gt(BooleanArg,10) - True if value is greater than 10
* booleanfunction : gteq(BooleanArg,10) - True if value is greater than or equal to 10
* booleanfunction : eq(BooleanArg,10) - True if value is equal to 10
* booleanfunction : tstXf(BooleanArg) - True if value is outside of exception trap range. The Exception trap is stored on a question.
* booleanfunction : sx(BooleanArg,"willy") - True if value sounds like "willy"
* booleanfunction : pm(BooleanArg,"I * like ? chocolate.") - True if value sounds like the right hand string (where * matches any word(s) and ? matches any word).
5. ListBooleanFunction( BooleanFunctionList )
* booleanfunction : and( booleanfunction , booleanfunction, ... ,booleanfunction ) - True if all booleanfunction's in the list are true else false
* booleanfunction : or( booleanfunction, booleanfunction, ... ,booleanfunction ) - True if one of the booleanfunction's in the list is true else false
* booleanfunction : sxl(value,"willy","wonka","wilbur" ) - True if value sounds like "willy or wonka or wilbur"
* booleanfunction : pml(value,"I * like ? chocolate.","I * like ? cake.","I * like ? teeth.") - True if value sounds like any of the right hand quoted strings (where * matches any word(s) and ? matches any word).
* booleanfunction : exe( plgname, plgaction, plgargs...) - true if a Call to "plgaction" method in Plugin "plgname" with a comma separated list of plgargs returns true. Plugins are Plugin DLLs following the BPC-SM API standard.
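To make the nesting behaviour of these functions concrete, here is a minimal evaluator for a handful of them. It is a sketch under stated assumptions: it works on already-parsed nested tuples rather than rule text, and the tuple representation is illustrative, not the engine's internal form. The function names mirror the list above.

```python
def evaluate(expr, responses):
    """Evaluate a nested boolean expression against a dict of question responses.

    `expr` is either a boolean constant, or a tuple (op, arg, ...). The special
    op "value" looks up a question's response by a QRL-like key (an assumed
    stand-in for the [QRL] BooleanArg form).
    """
    if expr is True or expr is False:          # BooleanConstant
        return expr
    op, *args = expr
    if op == "value":                          # BooleanArg: a question's response
        return responses[args[0]]
    if op == "not":                            # UnaryBooleanFunction
        return not evaluate(args[0], responses)
    if op == "and":                            # ListBooleanFunctions take any arity
        return all(evaluate(a, responses) for a in args)
    if op == "or":
        return any(evaluate(a, responses) for a in args)
    # BinaryBooleanFunctions: resolve each side, then compare.
    resolve = lambda a: evaluate(a, responses) if isinstance(a, tuple) else a
    left, right = resolve(args[0]), resolve(args[1])
    return {"lt": left < right, "lteq": left <= right,
            "gt": left > right, "gteq": left >= right,
            "eq": left == right}[op]


responses = {".QID3": 15, ".QID6": -2}
# or(and(gteq([.QID3],11), lteq([.QID3],20)), lt([.QID6],0))
expr = ("or",
        ("and", ("gteq", ("value", ".QID3"), 11), ("lteq", ("value", ".QID3"), 20)),
        ("lt", ("value", ".QID6"), 0))
print(evaluate(expr, responses))  # → True
```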
==About Boolean Action Lists==
===The Command Syntax===
There are two possible outcomes of a boolean condition: True or False. Each of these possible outcomes has its own action list. The "true" action list is therefore a list of actions to take if the boolean condition evaluates to true, and the "false" action list is executed if the boolean evaluates to false.
Note that in the boolean conditions QRL references were enclosed in [], to distinguish them from strings. In the action section strings are not generally legal (except in sadm, exe and set) so with the exception of those three functions, QRLs should NOT be encased in []. In fact in the action section the [] has a different meaning. A .[quesgroup] refers to a question group, NOT a question, and .[. for 20] means "the next 20 questions from the last question asked".
The basic form of an action list is:
JumpFunction, JumpFunction, ..., JumpFunctionFollowOn
The term "JumpFunction" is essentially historic, as the first action lists were just lists of questions to ask on the next page of the survey, and therefore implied that the user was "jumping" to a specific list of questions. Over the years the power of the JumpFunction has been expanded beyond merely "jumping", but the term has remained.
===Action List Execution Order And Behaviour===
Action lists are evaluated left to right, and immediately they are encountered by the rules engine. They are essentially a comma separated list of actions.
To understand the behaviour of the rules engine with action lists, it is best to think of the list as a mix of:
* questions to ask the user (these affect the user's display AFTER all rules on this page have been evaluated),
* classic programmer command jumps, which redirect the flow of rules execution immediately they are encountered,
* calls to external plugins, which return to the action in the list immediately after the call has completed.
Consequently, it is possible to construct an action list where some members of the list will never be executed, for example, if the action list contains an action that causes the rules engine to jump to another rule, no actions in the list after that action will be executed.
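That short-circuit behaviour can be sketched as follows. This is a hedged illustration of the execution order described above, not the engine itself: the action representation and names are assumptions, with "ask" accumulating questions for the next page and "jump" ending processing of the list so later actions are never reached.

```python
def run_action_list(actions):
    """Execute an action list left to right; a jump abandons the rest.

    `actions` is a list of (kind, payload) tuples - an illustrative stand-in
    for a parsed action list. Returns the accumulated next-page question list
    and the actions actually executed.
    """
    question_list, executed = [], []
    for kind, payload in actions:
        executed.append((kind, payload))
        if kind == "ask":       # queue questions for the next page
            question_list.extend(payload)
        elif kind == "jump":    # control transfer: stop processing this list now
            return question_list, executed
    return question_list, executed


# Models a list like { @(.QID4), g(.QID9:1), @(.QID5) } - the second ask
# is unreachable because the jump ends the list first.
qlist, done = run_action_list([
    ("ask", [".QID4"]),
    ("jump", ".QID9:1"),
    ("ask", [".QID5"]),
])
print(qlist)  # → ['.QID4']
```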
===JumpFunctions===
A JumpFunction is one of:
''Select A Rule To Execute''
* jumpfunction : next - do next rule or ques
* jumpfunction : prev - do prev rule or ques
* jumpfunction : end - stop processing rules for this question
* jumpfunction : ninl - do "next if no list". This is a rarely used option, primarily used internally. It issues a "next rule line" command after the current jump function only if the current jumpfunction is the last in a comma separated list of jumpfunctions. It is safe to ignore this as you will probably never need it. It is most useful where you have written a plugin library and wish to return jumpfunction instructions from the library. In that case you can safely return the NINL command in the jumpnode with an "ask list" to display, and trigger a "next" jump when you otherwise don't know if the next would be issued. The next will only occur if the calling jumpnode is also the last (right-most) in a list of jumpnodes.
* jumpfunction : QRL - jump to question rule indicated by the QRL in this survey
* jumpfunction : g(QRL) - Go To the rule identified in the QRL (Same as the simple QRL option above)
''Decide which questions to ask''
Multiple versions of the 'ask' command are allowed, to support various rules definition interfaces. This is for syntactic convenience rather than semantic effect. Some users find the word 'ask' or 'askpage' easier than '@'. Programmers tend to prefer the shorter form '@'.
* jumpfunction : @( QRL, QRL,.. ) - ask questions identified by the QRL list (containing 1 or more QRL's) on the next survey page
* jumpfunction : ASK( QRL, QRL,.. ) - (Same as @) ask questions identified by the QRL list (containing 1 or more QRL's) on the next survey page
* jumpfunction : ASKPAGE( QRL, QRL,.. ) - (Similar to @) ask questions identified by the page/QGroup list (containing 1 or more QGroup names) on the next survey page. The purpose of this command is to allow scripts that hide the QRL details. Eg: ASKPAGE( Page01, Page02) as opposed to @(.[Page01],.[Page02]) - which does the same thing but looks more complex.
* jumpfunction : SHOWPAGE( QRL, QRL,.. ) - (Same as ASKPAGE) ask questions identified by the page/QGroup list (containing 1 or more QRL's) on the next survey page
''Do something with responses received''
* jumpfunction : SADM(a,b,c,d,e) - perform admin ops - see below for an explanation of the arguments.
* jumpfunction : EXE( plgname, plgaction, plgargs...) - Execute the "plgaction" method in Plugin "plgname" with a comma separated list of plgargs. Plugins are Plugin DLLs following the BPC-SM API standard.
* jumpfunction : SET( QID, somevalue ) - Assign some value to the SID.QID response for this person. Note - it is possible to assign values to another survey.
* jumpfunction : CLR( QRL, QRL,.. ) - Clears the response(s) for the current SID.QID response for this person. Note - it is possible to clear responses for another survey. Clearing is effected by deleting the response entirely. The QRL is a full QRL, which means you can use all the range commands available in a QRL (see the 'ask' discussion) - including clearing an entire survey response set, or a QGroup/Page at a time. It is locked to the current OID, IID, PID and REALMID. If all you want to do is clear a value while keeping a response record, use SET instead.
* jumpfunction : CLRPAGE( QGroup, QGroup,.. ) - Clears the response(s) for the current SID.[QGroup] responses for this person. Note - it is possible to clear responses for another survey. Clearing is effected by deleting the responses entirely. The QGroup is a simplified QGroup/Page name - Not a QRL - (see the 'askpage' discussion) - and clears an entire QGroup/Page response set at a time. It is locked to the current OID, IID, PID and REALMID. If all you want to do is clear a value while keeping a response record, use SET instead.
* jumpfunction : PCLR( property_name ) - clear the current value of the named property of the current user. Property names are any string surrounded by " ".
* jumpfunction : PADD( property_id, somevalue ) - append a value to the end of the current value of a property of the current user. If the property already has value(s), a ',' is added followed by the new value, making a comma separated list.
* jumpfunction : PSET( property_id, somevalue ) - set the property of the current user to a value replacing the previous value.
''Deprecated - do not use''
* jumpfunction : E( QRL, QRL,.. ) - same as '@' + GLV property; ask questions identified by the QRL list (containing 1 or more QRL's) on the next survey page, displaying the previous response. Use the GLV property instead. (DEPRECATED - Do Not Use)
===Complex Boolean Action List Commands Explained===
====SADM====
<pre>
The SAdm command looks like: SADM( sid, cbxlist, action, instance) where
sid is a QRL as above (but refers to the quest response) and
cbxlist is a QRL with the cbxuser list in its sub parts and
action is either publish | distribute | lock | unlock
instance is either all | current | an instanceid
</pre>
All values in the SADM command may be retrieved from responses, or properties, etc, as well as hard-coded.
====Set====
The Set command parameters look like:
"Set(First_Arg, Second_Arg)"
<pre>
The First_Arg is a QRL address (Constrained to the current PID)
The Second_Arg is one of:
Second_Arg is a bracketed expression of the form opval,responsestr,ques optype. Eg. "( 2, 'Fred', selectop)"
Second_Arg is a bracketed expression of the form opval,responsestr. Eg. "( 2, 'Fred')" - use existing or ques optype
Second_Arg is an unbracketed expression of the form value. Eg. "value" meaning the current question response's value set
Second_Arg is an unbracketed expression of the form sss.qqq. Eg. "SURV1.QID002" meaning that question's response value set
</pre>
====PADD, PSET====
The Property Set commands have similar parameters to the Set command and look like:
"PSet(First_Arg, Second_Arg)"
<pre>
The First_Arg is interpreted as a property_id (property name) if it is surrounded by " "; otherwise it is a QRL address (constrained to the current PID) whose response string value contains the property name to use.
The Second_Arg is one of:
Second token is a quoted expression of the form "fred" or "sss.qqq" Eg. "SURV1.QID002" meaning that question -
not its response value.
Second token is an unquoted string with the word "value". Eg. "value" meaning current response value (responsestr)
Second token is an unbracketed expression of the form sss.qqq. Eg. "SURV1.QID002" meaning that question's response value (responsestr)
</pre>
There is a subtle but important use of the PSET/PADD commands. Apart from the obvious use of setting property values from survey questions, the commands can be used to populate a property value with a list of question QRLs. This is the main use of the PADD command which forms a comma separated list of values. When this list contains QRLs it becomes a QRLList. The QRL List can be read by the annotation survey layout to provide the list of questions to display for a given user.
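The append-with-comma behaviour of PADD described above can be sketched in a few lines (PSET, by contrast, simply replaces the value). The plain dict here is an assumed stand-in for the user's property store, not SurveyManager's storage.

```python
def padd(props, name, value):
    """Append `value` to property `name`, building a comma separated list.

    Mirrors the PADD behaviour described above: if the property already has a
    value, a ',' is added before the new value; otherwise the value is set.
    """
    current = props.get(name, "")
    props[name] = f"{current},{value}" if current else str(value)


# Building a QRL list property that another (annotation) survey can read.
props = {}
padd(props, "MYSURVEY_SCRIPT", ".QID2")
padd(props, "MYSURVEY_SCRIPT", ".QID7")
print(props["MYSURVEY_SCRIPT"])  # → .QID2,.QID7
```

The property name "MYSURVEY_SCRIPT" is hypothetical, chosen to echo the SID + '_SCRIPT' convention mentioned in the QScript documentation.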
Apart from the obvious use of displaying a list of questions to be answered for a survey page, the technique can be used to display a list of opinions, instructions or advices that change depending on a user's responses. Remember that while we call everything the BPC SurveyManager handles "a survey" and its content "questions", a survey can in fact simply be a list of statements, like a database of instructions or information extracts (ie. not questions at all). The PADD property command can therefore be used to decide which pieces of such information to display, by building a property list that is read by another survey.
Since a survey layout can be easily designed to display multiple surveys on a page at once, you can have a dynamically changing commentary running in, say, a left hand panel, while a conventional survey runs in a right hand panel.
===JumpFunction Examples===
Some Examples include:
* jumpfunction : next //do next rule or ques
* jumpfunction : prev //do prev rule or ques
* jumpfunction : end //stop processing rules for this question
* jumpfunction : .QID1 //jump to question qid1 rule 1 in this survey
* jumpfunction : .QID2:1 //jump to question qid2 rule 1 in this survey
* jumpfunction : S001.QID1 //jump to question qid1 rule 1 in survey s0001
* jumpfunction : S001.QID1:2 // jump to question qid1 rule 2 in survey s0001
* jumpfunction : @(S002.QID5) // ask ques of SurveyId
* jumpfunction : @(S002.[.QID5 to .QID14]) // ask ques of SurveyId from qid5 to qid14
* jumpfunction : @(S002.[QS1]) // ask questions in QuesGroupId QS1 of SurveyId S002
* jumpfunction : @(.QID5) // ask ques qid5 in this survey
* jumpfunction : @(.QID5, .QID6, S002.QID7), end // ask questions in list and finish processing rules
* jumpfunction : @([QS1]) // ask questions in QuesGroupId in this survey
* jumpfunction : @(.) // ask this question (again)
* jumpfunction : g(S002) // evaluate the first rule in SurveyId S002
* jumpfunction : g(S001.QID1:2) // jump to question qid1 rule 2 in survey s001
* jumpfunction : SADM(a,b,c,d,e) // perform admin ops
Where:
g(...) = a goto style jump - continue with the target's QID list (the current eosjump is left unchanged).
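The targets above follow a consistent SurveyID.QuestionID:Rule pattern, where a leading "." means "this survey" and an omitted rule number means rule 1. As a rough illustration only - the parser below is a guess at the grammar for the example, not the engine's code - a jump target could be decomposed like this:

```python
def parse_target(target, current_survey="S001"):
    """Split a jump target like 'S002.QID7:2' into (survey, question, rule)."""
    survey, _, rest = target.partition(".")
    survey = survey or current_survey      # '.QID5' means "this survey"
    qid, _, rule = rest.partition(":")
    return survey, qid, int(rule) if rule else 1  # rule defaults to 1

print(parse_target(".QID1"))        # ('S001', 'QID1', 1)
print(parse_target("S002.QID7:2"))  # ('S002', 'QID7', 2)
```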
===About The JumpFunctionFollowOn===
The last item in the Boolean Action List is the JumpFunctionFollowOn command. Its purpose is to tell the rules engine which rule to execute next (after it has done whatever it was told to do in the action list). There are three follow-on commands.
A JumpFunctionFollowOn is one of:
* jumpfunction : next //do next rule or ques
* jumpfunction : prev //do prev rule or ques
* jumpfunction : end //stop processing rules for this question
If no follow-on is specified, next is assumed.
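The three follow-on commands amount to a simple iteration control. The Python sketch below is purely illustrative - the real rules engine's control flow is not published here - but it shows how next/prev/end might steer rule processing for a question:

```python
def run_rules(rules):
    """rules is a list of (name, action); each action returns a
    follow-on of 'next', 'prev' or 'end' (None means 'next')."""
    i, fired = 0, []
    while 0 <= i < len(rules):
        name, action = rules[i]
        fired.append(name)
        follow_on = action() or "next"   # no follow-on specified -> next
        if follow_on == "end":
            break                        # stop processing rules
        i += 1 if follow_on == "next" else -1
    return fired

rules = [("r1", lambda: "next"),
         ("r2", lambda: "end"),          # r3 is never reached
         ("r3", lambda: "next")]
print(run_rules(rules))  # ['r1', 'r2']
```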
==Access To Properties==
Rules property retrieval can access most properties in the database, and property setting commands can access those specific to the active user, but some properties are specifically restricted regardless. These Page Properties cannot be accessed from rules:
hidden - lastques, nextques, eopjump, eosjump, etc.
==Properties that Affect Rule Execution==
By default, when a page/form is received from a user, a rule is only executed for a question on the page that actually has a response (either just posted, or previously posted). This means that an "infoop" question (a text display with no response capability) with a rule attached will never execute - because the survey engine will never get a response from an infoop! Yet, SurveyManager allows you to attach a rule to these questions. To find out why, read on...
Sometimes you want the rules attached to a question to ALWAYS execute, regardless of whether a response was received for that question. One such case might be where you attach rules to infoop questions. You can force this by setting the "AlwaysDo" property of the question concerned to "True". When such a question is displayed on a user's browser and the page is posted back to the survey engine, the rules of any question with the "AlwaysDo" property set to True - including infoops - will always be executed.
This can be a very useful facility, particularly where you set the property on a section heading, pagebreak or other static text area, and design your pages so that all questions in a question group (think "page" for simplicity - but note that many question groups can in fact be displayed on a single page at once) are displayed as a group (rather than cherry-picked to display just some questions from a group). In this case, rather than attaching rules to each question, you might attach all the rules for a group to the section heading "question" instead, as these rules can range over all the data received for the group regardless.
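The selection behaviour described above - run a question's rules only if it has a response, unless AlwaysDo forces it - can be sketched as follows. The field names are assumptions made for the example, not the engine's actual schema:

```python
def questions_to_process(page_questions, responses):
    """Select the questions whose rules should fire after a page post."""
    return [q for q in page_questions
            if q["id"] in responses            # has a (new or old) response
            or q.get("AlwaysDo") == "True"]    # forced, eg. on an infoop

page = [{"id": "Q1"},
        {"id": "HDR", "Input": "infoop", "AlwaysDo": "True"},
        {"id": "Q2", "Input": "infoop"}]       # plain infoop: never fires
responses = {"Q1": "Yes"}
print([q["id"] for q in questions_to_process(page, responses)])  # ['Q1', 'HDR']
```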
==Notes on designing question rules==
You need to think carefully about the way you want your survey to work when designing the rules AND when deciding to which question you will attach the rules on any given page/question group.
===1. Where should I put my rules?===
The simplest scenario is that a rule deals with the response to the question to which it is attached, and there are a number of short forms in QRLs that are designed to make that especially favourable and simple to do - eg "[.]" or "value" refers to the response to the question to which the rule has been attached.
However, in many situations the ordering and selection of questions to display next may depend on the answers to several questions on the current page (or even previous pages or other surveys). In this case it might be easier to attach all the rules for a page or question group to one question that will always be displayed for that page or question group - such as a section heading with the AlwaysDo property set to True.
The downside of this is that in order to look at a response value you will have to refer to the questions by their ID in a QRL, eg "[.Ques002]", rather than using "[.]", the anonymous short form. So if you change the question ID later, the rules for the group will be silently invalidated. But it does simplify rule authorship a bit.
The other issue related to this is that when the QSort order is set to no sort (the default and recommended scenario), questions are displayed in the order the rules add them to the question list for the next page. This means that the order in which rules execute might be important to you. Notionally, rules are executed in the order in which their questions were displayed on the submitted page. This is usually exactly what you want, BUT sometimes it results in an undesirable sort order for questions on the next page. Attaching all the rules to one question in a group is the easiest way to control with certainty the order in which questions are added to the question list. There are other options, but they are more complex to handle.
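The ordering point can be illustrated with a small sketch (illustrative only - the names are invented): the next page's question list is simply the concatenation of whatever each rule adds, in rule-execution order, so attaching all rules to one AlwaysDo heading fixes the order outright.

```python
def build_next_page(rules_in_execution_order):
    """Questions appear on the next page in the order rules add them."""
    next_page = []
    for _rule_name, questions_added in rules_in_execution_order:
        next_page.extend(questions_added)
    return next_page

# Rules scattered across questions run in the order those questions were
# displayed, which may not give the page order you want next:
scattered = [("Q3_rule", ["Q20"]), ("Q1_rule", ["Q10"])]
print(build_next_page(scattered))  # ['Q20', 'Q10']

# All rules attached to one AlwaysDo heading control the order directly:
central = [("heading_rule", ["Q10", "Q20"])]
print(build_next_page(central))  # ['Q10', 'Q20']
```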
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
e966707aef1857e62cb514e6d3a12c597640b8bb
BPC SurveyManager - Questions and Input Controls
0
426
621
2019-09-10T15:40:34Z
Bishopj
1
Created page with "<table Width="100%" BgColor="White"><tr><TH BgColor="darkred" ><font color="white">Input Control</font></TH><TH BgColor="darkred" width="30%" ><font color="white">Desciption</..."
wikitext
text/x-wiki
<table Width="100%" BgColor="White"><tr><TH BgColor="darkred" ><font color="white">Input Control</font></TH><TH BgColor="darkred" width="30%" ><font color="white">Description</font></TH><TH BgColor="darkred" ><font color="white">Example</font></TH></tr>
<tr name="Example0001Q001a_Row" id="Example0001Q001a_Row" >
<td BgColor="white" ></td>
<td BgColor="white" align=center colspan=2 ><br >
<p align=center ><H2 ><b> This page provides examples of various types of input controls.</b ></H2 ><br >
The best way to view these input devices is using BPC SurveyManager itself.
Click on this link to launch the BPC SurveyManager help:[http://cool.bishopphillips.com/sm/OTXSurveyManager1.dll/DoSurvey?SID=Example0001&SIDO=S001 View Using BPC SurveyManager]
<H3 >This section provides examples of INFOOP controls</H3 >
<i >Infoop controls are controls that do not receive an input.</i ><br >
<tr name="Example0001Q001b_Row" id="Example0001Q001b_Row" ><td BgColor="silver" >dlabel
<td BgColor="silver" align=center
colspan=2
>This control is an example of a double column infoop (one that displays a text block over two columns, but does not require an input). These are good for headings and sub headings or embedding explanatory blocks in the text. The control also demonstrates how to get multiple lines to appear in the question column.<br >The BPCSurveyManager Engine will automatically interpret a line break in a question field as a line break in a browser.<br ><br >An infoop is literally an "information operation". To use this set:<br >Input=infoop<br >DisplayType=dlabel<br ><br >
<tr name="Example0001Q002_Row" id="Example0001Q002_Row" ><td BgColor="white" >dlabel
<td BgColor="white" align=center
colspan=2
><div width=100% align=left ><i ><b >You can override the default 'centring' behaviour in multiple ways. One is just to do it at the individual question level. <br >You can also override the font style (in this case to get bold (b) and italics (i) ) by inserting:<br >"<div align=left ><i ><b ><#question > </b ></i ></div >"<br >in the LayoutHTML field.<br ><br ></b ></i ></div >
<tr name="Example0001Q003_Row" id="Example0001Q003_Row" >
<td BgColor="silver" >label
<td BgColor="silver" >This is an example of a single column infoop. (A text/HTML block that runs for a single column)<br ><br >To use this set:<br >Input=infoop<br >DisplayType=label<br ><br >You can also embed ordinary HTML in a question field. For example the following will insert a dot point list:<br ><ul ><br ><li > This is the first dot point.<br ><li > This is the second dot point<br ><li > This is the third dot point.<br ></ul > <br >Of course this means that you can not just put a < or a > in the text, because HTML uses these characters to indicate an HTML command. You can, however, insert a < or a > by entering:<br >"& l t ;" for < and a "& g t ;" for > <br ><br >
<td BgColor="silver" >
<tr name="Example0001Q004_Row" id="Example0001Q004_Row" >
<td BgColor="white" >
<td BgColor="white" align=center colspan=2 >
<H3 >This Section provides examples of TEXTOP controls.</H3 >
<i >Textops are controls returning text responses.</i ><br >
<tr name="Example0001Q005_Row" id="Example0001Q005_Row" >
<td BgColor="silver" >text
<td BgColor="silver" >This is an example of a single line textop. The size of the edit box in characters can be set to whatever size you like. In this case it has been set to 40.<br ><br >The size controls only the number of characters displayed in the box. The field will actually accept up to 4000 characters (truncating after that limit is reached).<br ><br >To use this set:<br >Input=textop<br >DisplayType=text<br >EditSize=A number representing the number of characters to display.
<td BgColor="silver" >
[[IMAGE:TEXOP_text_Exampl1.gif]]
<tr name="Example0001Q006_Row" id="Example0001Q006_Row" >
<td BgColor="white" >textarea
<td BgColor="white" >
This is an example of a single column multi-line textop. The size of the edit box in characters can be set to whatever size you like. In this case it has been set to 40 columns and 20 rows.<br ><br >The size controls only the number of characters displayed in the box. The field will actually accept up to 4000 characters (truncating after that limit is reached), automatically adding a scroll bar as required.<br >To set the dimensions of the box, enter a string of two numbers separated by a "." (a string of the form "rows"."columns")<br ><br >To use this set:<br >Input=textop<br >DisplayType=textarea<br >EditSize=rows.columns
<td BgColor="white" >
[[IMAGE:TEXOP_textarea_Exampl1.gif]]
<tr name="Example0001Q007_Row" id="Example0001Q007A_Row" >
<td BgColor="white" >textarea / dtextarea - WYSIWYG
<td BgColor="white" colspan=2 >
<table width="100%" ><tr >
<td BgColor="white" >This is an example of a single column multi-line textop with the WYSIWYG editor enabled. The editor supports simple formatting such as bold, italics, dot points, links and justification, as well as copying and pasting. Picture insertion is also supported but you should use HTTP urls or the picture references will be local file references only - images are not uploaded automatically. The size of the edit box in characters can be set to whatever size you like.<br ><br >The size controls only the number of characters displayed in the box without scrolling. The field will actually accept up to 4000 characters (truncating after that limit is reached), automatically adding a scroll bar as required.<br >To set the dimensions of the box enter a string of two numbers separated by a "." (a string of the form "rows"."columns"). Both the dtextarea and textarea input controls support the WYSIWYG editor.<br ><br >To use this set:<br >Input=textop<br >DisplayType=textarea<br >EditSize=rows.columns<br >
property "WYSIWYG"="True"
</td>
<td BgColor="white" >
[[IMAGE:TEXOP_dtextareaWYSIWYG_Exampl1.gif]]
</td></tr></table>
<tr name="Example0001Q007_Row" id="Example0001Q007_Row" >
<td BgColor="silver" >dtextarea
<td colspan=2 ><table width="100%" ><tr ><td BgColor="silver" >This is an example of a multi column, multi-line textop. The size of the edit box in characters can be set to whatever size you like. In this case it has been set to 60 columns and 20 rows.<br ><br >The size controls only the number of characters displayed in the box. The field will actually accept up to 4000 characters (truncating after that limit is reached), automatically adding a scroll bar as required.<br >To set the dimensions of the box, enter a string of two numbers separated by a "." (a string of the form "rows"."columns")<br ><br >To use this set:<br >Input=textop<br >DisplayType=dtextarea<br >EditSize=rows.columns<br ><br >
</td>
<td BgColor="silver" >
[[IMAGE:TEXOP_dtextarea_Exampl1.gif]]
</td></tr></table>
<tr name="Example0001Q008_Row" id="Example0001Q008_Row" ><td BgColor="white" >numeric
<td BgColor="white" >This is an example of a numeric 'only' text field. Only numbers are accepted (if JavaScript is enabled on the browser). Otherwise it is the same as the single line text input control.<br ><br >To use this set:<br >Input=textop<br >DisplayType=numeric<br >EditSize=A number representing the number of characters to display.<br ><br >
<td BgColor="white" >
[[IMAGE:TEXOP_numeric_Exampl1.gif]]
<tr name="Example0001Q009_Row" id="Example0001Q009_Row" >
<td BgColor="silver" >password
<td BgColor="silver" >This is an example of a password text field. Characters entered are not echoed. Otherwise it is the same as the single line text input control.<br ><br >To use this set:<br >Input=textop<br >DisplayType=password<br >EditSize=A number representing the number of characters to display.<br ><br >
<td BgColor="silver" >
[[IMAGE:TEXOP_password_Exampl1.gif]]
<tr name="Example0001Q010_Row" id="Example0001Q010_Row" >
<td BgColor="white" >property
<td BgColor="white" >This is an example of a property text field. Characters entered are applied to a property of the user. Typically this might be a "title" or "age" value, or any other value that is a property of the user entering the data. <br >Otherwise it is the same as the single line text input control.<br ><br >The current value for the TITLE property entered here is: <br >To insert a property value into a question text simply enter <property-name > in the body of the question.<br ><br >To use this set:<br >Input=textop<br >DisplayType=property<br >EditSize=A number representing the number of characters to display.<br >OpGroupID=The property to edit.<br ><br >
<td BgColor="white" >
[[IMAGE:TEXOP_property_Exampl1.gif]]
<tr name="Example0001Q011_Row" id="Example0001Q011_Row" >
<td BgColor="silver" >file
<td BgColor="silver" >This is an example of a file upload text field. The control allows for the browsing of a file on the user's computer and the uploading of the contents to the folders table of the database. The path value is the directory listing by which the file will be stored in the folders table.<br >Otherwise it is the same as the single line text input control.<br ><br >The size field in this instance is used to indicate the width of the path text box.<br ><br >To use this set:<br >Input=textop<br >DisplayType=file<br >EditSize=A number representing the number of characters to display.<br ><br >
<td BgColor="silver" >
[[IMAGE:TEXOP_file_Exampl1.gif]]
<tr name="Example0001Q012_Row" id="Example0001Q012_Row" ><td BgColor="white" >admlist - user options
<td BgColor="white" >This is an example of an admlist text field. The control allows for a 'temporary' selection list. There are multiple ways for selection list style input controls to be built. This is the simplest.<br ><br >When using this control to get a simple drop down list of selectable options, set the question property ("admlist" ) to a list of the options you want to appear in the selector box. The return value will be the string selected OR "Nil", which will be interpreted by the SM engine as a "no response".<br ><br >The list members should be separated by spaces or commas. If you want to include spaces in a list option, surround it with "".<br ><br >Otherwise it is the same as the single line text input control.<br ><br >The size field in this instance is used to indicate the number of options to display in the list box. A value of 1 creates a drop-down list, while any value greater than one creates a scrolling list.<br ><br >To use this set:<br >Input=textop<br >DisplayType=admlist<br >EditSize=A number representing the number of options to display.<br >Question property "admlist"=the list of options to display. eg:"My First Option","My Second Option","My Third Option"<br ><br >
<td BgColor="white" >
[[IMAGE:TEXOP_admlistDROPLIST_Exampl1.gif]]
<tr name="Example0001Q013_Row" id="Example0001Q013_Row" ><td BgColor="silver" >admlist - user options
<td BgColor="silver" >This is an example of an admlist text field. The control allows for a 'temporary' selection list. There are multiple ways for selection list style input controls to be built. This is the simplest.<br ><br >When using this control to get a simple drop down list of selectable options, set the question property ("admlist" ) to a list of the options you want to appear in the selector box. The return value will be the string selected OR "Nil", which will be interpreted by the SM engine as a "no response".<br ><br >The list members should be separated by spaces or commas. If you want to include spaces in a list option, surround it with "".<br ><br >Here we have set the size field to a number greater than 1, which creates a list rather than a drop list.<br ><br >Otherwise it is the same as the single line text input control.<br ><br >The size field in this instance is used to indicate the number of options to display in the list box. A value of 1 creates a drop-down list, while any value greater than one creates a scrolling list.<br ><br >To use this set:<br >Input=textop<br >DisplayType=admlist<br >EditSize=A number representing the number of rows to display.<br >Question property "Admlist"=the list of options to display. Eg: "My First Option","My Second Option","My Third Option"
<br ><br >
<td BgColor="silver" >
[[IMAGE:TEXOP_admlist_Exampl1.gif]]
<tr name="Example0001Q014_Row" id="Example0001Q014_Row" ><td BgColor="white" >admlist
<td BgColor="white" >This is an example of an admlist text field populated with the reserved administration action list commands. This is a special kind of list used to manage surveys on line. In combination with appropriate "rules", the survey listing control and the user checklist control, it allows commands to be issued to publish a survey to a group of users, to lock or unlock access to the survey, and to distribute the survey to those users.<br ><br >When using this control to get a simple drop down list of selectable options, DO NOT set the question property ("admlist" ). The default action is to insert this list of options and "Nil", which will be interpreted by the SM engine as a "no response".<br ><br >Here we have set the size field to a number greater than 1, which creates a scrolling list rather than a drop-down list.<br ><br >Otherwise it is the same as the single line text input control.<br ><br >The size field in this instance is used to indicate the number of options to display in the list box. A value of 1 creates a drop-down list, while any value greater than one creates a scrolling list.<br ><br >To use this set:<br >Input=textop<br >DisplayType=admlist<br >EditSize=A number representing the number of options to display.<br >Question property "admlist"=not set<br ><br >
<td BgColor="white" >
[[IMAGE:TEXOP_admlistDEF_Exampl1.gif]]
<tr name="Example0001Q015_Row" id="Example0001Q015_Row" ><td BgColor="silver" >admlist - user options, multi
<td BgColor="silver" >This is an example of an admlist text field with the question's Multiple property set to "True". The standard control allows only a single response, while with the multiple property set to true, multiple responses can be stored. When this is used, each response will occupy a separate record in the response table. It is necessary to keep the combined questionID+response option string to less than 50 characters (or at least to keep the first 50 unique in each option).<br ><br >The control allows for a 'temporary' selection list. There are multiple ways for selection list style input controls to be built. This is the simplest.<br ><br >When using this control to get a simple drop down list of selectable options, set the question property ("admlist" ) to a list of the options you want to appear in the selector box. The return value will be the string selected OR "Nil", which will be interpreted by the SM engine as a "no response".<br ><br >The list members should be separated by spaces or commas. If you want to include spaces in a list option, surround it with "".<br ><br >Here we have set the size field to a number greater than 1, which creates a list rather than a drop list.<br ><br >Otherwise it is the same as the single line text input control.<br ><br >The size field in this instance is used to indicate the number of options to display in the list box. A value of 1 creates a drop-down list, while any value greater than one creates a scrolling list.<br ><br >To use this set:<br >Input=textop<br >DisplayType=admlist<br >EditSize=A number representing the number of options to display.<br >Question property "Admlist"=the list of options to display. Eg: "My First Option","My Second Option","My Third Option"<br >Question property "Multiple"=True<br ><br >
<td BgColor="silver" >
[[IMAGE:TEXOP_admlistMULTI_Exampl1.gif]]
<tr name="Example0001Q017_Row" id="Example0001Q017_Row" >
<td BgColor="white" >
<td BgColor="white" align=center colspan=2 >
<H3 >This Section provides examples of DATEOP controls.</H3 >
<br >
<i >Dateops are controls returning text date responses.</i ><br >
<tr name="Example0001Q018_Row" id="Example0001Q018_Row" >
<td BgColor="silver" >date
<td BgColor="silver" >This is an example of a date text field. The control allows for the insertion of a date in the form "day / month / year". Invalid dates are rejected.
<br >
<br >Otherwise it is the same as the single line text input control.
<br >
<br >Combined with the dateop section of the response options table it is possible to create a named date range to define the valid range of dates.
<br >
<br >The size field is ignored.
<br >
<br >To use this set:
<br >Input=dateop
<br >DisplayType=date
<br >Response Options, dateop=range name, from date, to date
<br >
<br >
<br >
<td BgColor="silver" >
[[IMAGE:DATEOP_date_Exampl1.gif]]
<tr name="Example0001Q019_Row" id="Example0001Q019_Row" ><td BgColor="white" >datepick
<td BgColor="white" >This is an example of a date pick field. The control allows for the insertion of a date from a pop-up date pick box. Javascript is required on the browser, but no other applets or activex objects. <br ><br >Invalid dates are rejected. Combined with the dateop section of the response options table it is possible to create a named date range to define the valid range of dates.<br ><br >Otherwise it is the same as the single line text input control.<br ><br >The size field sets the length of the text field.<br ><br >To use this set:<br >Input=dateop<br >DisplayType=datepick<br >Response Options, dateop=range name, from date, to date<br ><br >
<td BgColor="white" >
[[IMAGE:DATEOP_datepick_Exampl1.gif]]
[[IMAGE:DATEOP_datepick_Exampl2.gif]]
<tr name="Example0001Q020_Row" id="Example0001Q020_Row" ><td BgColor="silver" >text
<td BgColor="silver" >This is an example of simple date text field. The control allows for the insertion of a date in the form "day / month / year" or "1 January, 1998". Invalid dates are rejected. <br ><br >Otherwise it is the same as the single line text input control.<br ><br >Combined with the dateop section of the response options table it is possible to create a named date range to define the valid range of dates.<br ><br >The size field sets the length of the text box in characters.<br ><br >To use this set:<br >Input=dateop<br >DisplayType=text<br >Response Options, dateop=range name, from date, to date<br ><br >
<td BgColor="silver" >
[[IMAGE:DATEOP_text_Exampl1.gif]]
<tr name="Example0001Q021_Row" id="Example0001Q021_Row" ><td BgColor="white" >
<td BgColor="white" align=center
colspan=2
>
<H3 >This Section provides examples of RATINGOP controls.</H3 >
<i >Ratingops are controls returning text numeric responses.</i ><br >
<tr name="Example0001Q022_Row" id="Example0001Q022_Row" ><td BgColor="silver" >numeric
<td BgColor="silver" >This is an example of a numeric text field. The control allows for the insertion of numbers with range checking. Invalid values are rejected. <br ><br >Otherwise it is the same as the single line text input control.<br ><br >Combined with the ratingop section of the response options table it is possible to create a named number range to define the valid range of numbers, the maximum number of characters in a number and whether the number is integer only, or allows a floating point (decimal).<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=ratingop<br >DisplayType=numeric<br >Response Options, ratingop=range name, from number, to number, discrete (True/False), Max Size<br ><br ><br >
<td BgColor="silver" >
[[IMAGE:RATOP_numeric_Exampl1.gif]]
<tr name="Example0001Q023_Row" id="Example0001Q023_Row" ><td BgColor="white" >text
<td BgColor="white" >This is an example of a numeric text field. The control allows for the insertion of numbers with range checking. Invalid values are rejected. <br ><br >Otherwise it is the same as the single line text input control.<br ><br >Combined with the ratingop section of the response options table it is possible to create a named number range to define the valid range of numbers, the maximum number of characters in a number and whether the number is integer only, or allows a floating point (decimal).<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=ratingop<br >DisplayType=text<br >Response Options, ratingop=range name, from number, to number, discrete (True/False), Max Size<br ><br ><br >
<td BgColor="white" >
[[IMAGE:RATOP_text_Exampl1.gif]]
<tr name="Example0001Q024_Row" id="Example0001Q024_Row" ><td BgColor="white" >
<td BgColor="white" align=center colspan=2 >
<H3 >This Section provides examples of CHECKOP controls.</H3 >
<i >Checkops are controls returning text checkbox(True/False) responses.</i ><br >
<tr name="Example0001Q025_Row" id="Example0001Q025_Row" ><td BgColor="silver" >checkbox
<td BgColor="silver" align=right
>This is an example of a simple checkbox field. The control allows for the insertion of a simple check box question. More complex forms of checkboxes and other selectors are available in the selectops section.<br ><br >Otherwise it is the same as the single line text input control, except that, by default, the question text is right justified so that the input checkbox sits beside the question text.<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=checkop<br >DisplayType=checkbox<br ><br ><br >
<td BgColor="silver" >
[[IMAGE:Checkbox_Exampl1.gif]]
<tr name="Example0001Q026_Row" id="Example0001Q026_Row" ><td BgColor="white" >
<td BgColor="white" align=center
colspan=2 >
<H3 >This Section provides examples of SELECTOP controls.</H3 >
<i >Selectops are controls returning selected response or performing various types of actions from a shared list of options.</i ><br >
The selectops group is probably the most heavily used group of input controls. The common element to these controls is that they all use a subset of the response options tables called the selectops to find a list. Selectop lists are SHARED by all surveys in an organisation, which means that you need to be careful about the other surveys using the selectop set. This is done so that responses between questions and surveys can be compared.
<br ><br >
Contrast this to the admlist, which allowed for a locally (question specific) selection list. The admlist solution does not guarantee that succeeding questions will have the same spelling of the option returned. In a selectop, this is guaranteed, not just between questions in a survey, but between surveys themselves.
<br >
<br >
<tr name="Example0001Q027_Row" id="Example0001Q027_Row" >
<td BgColor="silver" >radio
<td BgColor="silver" >This is an example of a simple radio button field. The control allows a list of options found in the selectops response options table to be displayed as a radio button list.<br ><br >A variety of properties are available for use with radio buttons. Here we have used the "RadioFrmt" property with "width=100%" to spread the buttons horizontally across the full width of the input column. The default action is to left justify the buttons list, but this can result in some confusion if the buttons 'bunch up'.<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=radio<br >Property "RadioFrmt"="width=100%"<br ><br ><br >
<td BgColor="silver" >
[[IMAGE:SELOP_radioHNB_Exampl1.gif]]
<tr name="Example0001Q028_Row" id="Example0001Q028_Row" ><td BgColor="white" >radio
<td BgColor="white" >This is an example of a simple radio button field. The control allows a list of options found in the selectops response options table to be displayed as a radio button list.<br ><br >Here we have used the "RadioFrmt" property to add a border around the controls, which can help if the option text is too long, resulting in bunching of the words. We have also set the width to a fixed width so that the words fit on one line. As you can see the radio buttons are actually presented in a table.<br ><br >Many other format options relevant to HTML tables can also be applied<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=radio<br >Property "RadioFrmt"="border=1 width=400px align=center"<br ><br ><br >
<td BgColor="white" >
[[IMAGE:SELOP_radioHB_Exampl1.gif]]
<tr name="Example0001Q029_Row" id="Example0001Q029_Row" ><td BgColor="silver" >radio
<td BgColor="silver" >This is an example of a simple radio button field. The control allows a list of options found in the selectops response options table to be displayed as a radio button list.<br ><br >Here we have used the "RadioAln" property to convert the default horizontal display into a left justified vertical display.<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=radio<br >Property "RadioAln"="Vert"<br ><br ><br >
</td><td BgColor="silver" >
[[IMAGE:SELOP_radioV_Exampl1.gif]]
<tr name="Example0001Q030_Row" id="Example0001Q030_Row" ><td BgColor="white" >list
<td BgColor="white" >This is an example of a list using the previous radio button response options. The control allows a list of options found in the selectops response options table to be displayed as a drop down or fixed list.<br ><br >The size field determines how many items to display at once: 1 indicates a drop down list, while a number greater than 1 shows a fixed list of that length.<br ><br >Note that, as a list, the nil 'no response' option is automatically added. This is to allow for the option of not selecting a list option, as with the radio buttons, where you can choose not to select a radio button at all.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=list<br >EditSize=1<br ><br ><br >
<td BgColor="white" >
[[IMAGE:SELOP_droplist_Exampl1.gif]]
<tr name="Example0001Q031_Row" id="Example0001Q031_Row" ><td BgColor="silver" >list
<td BgColor="silver" >This is an example of a list using the previous radio button response options. The control allows a list of options found in the selectops response options table to be displayed as a drop down or fixed list.<br ><br >The size field determines how many items to display at once: 1 indicates a drop down list, while a number greater than 1 shows a fixed list of that length.<br ><br >Note that, as a list, the nil 'no response' option is automatically added. This is to allow for the option of not selecting a list option, as with the radio buttons, where you can choose not to select a radio button at all.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=list<br >EditSize=6<br ><br ><br >
</td>
<td BgColor="silver" >
[[IMAGE:SELOP_list_Exampl1.gif]]
<tr name="Example0001Q032_Row" id="Example0001Q032_Row" ><td BgColor="white" >droplist
<td BgColor="white" >This is an example of a droplist using the previous radio button response options. The control allows a list of options found in the selectops response options table to be displayed as a drop down list. This is the same as the list option with editsize=1.<br ><br >Note that, as a list, the nil 'no response' option is automatically added. This is to allow for the option of not selecting a list option, as with the radio buttons, where you can choose not to select a radio button at all.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=droplist<br ><br ><br >
</td>
<td BgColor="white" >
[[IMAGE:SELOP_droplist_Exampl1.gif]]
<tr name="Example0001Q033_Row" id="Example0001Q033_Row" ><td BgColor="silver" >button
<td BgColor="silver" >This is an example of a button list using the previous radio button response options. The control allows a list of options found in the selectops response options table to be displayed as a button list. <br ><br >To use this set:<br >Input=selectop<br >DisplayType=button<br ><br ><br >
<td BgColor="silver" >
[[IMAGE:SELOP_button_Exampl1.gif]]
<tr name="Example0001Q034_Row" id="Example0001Q034_Row" ><td BgColor="white" >label
<td BgColor="white" >This is an example of a label list using the previous radio button response options. The control allows a list of options found in the selectops response options table to be displayed as a list of selectable labels. <br ><br >To use this set:<br >Input=selectop<br >DisplayType=label<br ><br ><br >
</td>
<td BgColor="white" >
[[IMAGE:SELOP_label_Exampl1.gif]]
<tr name="Example0001Q035_Row" id="Example0001Q035_Row" ><td BgColor="silver" >checkbox
<td BgColor="silver" align=right
>This is an example of a checkbox list using the previous radio button response options. The control allows a list of options found in the selectops response options table to be displayed as a list of selectable checkboxes. <br ><br >To use this set:<br >Input=selectop<br >DisplayType=checkbox<br ><br ><br >
<td BgColor="silver" >
[[IMAGE:SELOP_checkbox_Exampl.gif]]
<tr name="Example0001Q037_Row" id="Example0001Q037_Row" ><td BgColor="white" >
<td BgColor="white" align=center
colspan=2 >
<H3 >This Section provides examples of SELECTOP JUMP controls.</H3 >
<i >Selectop Jumps are controls performing various types of jump actions from a shared list of options held in the selectops response table.</i ><br >
The selectops group is probably the most heavily used group of input controls. The common element to these controls is that they all use a subset of the response options tables called the selectops to find a list. Selectop lists are SHARED by all surveys in an organisation, which means that you need to be careful about other surveys using the same selectop set. This is done so that responses between questions and surveys can be compared.
<br ><br >
The selectops entries used in the jumps are built as follows:
OpDisplayStr=The prompt to display (as for normal selectop list fields).
OpValStr=The SID to which to jump. Depending on the behaviour required, other arguments may be required, such as the OID, PID, RK, etc.
<br >
<br >
</td></tr>
<tr name="Example0001Q039_Row" id="Example0001Q039_Row" >
<td BgColor="silver" >jsmbutton
<td BgColor="silver" >This is an example of a simple survey selecting button (returning) jump list. The control allows a list of named surveys as options found in the selectops response options table to be displayed as a button list and jumped to, with a return to the current survey start on completion. The jumped survey is displayed in the same window as the current survey.<br ><br >Note - destinations for these jumps are SURVEYS.<br ><br >It uses the SIDO, PIDO, RKO, IIDO and OIDO fields to provide a return address, so the PID, RK, IID, and OID fields could be overridden if desired. <br ><br >The OpValStr must contain the destination SID. If other fields, such as the calling survey's IID or PID (etc), are to be overridden for the destination survey, they should be added to the OpValStr in the selectops table as follows:<br ><br >MyNextSurvey&IID=February<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=jsmbutton<br ><br ><br ><br >
<td BgColor="silver" >
[[IMAGE:Jsmbutton_Exampl1.gif]]
<tr name="Example0001Q040_Row" id="Example0001Q040_Row" >
<td BgColor="white" >jsmlabel
<td BgColor="white" >This is an example of a simple label returning jump list. The control allows a list of named surveys as options found in the selectops response options table to be displayed as a clickable label list and jumped to, with a return to the current survey start point on completion. The jumped survey is displayed in the same window as the current survey.<br ><br >Note - destinations for these jumps are SURVEYS.<br ><br >It uses the SIDO, PIDO, RKO, IIDO and OIDO fields to provide a return address, so the PID, RK, IID, and OID fields could be overridden if desired. <br ><br >The OpValStr must contain the destination SID. If other fields, such as the calling survey's IID or PID (etc), are to be overridden for the destination survey, they should be added to the OpValStr in the selectops table as follows:<br >MyNextSurvey&IID=February<br ><br >The size field is ignored.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=jsmlabel<br ><br ><br ><br >
<td BgColor="white" >
[[IMAGE:Jsmlabel_Exampl1.gif]]
<tr name="Example0001Q041_Row" id="Example0001Q041_Row" ><td BgColor="silver" >jsmwbutton
<td BgColor="silver" >This is an example of a simple window opening button jump list. The control allows a list of named surveys as options found in the selectops response options table to be displayed as a button list and "jumped to". The jumped survey is displayed in a NEW window without disturbing the current survey.<br ><br >Note - destinations for these jumps are SURVEYS.<br ><br >It uses the SIDO, PIDO, RKO, IIDO and OIDO fields to provide a return address, so the PID, RK, IID, and OID fields could be overridden if desired. <br ><br >The OpValStr must contain the destination SID. If other fields, such as the calling survey's IID or PID (etc), are to be overridden for the destination survey, they should be added to the OpValStr in the selectops table as follows:<br ><br >MyNextSurvey&IID=February<br ><br >This is the most common way to display a menu.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=jsmwbutton<br ><br ><br ><br >
<td BgColor="silver" >
[[IMAGE:Jsmwbutton_Exampl1.gif]]
<tr name="Example0001Q042_Row" id="Example0001Q042_Row" ><td BgColor="white" >jbutton
<td BgColor="white" >This is an example of a simple window opening button jump list with NO CONTEXT. The control allows a list of named URLs as options found in the selectops response options table to be displayed as a button list and "jumped to". The jumped URL is displayed in a NEW window without disturbing the current survey.<br ><br >The button does not assume a returning position, and requires a complete URL (ie. it does not assume a SurveyManager Survey is the jump destination).<br ><br >The OpValStr contains a full URL. <br ><br >This is the most common way to display a menu for non BPCSM jump destinations.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=jbutton<br ><br ><br ><br >
<td BgColor="white" >
[[IMAGE:Jbutton_Exampl1.gif]]
<tr name="Example0001Q043_Row" id="Example0001Q043_Row" ><td BgColor="silver" >jlabel<td BgColor="silver" >This is an example of a simple label jump list with NO CONTEXT. The control allows a list of named URLs as options found in the selectops response options table to be displayed as a label list and "jumped to". The jumped URL is displayed in the same window without any mechanism to return to the current survey.<br ><br >The label does not assume a returning position, and requires a complete URL (ie. it does not assume a SurveyManager Survey is the jump destination).<br ><br >The OpValStr contains a full URL. <br ><br >This is the most common way to display a menu for non BPCSM jump destinations.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=jlabel<br ><br ><br ><br >
</td><td BgColor="silver" >
[[IMAGE:Jlabel_Exampl1.gif]]
<tr name="Example0001Q044_Row" id="Example0001Q044_Row" ><td BgColor="white" >jfbutton
<td BgColor="white" >This is an example of a simple window opening button jump list preserving the current SM form values. The control allows a list of named URLs as options found in the selectops response options table to be displayed as a button list and "jumped to". The jumped URL is displayed in a NEW window without disturbing the current survey.<br ><br >The button does not assume a returning position, and requires a complete URL (ie. it does not assume a SurveyManager Survey is the jump destination). Unlike the jbutton, it submits the entire form content of the current survey page to the destination URL. In this way a survey page can be used to call a non-survey manager URL, but submit the Survey page's values in the post. Thus an SM form can be used as a form to another web application if the calling page's field names and id's are named to match those required by the destination.<br ><br >The OpValStr contains a full URL. <br ><br >This is the most common way to use a BPCSM survey as a web form for another web app.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=jfbutton<br ><br ><br ><br >
<td BgColor="white" >
[[IMAGE:Jbutton_Exampl1.gif]]
<tr name="Example0001Q045_Row" id="Example0001Q045_Row" ><td BgColor="silver" >jflabel
<td BgColor="silver" >This is an example of a simple label jump list preserving the current SM form values. The control allows a list of named URLs as options found in the selectops response options table to be displayed as a label list and "jumped to". The jumped URL is displayed in the same window as the current survey.<br ><br >The label does not assume a returning position, and requires a complete URL (ie. it does not assume a SurveyManager Survey is the jump destination). Unlike the jlabel, it submits the entire form content of the current survey page to the destination URL. In this way a survey page can be used to call a non-survey manager URL, but submit the Survey page's values in the post. Thus an SM form can be used as a form to another web application if the calling page's field names and id's are named to match those required by the destination.<br ><br >The OpValStr contains a full URL. <br ><br >This is the most common way to use a BPCSM survey as a web form for another web app in the same window as the current survey.<br ><br >To use this set:<br >Input=selectop<br >DisplayType=jflabel<br ><br ><br ><br >
<td BgColor="silver" >
[[IMAGE:Jlabel_Exampl1.gif]]
<tr name="Example0001Q046_Row" id="Example0001Q046_Row" ><td BgColor="white" >
<td BgColor="white" align=center colspan=2 >
<H3 >This Section provides examples of ADMINOP controls.</H3 >
<i >Adminop controls perform various survey system administration functions.</i ><br >
The common element to the adminop controls is that they all operate on the survey system tables - so be careful with these, as they result in permanent changes to the SM infrastructure.
<br >
<br >
<tr name="Example0001Q047_Row" id="Example0001Q047_Row" ><td BgColor="silver" >admlist
<td BgColor="silver" width=50%
>This is an example of an admlist text field populated with the reserved administration action list commands. This is a special kind of list used to manage surveys on line. In combination with appropriate "rules", the survey listing control and the user checklist control, it allows commands to be issued to publish a survey to a group of users, to lock or unlock access to the survey, and to distribute the survey to those users.<br ><br >When using this control to get a simple drop down list of selectable options, DO NOT set the question property ("admlist" ). The default action is to insert this list of options and "Nil", which will be interpreted by the SM engine as a "no response".<br ><br >Here we have set the size field to a number greater than 1, which creates a scrolling list.<br ><br >Otherwise it is the same as the single line text input control.<br ><br >The size field in this instance is used to indicate the number of options to display in the list box. A value of 1 creates a drop-down list, while any value greater than one creates a scrolling list.<br ><br >To use this set:<br >Input=textop<br >DisplayType=admlist<br >EditSize=A number representing the number of options to display.<br >Question property "admlist"=not set<br ><br >
<td BgColor="silver" >
[[IMAGE:AdmlistEmpl1.gif]]
<tr name="Example0001Q048_Row" id="Example0001Q048_Row" >
<td BgColor="white" >cbxuser
<td BgColor="white" width=50%
>This control provides a list of the current organisation's members with a checkbox attached to each one so that they can be flagged for some action checked in the rules.
<td BgColor="white" >
[[IMAGE:Cbxuser_example1.gif]]
<tr name="Example0001Q049_Row" id="Example0001Q049_Row" >
<td BgColor="silver" >filelist</td><td BgColor="silver" width=50%
>This control provides a selectable list of the files in the current organisation's folder list.
<td BgColor="silver" >
[[IMAGE:filelist_Exampl1.gif]]
<tr name="Example0001Q050_Row" id="Example0001Q050_Row" >
<td BgColor="white" >filermdr
<td BgColor="white" width=50%
>This control provides a mechanism to upload a reminder for the current survey.
<td BgColor="white" >
[[IMAGE:filermdr_Exampl1.gif]]
<tr name="Example0001Q051_Row" id="Example0001Q051_Row" >
<td BgColor="silver" >instdlist
<td BgColor="silver" >This control provides a selectable list of the instances available for the current survey.
<td BgColor="silver" >
[[IMAGE:instdlist_Exampl1.gif]]
<tr name="Example0001Q052_Row" id="Example0001Q052_Row" >
<td BgColor="white" >siddlist
<td BgColor="white" >This control provides a selectable list of surveys in the current organisation.
<td BgColor="white" >
[[IMAGE:Sidlist_Exampl1.gif]]
<tr name="Example0001Q053_Row" id="Example0001Q053_Row" >
<td BgColor="silver" >userlogin
<td BgColor="silver" width=50% >
This control provides a login combo box.
<td BgColor="silver" >
[[IMAGE:Userlogin_Exampl1.gif]]
<tr name="Example0001Q054_Row" id="Example0001Q054_Row" >
<td BgColor="white" >usermaker
<td BgColor="white" width=50% >This control provides a control for creating new users.
<td BgColor="white" >
[[IMAGE:Usermaker_Exampl1.gif]]
<tr name="Example0001Q055_Row" id="Example0001Q055_Row" >
<td BgColor="white" colspan=3 >
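The jump examples above can be drawn together into a single sketch. The field names (OpDisplayStr, OpValStr) and the question settings are those documented in the rows above; the selectop prompts and destination SIDs (RiskSurvey01, CompSurvey01) are hypothetical.
<pre>
Selectops entries (one row per menu option):
  OpDisplayStr=Risk Assessment Survey   OpValStr=RiskSurvey01
  OpDisplayStr=Compliance Checklist     OpValStr=CompSurvey01&IID=February

Question settings to display the set as a new-window button menu:
  Input=selectop
  DisplayType=jsmwbutton
</pre>
As documented for jsmwbutton above, each button opens the destination survey in a NEW window without disturbing the current survey, and the second entry overrides the destination survey's IID in the OpValStr.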
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
4dd8b441fe4a07a3d3fdaad33ea7788cff52a2c3
BPC SurveyManager - Creating Surveys - Properties
0
427
622
2019-09-10T15:42:35Z
Bishopj
1
Created page with "=Standard Properties= <table border=1 ><tr ><th >PropertyID</th ><th >PropGroup</th ><th >Description</th ><th >PValue</th ><th >OpDisplayType</th ><th >PDefault</..."
wikitext
text/x-wiki
=Standard Properties=
<table border=1 ><tr ><th >PropertyID</th ><th >PropGroup</th ><th >Description</th ><th >PValue</th ><th >OpDisplayType</th ><th >PDefault</th ></tr >
<tr ><td >*_TUCount</td ><td >default</td ><td >Where '*' is a short string representing a report name, this field represents the total user count query (numerator) for a question based report. Question based reports are invoked with &REP=ucnt,ulist,uresp,rprc,rcnt,rgrph,eulist,*&RMO=all,grp,**. Here REP is a list of selectors for one or more report modes - {uprc {user percentage responses}, ucnt {user count}, ulist {user response list}, uresp {user responses by user}, rprc {user responses percent}, rcnt {user responses count}, rgrph {user responses graph}, eulist {user list outside of exception cond}, * {user defined report matching the * in this field name}}. More than one of these can be used in the same report. RMO is either empty {this org only}, or 'all' {all orgs in Db}, or grp {the group of orgs of which this org is the parent} or ** {user defined report query defined in another property}. This last can change with each question if it is defined at the question property level. Allowed Qry Fields: SHOrgID, InstanceID,QID,SID</td ><td ></td ><td >default</td ><td ></td ></tr >
<tr ><td >*_URCount</td ><td >default</td ><td >Where '*' is a short string representing a report name, this field represents the response user count query (numerator) for a question based report. Question based reports are invoked with &REP=ucnt,ulist,uresp,rprc,rcnt,rgrph,eulist,*&RMO=all,grp,**. Here REP is a list of selectors for one or more report modes - {uprc {user percentage responses}, ucnt {user count}, ulist {user response list}, uresp {user responses by user}, rprc {user responses percent}, rcnt {user responses count}, rgrph {user responses graph}, eulist {user list outside of exception cond}, * {user defined report matching the * in this field name}}. More than one of these can be used in the same report. RMO is either empty {this org only}, or 'all' {all orgs in Db}, or grp {the group of orgs of which this org is the parent} or ** {user defined report query defined in another property}. This last can change with each question if it is defined at the question property level. Allowed Qry Fields: SHOrgID, InstanceID,QID,SID</td ><td ></td ><td >default</td ><td ></td ></tr >
<tr ><td >*_VWResp</td ><td >default</td ><td >Where '*' is a short string representing a report name, this field represents the response query for a question based report. Question based reports are invoked with &REP=ucnt,ulist,uresp,rprc,rcnt,rgrph,eulist,*&RMO=all,grp,**. Here REP is a list of selectors for one or more report modes - {uprc {user percentage responses}, ucnt {user count}, ulist {user response list}, uresp {user responses by user}, rprc {user responses percent}, rcnt {user responses count}, rgrph {user responses graph}, eulist {user list outside of exception cond}, * {user defined report matching the * in this field name}}. More than one of these can be used in the same report. RMO is either empty {this org only}, or 'all' {all orgs in Db}, or grp {the group of orgs of which this org is the parent} or ** {user defined report query defined in another property}. This last can change with each question if it is defined at the question property level. Allowed Qry Fields: SHOrgID, InstanceID,QID,SID</td ><td ></td ><td >default</td ><td ></td ></tr >
<tr ><td >ActionOnException</td ><td >default</td ><td >Define an action to undertake when the response received to a question is outside the acceptable range as defined by the exception flags for a question.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >Admlist</td ><td >default</td ><td >Comma separated list of response options available for an arbitrary list or drop list adminop. This list defaults to the administration action list, but could be any arbitrary list you want for which you don't want to create a fully blown selectop list. The value that appears in the response table will be the string itself. The selectop has the advantage of ensuring that all the selectable responses for all similar questions will have the same response values, but this property allows a quick & dirty response list to be assembled, with the risk that you will not be consistent across all similar questions, and you will use a lot more space to store the possible responses. Value is a comma delimited list with "" around options containing commas or spaces.</td ><td >val=publish,distribute,lock,unlock</td ><td >default</td ><td >publish,distribute,lock,unlock</td ></tr >
<tr ><td >AllowAdvncdPubOptions</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Manage Surveys). Allow the use of various advanced publication options including survey responder login.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowAdvncdQEdOptions</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Edit Surveys). Allow the display of advanced edit capabilities and views during question editing. This primarily relates to the survey header. In other SM clients this is on by default. In the web client the default is off for simplicity.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowAdvncdSEdOptions</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Edit Surveys). Allow the display of advanced edit capabilities and views during survey editing. In other SM clients this is on by default. In the web client the default is off for simplicity.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowAdvncdUMOptions</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Manage Users). Allow the display of advanced management capabilities and views during user management. In other SM clients this is on by default. In the web client the default is off for simplicity.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowAdvncdRMSMOptions</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Various). Allow the display of advanced risk manager survey integration controls and views throughout the web client. This is only relevant with integrated RM - SM databases. In other SM clients this is on by default. In the web client the default is off for simplicity.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowMultiSrvyInstances</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Manage Surveys). Allow the publication and management of multiple instances of a survey. In other SM clients this is on by default. In the web client instances are hidden by default for simplicity.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowRMSM</td ><td >default</td ><td >Survey Engine only. Allow the survey engine to access risk manager tables. When true the survey engine will open a connection to the BPC RMS tables allowing the display and update of RMS tables directly. Tags and properties referring to features of the RMS not generated directly into surveys will be available. This is only relevant with integrated RM - SM databases. In the Survey Engine the default is False for security.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowPubToExstUser</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Manage Surveys). Allow existing users to be selected for survey publishing in the Survey Manager web client. By default, the web maintenance client restricts responders to being specifically imported or manually added for each survey, to simplify the interface. It is usually better to turn this on.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AllowQDIDAutoNumCtrl</td ><td >organisation</td ><td >Survey Manager Maintenance web client only. (Edit Surveys). Allow the display of controls that optionally disable auto display numbering, and user defined ids. In other SM clients this is on by default. In the web client the default is autonumber for simplicity.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AlwaysDo</td ><td >default</td ><td >Force the rule associated with a question to always be executed - regardless of whether a response has been received. Normally rules are only executed on questions with a received response.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AutoPubFailIfNewOrgUser</td ><td >default</td ><td >Works with the AutoPublish property. AutoPubFailIfNewOrgUser causes the publication to FAIL if the user does NOT already exist in this organisation. In other words, setting this to true, means the user must already exist for the survey in order for the survey to be published to the user. This effectively stops autopublished surveys from just being published to any user not already a member of the current org.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AutoPubFailIfNewUser</td ><td >default</td ><td >Works with the AutoPublish property. AutoPubFailIfNewUser causes the publication to FAIL if the user does NOT already exist in the database and it has NOT been generated by the UserMakerGenPID property. In other words, setting this to true means the user must already exist or have been specifically created as an anonymous user in order for the survey to be published to the user. This effectively stops autopublished surveys from just being published to any new user unless anonymous users are specifically allowed.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >AutoPublish</td ><td >default</td ><td >If the AutoPublish flag is set for an org or survey, the survey being accessed is automatically published to the current user, and the next currently available instance automatically authorised to the user. AutoPublish also allows for the automatic creation of the user if the UserMakerGenPID property is true, otherwise it validates the user exists and if not creates the user. Uses the UserMaker properties AND AutoPubFailIfNewUser, AutoPubFailIfNewOrgUser. If the user exists, the RK or the Pwd must match for the survey to be published to the user. Setting the AutoPublish flag means the UserMaker question type is not needed.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >CBXShwDistributed</td ><td >default</td ><td >Set the survey ID (SID) to show on a CBXUser input operation, where the CBXUser control is being used to manage distribution of surveys. Shows 'distributed' where a reminder has been sent (the last step of a distribution action). If omitted, no message is displayed.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >CBXShwStarted</td ><td >default</td ><td >Set the survey ID (SID) to show on a CBXUser input operation, where the CBXUser control is being used to manage distribution of surveys. Shows 'started' where the user has accessed the SID. If omitted, no message is displayed.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >CBXUEdEmail</td ><td >default</td ><td >Flag User Email Edit on a CBXuser input operation.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >CBXUEdName</td ><td >default</td ><td >Flag User Name edit on a CBXUser input operation if true. </td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >CBXUFilter</td ><td >default</td ><td >Set the user filter to determine which users are displayed in a CBXUser input operation (filter string must match a filter held by the desired users). If blank, or omitted, all users in the current Org will be listed.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >CBXUOrgRole</td ><td >default</td ><td >Flag User Orgrole Edit on a CBXUser input operation.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >CBXUPwd</td ><td >default</td ><td >Flag password edit on a CBXUser input operation</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >CellFrmt</td ><td >default</td ><td >Special formats for insertion into the (primarily) radio button cells.</td ><td >val=width="20%"</td ><td >default</td ><td >False</td ></tr >
<tr ><td >Checked</td ><td >default</td ><td >Sets the checked option on a checkbox if True</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >ClearOnNil</td ><td >default</td ><td >Determines whether a pre-existing value in the response table should be cleared if no response to a question is provided by the user. This property is most important in textops where a nil response may mean the user has cleared the text box. For these it should generally be set to true. Default is false.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >ContinueButton</td ><td >survey</td ><td >Normally NOT required.The full HTML tag for a continue button (ie. an HTML submit input button), which causes the page to be sent to the SurveyManager in the default case - but can do anything that a submit button can normally do. </td ><td >val=<input type="submit" value="Continue" ></td ><td >default</td ><td ><input type="submit" value="Continue" ></td ></tr >
<tr ><td >ContinueIsEOS</td ><td >survey</td ><td >Always force the End Of Survey flag when the continue button is pressed, regardless of what the survey QScript rule is UNLESS question rules for the currently displayed page(s) have generated additional questions. A very useful property when one survey is used to select the questions to display in a second survey. This is most commonly used when the current survey has been branched to from another survey, and the EOS state of the current survey would result in a return to the previous survey, or a previously stacked additional branch. Thus after one page the current survey will always return to the calling or next survey. Normally a flag with this property set would display all questiongroups on a single page, and might use a property based QScript to select the QGroups to display. Currently this flag must be set at the survey level.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >CSSSheet</td ><td >survey</td ><td >Cascading Style Sheet values for this survey. Replaces the THTML tag '<#CSSSheet >' in the survey layout. This property must be used if you want to control printing. </td ><td >val=<style type="text/css" >
<br >.breaka {page-break-after: always}
<br >.breakb {page-break-before: always}
<br >@page {margin: 1cm }
<br ></style ></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >CSSUse</td ><td >survey</td ><td ><br class="breaka">
<br ></td ><td >val=<br class="breaka"></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >customop</td ><td >default</td ><td >HTML string, defining a custom input control. Any HTML block is valid. The customop input type allows custom or dcustom (single or double column) input controls to be defined and inserted where the predefined input controls would normally go for this question type. No tag expansion is performed on these HTML strings.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >DefaultUserEmail</td ><td >survey</td ><td >The default email to assign to a new user when the UserMaker (or other user creation method) is used to create a new user.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >DONESID</td ><td >survey</td ><td >Not a general property as such, but a property parameter of the PreReq property. It is the Survey ID (SID) of the survey that must be completed before the current survey can be attempted. Only one DONESID can be provided.</td ><td >val=Survey:</td ><td >prereq</td ><td ></td ></tr >
<tr ><td >DoQRLif</td ><td >default</td ><td >Identifies the question on this page to activate (make visible) if the response is above or below or equal to a specific value. Format of the value to this property is: "[gt/lt/eq/neq/between/nbetween, lowvalue, highvalue, QRLrow]". Options are enclosed in [] and any number can be listed. The result is that the row(s) (or other question containers) containing the invisible question and response parts (etc) identified by the QRLs are made visible (if previously invisible) when the test is true, and hidden when it is false. A QRLRow is a QuestionID with "_Row" appended. So the QRLRow for Q001 is Q001_Row. A QRLRow can in fact be any DOM region - it does not have to be a question display area only. A list of QRLRows is separated by ";". If the test does not require a low and high bound, then put the numeric part in the low bound position and 0 in the high bound. For example: "['lt', 4, 0, 'Q01a_Row;Q01b_Row;MyDiv']" in the DoQRLIf property for a question means "show questions Q01a and Q01b and DOM display object "MyDiv" if the value of the current question response is less than 4". </td ><td >val=[lt/eq/gt, vallow, valhi, QRLRow]</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >DoQRLifWght</td ><td >default</td ><td >Identifies the question on this page to activate (make visible) if the response to a user weighting is above, below or equal to a specific value. Format of the value to this property is: "[gt/lt/eq/neq/between/nbetween, lowvalue, highvalue, QRLRow]". Options are enclosed in [] and any number can be listed. The result is that the identified QRLRows are made visible if they were previously invisible. Refer to the notes for DoQRLIf. </td ><td >val=[lt/eq/gt, vallow, valhi, QRLRow ]</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >ErrorHTML</td ><td >default</td ><td >THTML tag or HTML layout for the question specific error message. Typically used when a required question is omitted. </td ><td >val=<#ErrorMsg colspan=3 align=center BgColor=yellow style="color:red;font:bold;" ></td ><td >default</td ><td ><#ErrorMsg colspan=3 align=center BgColor=yellow style="color:red;font:bold;" ></td ></tr >
<tr ><td >ErrorMsgPos</td ><td >default</td ><td >Position of the question specific error message.</td ><td >lst=above,below</td ><td >default</td ><td >below</td ></tr >
<tr ><td >GenPageReqErrorMsg</td ><td >survey</td ><td >The HTML message to use for the '<#errormessage >' THTML tag in a survey layout.</td ><td >val="***Please Note: You have not filled in a required question. Please refer to the identified question below. ***"</td ><td >default</td ><td >You omitted the responses to some required questions on this page. Please refer to the individual highlighted omissions.</td ></tr >
<tr ><td >GLV</td ><td >default</td ><td >Show the last response provided for a question</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >Hint</td ><td >default</td ><td >If present for a question, a hint button will be inserted and the contents of the property will be displayed in a pop-up window.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >Invisible</td ><td >default</td ><td >Control visibility of a question. Use this property to create questions that are displayed when the user performs some action (such as providing an answer to another question in the same question group or page that is outside of a given range). Note: Invisible questions should not have the required property turned on.</td ><td >lst=True,False</td ><td >default</td ><td ></td ></tr >
<tr ><td >ItemFrmt</td ><td >default</td ><td >The HTML attribute format values used by selectop controls (buttons, some labels, etc) to control item specific (as opposed to hosting table level) format characteristics. The most common purpose is to make a vertical list of buttons expand to the full width of the hosting table.</td ><td >val=width="100%"</td ><td >default</td ><td >minimum fit</td ></tr >
<tr ><td >JMPACTN</td ><td >default</td ><td >The URL or SM param line to which a jump op should jump for the next page. The content of the property is determined by the type of jump op:<br>
<ul><li> jlabel, jflabel, jbutton, jfbutton - all require a full URL as the JMPACTN. It is not likely that you will return after these actions.
<li> jsmlabel, jsmbutton, jsmwbutton - all require a SM SID as the first argument followed by other SM arguments separated by "&". It is most common to include the question group to which SM should return on falling back to the current survey as the QGRPO argument. Eg: "MySurvey&QGRPO=Page5"
</ul></td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >JMPTXT</td ><td >default</td ><td >The text or caption to display on a jump op input control (label, button, etc). The equivalent value in a selectop is found in the selectop record for the selectop item, but in jump ops there is no such table available so the value is supplied from this property.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >JMPATRB</td ><td >default</td ><td >The extra attributes to insert in the tag of a jump op input control (label, button, etc). There is no equivalent value in a selectop.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >LoginRequired</td ><td >default</td ><td >Impose a login requirement at commencement of the survey. If the user has not logged in and no login data has been supplied, a login screen is returned before the survey is displayed. </td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >LoginRetryCount</td ><td >default</td ><td >The maximum number of login attempts before the authorisation failure message is displayed.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >LoginOrgOK</td ><td >default</td ><td >If true, where survey login is required, the survey does not have to be published to the user; access to the org alone is sufficient.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >Multiple</td ><td >default</td ><td >Enable multiple selections (admlist, admdlist, filelist) if true.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >MOHint</td ><td >default</td ><td >Enable mouse over hints displayed in a user defined area of the screen for select ops and jump ops. Mouseover hints are drawn from the hint field of the select op table or for jump ops, the MOHintTXT property. Enabled if true.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >MOHintID</td ><td >default</td ><td >The ID of the user defined page region into which to write the hint text during mouse over events from select ops and jump ops. Mouseover hints are drawn from the hint field of the select op table or for jump ops, the MOHintTXT property. A DIV is typically used as the hint container referenced by this property.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >MOHintTXT</td ><td >default</td ><td >The Text to display in the user defined page region as the hint text during mouse over events from jump ops. Mouseover hints are drawn from the hint field of the select op table or for jump ops, the MOHintTXT property. A DIV is typically used as the hint container referenced by this property.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >NewUserFilter</td ><td >survey</td ><td >This is the filter to add to a new user when auto-creating users with a 'usermaker' displayop. While a usermaker is normally used in a dedicated survey, that survey is generally invoked from a survey that restricts all but the first couple of questions with this filter, and restricts the first questions to a 'GUEST' PID that has only the 'GUEST' filter attached. You will need this property to be set in order to allow the remainder of the survey to be seen only by the newly created user. </td ><td ></td ><td >default</td ><td ></td ></tr >
<tr ><td >NILDSTR</td ><td >default</td ><td >Set the string to display if no selection is made in a drop list or list. The default is Nil and you cannot set this value to nothing - Nil will be used instead.</td ><td >val=Nil</td ><td >default</td ><td >Nil</td ></tr >
<tr ><td >NoContinue</td ><td >survey</td ><td >Suppress the continue button that normally appears at the bottom of each page. This button posts the survey responses.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >NoFirstContinue</td ><td >default</td ><td >Suppress the first continue button, but allow continues on the following pages (useful when the first page contains a login question).</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >NOPRVW</td ><td >default</td ><td >Disable preview mode on the survey</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >NoTitles</td ><td >default</td ><td >Suppress titles on horizontally aligned radio buttons.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >NoWght</td ><td >default</td ><td >Suppress display of the weight response section for the current question(s).</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >NoWghtTitles</td ><td >default</td ><td >Suppress display of the weight response section option titles when weight options are displayed as radio buttons for the current question(s).</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >NOXHELP</td ><td >default</td ><td >Enable/disable the help table xhlp display tag. Must be explicitly set to False to allow the tag to be used in a survey (ie. default is True).</td ><td >lst=True,False</td ><td >default</td ><td >True</td ></tr >
<tr ><td >OIDProp</td ><td >default</td ><td >Find the correct OID based on a property of the Person, Survey or Organisation. The value in OIDProp is the property to look up. If no value is found in the associated property, the existing passed OID value will be used. This mechanism is intended to allow a user's survey response to be automatically allocated to an organisation based on a property of the user, the survey, or the original named org in the survey. It provides a way to allocate responses across organisations. Two models are supported: 1. Where the OIDProp property is assigned in the survey or organisation, use this value as the POIDValue - regardless of the paramvalue; 2. Where the POIDValue is assigned in the survey params (unless suppressed by an OIDProp value of 'nil'). You can block property org selection by setting it to 'nil' in the survey or the org.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >pexe</td ><td >default</td ><td >Replaces the <#pexe > THTML tag used for question specific plugin function calls. The default property name 'pexe' for a plugin function call can be overridden with a call of the form <#pexe pname="mypluginfunc" > where "mypluginfunc" becomes the name of the property. The value of the property should be of the form: exe( plugin_dll_name, plugin_func_name, param1, param2..paramN ) where all values in the brackets are strings (denoted by "" or '') or QRLs containing valid response values and plugin_dll_name and plugin_func_name are as the names imply and param1..paramN are the arguments to the plugin call. The plugin dll must be loaded for the call to work.</td ><td >val= exe( dll_name, func_name, params )</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >PIDPROPNN</td ><td >survey</td ><td >Not a general property as such, but a property parameter of the PreReq property. It is a Property of the user that must have a non empty and non nil value before the current survey can be attempted. Any number of PIDPROPNNs can be provided.</td ><td >val=Property Name:</td ><td >prereq</td ><td ></td ></tr >
<tr ><td >PLGIN</td ><td >default</td ><td >Causes a plugin function call result to be inserted in the response field of an input control (text, text area, numeric, date) and the list of an admlist or admdlist. The value of the property should be of the form: exe( plugin_dll_name, plugin_func_name, param1, param2..paramN ) where all values in the brackets are strings (denoted by "" or '') or QRLs containing valid response values and plugin_dll_name and plugin_func_name are as the names imply and param1..paramN are the arguments to the plugin call. The plugin dll must be loaded for the call to work.</td ><td >val= exe( dll_name, func_name, params )</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >PostAction</td ><td >survey</td ><td >Normally NOT required. The address to which to send a page when the continue button is pressed (in HTML terms, the action attribute of the form). If omitted, the page is posted back to SurveyManager, but it can be overridden so that a SurveyManager form posts to an entirely different application.</td ><td ></td ><td >default</td ><td >Post the page to BPC SurveyManager</td ></tr >
<tr ><td >PostMethod</td ><td >survey</td ><td >Sets the method used for sending a page to the survey manager or the target web server/web application. Although a SurveyManager form is normally 'posted' to the SurveyManager application, there is no actual requirement for it to either be posted, or even sent to SurveyManager as the form target. Any web application can be used as a page target.</td ><td >lst=post,get</td ><td >default</td ><td >post</td ></tr >
<tr ><td >PostType</td ><td >survey</td ><td >Defines the type of data being posted. Normally NOT required (and therefore defaulting to application/x-www-form-urlencoded), but this value MUST be set if the form contains a file displayop because uploading a file to the server requires multipart form posting. For file uploading use the 'multipart' short form; for anything else leave blank or enter the full HTML ENCTYPE attribute value (excluding the word 'ENCTYPE').</td ><td >val=multipart</td ><td >default</td ><td >application/x-www-form-urlencoded</td ></tr >
<tr ><td >PreReq</td ><td >survey</td ><td >A list of prerequisites for the survey / survey instance. Currently supports DONESID (the current responder must have completed the survey identified by the DONESID property) and PIDPROPNN (the responder must have a non-nil and non-empty value for the property PIDPROPNN). Only one DONESID can be listed but any number of PIDPROPNN can be included: eg. usage is PreReq="DONESID=xyz,PIDPROPNN=ROOM", which means a responder must have completed survey "xyz" and have property "ROOM" defined with a value other than "nil" or empty. </td ><td >val=DONESID=xxx,PIDPROPNN=yyy</td ><td >default</td ><td ></td ></tr >
<tr ><td >PRPROPREQD</td ><td >default</td ><td >The error message to display if a property named in any of the PIDPROPNN prerequisites is not found or is nil for the user, ie. the prerequisite is not satisfied. Can be defined at the Org, Survey or Instance level. If not defined, a default message is displayed.</td ><td >val=</td ><td >default</td ><td >At least one prerequisite for completing this survey has not been met. You must complete all prerequisites before taking this survey.</td ></tr >
<tr ><td >PRSIDREQD</td ><td >default</td ><td >The error message to display if the survey identified by the DONESID prerequisite has not been completed by the user, ie. the prerequisite is not satisfied. Can be defined at the Org, Survey or Instance level. If not defined, a default message identifying the required survey is displayed.</td ><td >val=</td ><td >default</td ><td >At least one prerequisite for completing this screen has not been met. You must complete ..SID..</td ></tr >
<tr ><td >RadioAln</td ><td >default</td ><td >Either Vert or Horz (or nothing). This property determines whether radio buttons, selectop buttons & selectop labels are presented across the page (the default) or vertically down the page (if Vert).</td ><td >lst=Vert,Horz</td ><td >default</td ><td >Horizontal</td ></tr >
<tr ><td >RadioFrmt</td ><td >default</td ><td >The HTML attribute format values for radio buttons and the hosting table of selectop buttons & labels (see ItemFrmt for individual button/label formatting). Used primarily to control the width of the button table. By default, radio button tables are left justified. To spread buttons over the entire response column use "width=100%". </td ><td >val=width="100%"</td ><td >default</td ><td >left justified</td ></tr >
<tr ><td >REPPIDOK</td ><td >survey</td ><td >A comma separated list of PIDs that have been granted reporting access for the current survey, that would not otherwise have report access.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >ReqErrorMsg</td ><td >default</td ><td >Replaces the <#ErrorMsg> THTML tag used for question specific error messages.</td ><td >val=The question above is a required question. Please complete prior to continuing.</td ><td >default</td ><td >Error: This field is required - please try again.</td ></tr >
<tr ><td >Required</td ><td >default</td ><td >This field is required. A nil response is rejected.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >RMDREdit</td ><td >default</td ><td >Enable editing of Survey Reminders, if true. (Requires SIDEOID, SIDESID, SIDERNM )</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >RPTCCellFrmt</td ><td >default</td ><td >Table cell format to be used in the survey question list summary report.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >RPTCFields</td ><td >default</td ><td >List the fields, separated by ';', to be included in a survey question list summary report. Only the fields available in the selected report are actually displayed. The default fields are: OpDisplayStr;ResponseStr;OpVal;RSCount;ODSCount;OVCount;OVDCount. The count fields (RSCount;ODSCount;OVCount;OVDCount) count the number of responses and are used with the RPTTField to calculate ratios etc.</td ><td >val=OpDisplayStr;ResponseStr;OpVal;RSCount;ODSCount;OVCount;OVDCount</td ><td >default</td ><td >OpDisplayStr;ResponseStr;OpVal;RSCount;ODSCount;OVCount;OVDCount</td ></tr >
<tr ><td >RPTCHeadings</td ><td >default</td ><td >List the headings, separated by ';' to be included in a survey question list summary report. Only the headings available in the selected report are actually displayed. The default headings are: OpDisplayStr;ResponseStr;OpVal;RSCount;ODSCount;OVCount;OVDCount. </td ><td >val=OpDisplayStr;ResponseStr;OpVal;RSCount;ODSCount;OVCount;OVDCount</td ><td >default</td ><td >OpDisplayStr;ResponseStr;OpVal;RSCount;ODSCount;OVCount;OVDCount</td ></tr >
<tr ><td >RPTRCellFrmt</td ><td >default</td ><td >Table cell format to be used in the survey question list report.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >RPTRFields</td ><td >default</td ><td >List the fields, separated by ';', to be included in a survey question list report. Only the fields available in the selected report are actually displayed. The default fields are: PID;Person;OpDisplayStr;ResponseStr;OpVal;OpValDate. </td ><td >val=PID;Person;OpDisplayStr;ResponseStr;OpVal</td ><td >default</td ><td >PID;Person;OpDisplayStr;ResponseStr;OpVal</td ></tr >
<tr ><td >RPTRHeadings</td ><td >default</td ><td >List the headings, separated by ';', to be included in a survey question list report. Only the headings available in the selected report are actually displayed. The default headings are: PID;Person;OpDisplayStr;ResponseStr;OpVal;OpValDate. </td ><td >val=PID;Person;OpDisplayStr;ResponseStr;OpVal</td ><td >default</td ><td >PID;Person;OpDisplayStr;ResponseStr;OpVal</td ></tr >
<tr ><td >RPTTField</td ><td >default</td ><td >The field name of the total count of user responses used in a survey question list summary report query. Defaults to RCount if not specified. Used as the denominator of a percentage or ratio calculation.</td ><td >val=RCount</td ><td >default</td ><td >RCount</td ></tr >
<tr ><td >SIDEdit</td ><td >default</td ><td >Enable editing of Survey Headers, if true. (Requires SIDEOID, SIDESID, SIDEFLD )</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >SIDEFLD</td ><td >default</td ><td >Survey Header Field to edit</td ><td >val=owner</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >SIDEOID</td ><td >default</td ><td >Org ID for Survey Header & Reminder edits.</td ><td >val=default</td ><td >default</td ><td >Current OrgID</td ></tr >
<tr ><td >SIDERNM</td ><td >default</td ><td >Survey Reminder number to edit.</td ><td >val=1</td ><td >default</td ><td >1</td ></tr >
<tr ><td >SIDESID</td ><td >default</td ><td >Survey ID for Survey Header and Reminder edits.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >Sidlstfilter</td ><td >default</td ><td >Set the filter that determines which surveys are displayed in a sidlist input operation. SIDs must have the mentioned filter in their filter list to be displayed. All SIDs will be listed if omitted UNLESS the SID has a filter and the user has a filter, in which case only SIDs matching a user's filter will be displayed. This property overrides that behaviour.</td ><td >val=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >SPLGINDLL</td ><td >survey</td ><td >Comma separated list of plugin dll names to be used for a specific survey - loaded and unloaded with each page. Do not include the 'dll' extension. These plugins should be stored in the AdHoc directory of the plugins library.</td ><td >lst=</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >SuppressFirstContinue</td ><td >default</td ><td >Suppress the first page continue button.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >SuppressFirstHeading</td ><td >survey</td ><td >Suppress the heading on the first survey page if true.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >SurveyEnd</td ><td >survey</td ><td >The message to be displayed at the end of a survey.</td ><td >val=<P ><H1 align=center ><font color="blue">The Survey is now complete.</font></H1></P><br><P><H3 align=center >Should you wish to revise your responses, you can do so prior to the survey response cut off date by using<br >
<br >the browser's back button now, or reclicking on the link in the original email later. <br ><br >
<br >Thankyou again for your participation.</H3></P><br><br><br></font>
<br ><form method=get action="http://www.acumenalliance.com.au/" ><input type=submit value="Close Survey" ></form><br><br><br><br><br><br><br>.<font color=white ></td ><td >default</td ><td ><H1>Survey Finished. Thankyou.</H1></td ></tr >
<tr ><td >TCELLEND</td ><td >default</td ><td >(CellEnd ) Define a question component end string - usually </tr > Only required if you are changing the layout or size of the cells (other than just changing the order of the columns). Must be paired with a TCELLSTART. The HTML code used to mark the end of a cell. A question line consists of components such as displayed ID, question and response, etc.</td ><td >val=</td ></td ><td >default</td ><td ></td ></td ></tr >
<tr ><td >TCELLSTART</td ><td >default</td ><td >(CellStart ) Define a question component start string - usually <tr > Only required if you are changing the layout or size of the cells (other than just changing the order of the columns). Must be paired with a TCELLEND, unless you are using the standard cell based cell marker. The HTML code used to mark the start of a cell. A question line consists of components such as displayed ID, question and response, etc.</td ><td >val=<td ></td ><td >default</td ><td ><td ></td ></tr >
<tr ><td >TH</td ><td >quesgroup</td ><td >(TableHeading) Define a question group section heading. Options are empty(ignore), Nil(No heading),Default(Survey default), or anything else which will be treated as a heading definition. Note that if you are changing more than just the order of the columns, you should insert a <tr ></tr > row definition pair.</td ><td >val=<tr ><#ID ><#Question ><#Response ></tr ></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TR</td ><td >quesgroup</td ><td >(TableRow) Define a question OR question group standard row definition. Values are any valid cell list reference.</td ><td >val=<#QuesDisplayID><#Question><#OpType></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TRC</td ><td >quesgroup</td ><td >(TableRowCheckBox) Define a question OR question group checkbox input control row display. Custom input tags are defined in person/question/survey properties entirely. Be careful about naming - the input field MUST use the QID for name and ID or it will not be seen by the response control engine.</td ><td >val=<#QuesDisplayID><#Question align=right><#OpType></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TRCOLOUR</td ><td >quesgroup</td ><td >(TableRowColour-cycle ) Define a question group colour cycle. Typically this is a list of one or more colours separated by commas.</td ><td >val=white,silver</td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TRDCUS</td ><td >quesgroup</td ><td >(TableRowCustom control) Define a question OR question group double width custom input control row display. Custom input tags are defined in person/question/survey properties entirely. Be careful about naming - the input field MUST use the QID for name and ID or it will not be seen by the response control engine.</td ><td >val=<#QuesDisplayID ><td colspan=2 ><table width="100%" ><tr ><#Question ><#OpType ></tr ></table ></td ></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TRDTA</td ><td >quesgroup</td ><td >(TableRowDoubleTextArea )Define a question OR question group double width text area input control row display. Custom input tags are defined in person/question/survey properties entirely. Be careful about naming - the input field MUST use the QID for name and ID or it will not be seen by the response control engine.</td ><td >val=<#QuesDisplayID ><td colspan=2 ><table width="100%" ><tr ><#Question ><#OpType ></tr ></table ></td ></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TREND</td ><td >default</td ><td >(TableRow Row End) Define the table row ending characters for questiongroup/question level format override. Must be paired with TRSTART, unless you are using the standard TREND</td ><td >val=</tr ></td ><td >default</td ><td ></tr ></td ></tr >
<tr ><td >TRL</td ><td >quesgroup</td ><td >(TableRowLongLable) Define a question OR question group long info label input control row display. Custom input tags are defined in person/question/survey properties entirely. Be careful about naming - the input field MUST use the QID for name and ID or it will not be seen by the response control engine.</td ><td >val=<#QuesDisplayID><#Question align=center colspan=2 ></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TRSTART</td ><td >default</td ><td >(TableRow Row Start) Define the table row starting characters for questiongroup/question level format override. must be paired with TREND</td ><td >val=<tr ></td ><td >default</td ><td ><tr ></td ></tr >
<tr ><td >TTEND</td ><td >quesgroup</td ><td >(TableEnd ) Define a question group section end string - usually </table > Only required if you are changing the layout or size of the cells (other than just changing the order of the columns), or have used a TTSTART for this question group.</td ><td >val=</table ></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >TTSTART</td ><td >quesgroup</td ><td >(TableStart ) Define a question group section start string - usually <table > Only required if you are changing the layout or size of the cells (other than just changing the order of the columns). Must be paired with a TTEND</td ><td >val=<table ></td ><td >default</td ><td >nil</td ></tr >
<tr ><td >ULEmail</td ><td >default</td ><td >User email name field</td ><td >val=EMail:</td ><td >default</td ><td >EMail:</td ></tr >
<tr ><td >ULPWD</td ><td >default</td ><td >User password field name</td ><td >val=Password:</td ><td >default</td ><td >Password:</td ></tr >
<tr ><td >ULUserId</td ><td >default</td ><td >UserId field name</td ><td >val=UserId:</td ><td >default</td ><td >UserId:</td ></tr >
<tr ><td >ULUserName</td ><td >default</td ><td >User name field name</td ><td >val=Name:</td ><td >default</td ><td >Name:</td ></tr >
<tr ><td >UserAutoLockOn</td ><td >default</td ><td >Causes the current survey instance to be responded to only once by the current user. The value of UserAutoLockOn is a filter string that will cause the current instance of the survey to be locked when the current user reaches the end IF the current user has the same string in their filter list. If the same user then attempts to answer the survey again they will be allocated a new instance, if any are available, or blocked from accessing it.</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >UserLogin</td ><td >default</td ><td >List of user fields collected in a userlogin adminop</td ><td >lst="UID,PWD",UID</td ><td >userlogin</td ><td >UID,PWD</td ></tr >
<tr ><td >UserMaker</td ><td >default</td ><td >List of user fields collected in a usermaker adminop. When the response is processed by the server UserMaker works in conjunction with a number of other optional properties: UserOrgMerge, UserNameIsUserID, NewUserFilter, UserMakerMakeKey, UserMustMatchOrNotExist, UserMakerGenPID, DefaultUserEmail, AutoPublish. It generates a number of input values in fields: UNAME, EMAIL</td ><td >lst="UID,PWD,UNAME,EMAIL",UID,"UID,UNAME,EMAIL"</td ><td >usermaker</td ><td >UID,PWD,UNAME,EMAIL</td ></tr >
<tr ><td >UserMakerCanEditFlag</td ><td >default</td ><td >Enable editing of user details on create</td ><td >lst=True,False</td ><td >usermaker</td ><td >False</td ></tr >
<tr ><td >UserMakerGenPID</td ><td >default</td ><td >Where AutoPublish is True, AND you do not supply a PID in the call to get survey (to automatically publish the survey and the current instance to a user on access) you can set UserMakerGenPID to auto generate a GUID to uniquely identify the user. This enables anonymous surveys. In this case UserOrgMerge is ignored. Note: You will not be able to track responders with anonymous surveys.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >UserNameIsUserID</td ><td >default</td ><td >Set the user name to the PID value on creation.</td ><td >lst=True,False</td ><td >usermaker</td ><td >False</td ></tr >
<tr ><td >UseWeight</td ><td >survey</td ><td >Enable the extra weight/importance column for every question in the survey. The results for this are recorded as a floating point number between 0 and 1 in the weight column of each response. Typical use is to get a user importance rating for each question.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >UserOrgMerge</td ><td >default</td ><td >Enable adding of the ID entered by a user to the OrgID to create a new user, or login (if True). It is strongly recommended that, if you allow autocreation of user ids AND your orgs are independent entities that do not share responders, you also set this to True, as otherwise user IDs can be shared across orgs. This will ensure that ids are unique to this org when the same id is reused in another org in the same database.</td ><td >lst=True,False</td ><td >default</td ><td >False</td ></tr >
<tr ><td >WghtInfoOp</td ><td >default</td ><td >Force the display of weightings for an InfoOp (non input text or image display). </td ><td >lst=False,True</td ><td >default</td ><td >False</td ></tr >
<tr ><td >WghtOpList</td ><td >default</td ><td >Force the displayed names of weightings to be drawn, from least to most, from a comma separated list provided. The option may be suppressed by setting the property to "Nil", which will result in the default numeric display. Where commas or spaces are required in a list item, surround the item with double quotation marks (").</td ><td >val=</td ><td >default</td ><td ></td ></tr >
<tr ><td >WghtRadioAln</td ><td >default</td ><td >Either Vert or Horz (or nothing). This property determines whether weight response section radio buttons & selectop labels are presented across the page (the default) or vertically down the page (if Vert).</td ><td >lst=Vert,Horz</td ><td >default</td ><td >Horizontal</td ></tr >
<tr ><td >WghtScale</td ><td >default</td ><td >Set the number of discrete weight selection options to be displayed in the weighting response section for this question(s). The resulting value will be an evenly spaced division of the weight value range from 0 to 1.</td ><td >val=5</td ><td >default</td ><td >5</td ></tr >
<tr ><td >WghtType</td ><td >default</td ><td >Set the display format for entering the discrete weight selection options to be displayed in the weighting response section for this question(s). Text will allow any floating point value to be entered. Other options provide an evenly spaced division of the weight value range across 0 to 1.</td ><td >lst=radio,text,list,droplist</td ><td >default</td ><td >radio</td ></tr >
</table >
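As an illustration, the even division described for WghtScale above can be sketched as follows. This is a hypothetical helper, not engine code, and it assumes the options include both endpoints 0 and 1; the engine's exact rounding and endpoint handling may differ.

```python
def weight_options(scale=5):
    """Return `scale` evenly spaced weight values spanning 0 to 1,
    as one plausible reading of the WghtScale behaviour (assumption:
    both endpoints are included in the option set)."""
    if scale < 2:
        return [1.0]
    step = 1.0 / (scale - 1)
    return [round(i * step, 4) for i in range(scale)]

# With the default WghtScale of 5 this yields 0, 0.25, 0.5, 0.75, 1.
print(weight_options(5))
```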
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
BPC SurveyManager - The Built In Reports
0
428
623
2019-09-10T15:44:50Z
Bishopj
1
Created page with "=Standard Reports= ==Introduction== You do not have to know anything about the database to get a report. Every survey automatically has a number of reports and groupings av..."
wikitext
text/x-wiki
=Standard Reports=
==Introduction==
You do not have to know anything about the database to get a report. Every survey automatically has a number of reports and groupings available without you doing anything. The reports use the survey layout to deliver their output. These reports are:
*Individual responses by question and person
*Response count by question
*Responses by question
*Responder's name by question
*Count breakdown of responses by question
*Percentage breakdown of responses by question
*Percentage breakdown pie-chart by question
There are a number of predefined views that feed these reports and provide various groupings by survey:
*By user
*By Organisation
*By Region
*By Database
==Invoking a Report==
Reports are invoked by adding two tagattributes to a survey URL:
*RMO=xxx - Report MultiOrg Mode. Where xxx is one of:
**org - the current organisation only
**grp - all organisations in the current organisation-group / region of which this organisation is the parent
**all - all organisations in the database
*REP=yyy - Report Mode where yyy is one of:
**uprc - user percentage responses
**ucnt - user count
**ulist - user response list
**uresp - user responses by user
**rprc - user responses percent
**rcnt - user responses count
**rgrph - user responses in a pie chart. This report places a heavy load on the server as the charts are dynamically generated server side. Sometimes you may have to refresh the report to ensure that all charts are delivered. The charts are wrapped in links so that clicking on a chart provides a full screen version of the chart.
**eulist - user list outside of exception cond
The REP attribute will accept a list of report types separated by commas. Use with care, as these can create very long reports and not all report types work well together.
For example:
<pre>http://xxxx/BPCSurveyManager1.dll/DoSurvey?SID=MySurvey&OID=MyOrg&PID=MyID&RMO=org&REP=uprc</pre>
would generate a report containing all the questions in MySurvey, with the response section replaced by the percentage responses for the organisation.
By default, reports attempt to deliver all questions in the survey to a single browser page. Sometimes this is not convenient, so an additional tagattribute is available to restrict the report to a fixed number of questions per browser page:
*RPS=zzz - The maximum number of questions in a report page. Where zzz is a number.
To support subset question reports a special tagattribute is provided:
*RQRL=qrlslist - Where qrlslist is a comma separated list of QRLs (Question Resource Locators)
The resulting report will include only the questions in the list.
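The tagattributes above can be composed programmatically. A minimal sketch, assuming the URL structure shown in the example; the helper itself, the base URL and the IDs are placeholders and not part of SurveyManager:

```python
from urllib.parse import urlencode

def report_url(base, sid, oid, pid, rmo="org", rep="uprc",
               rps=None, rqrl=None):
    """Compose a SurveyManager report URL from the RMO/REP/RPS/RQRL
    tagattributes described above (hypothetical helper)."""
    params = {"SID": sid, "OID": oid, "PID": pid, "RMO": rmo, "REP": rep}
    if rps is not None:
        params["RPS"] = rps               # max questions per report page
    if rqrl:
        params["RQRL"] = ",".join(rqrl)   # restrict report to listed QRLs
    # keep commas/dots readable in the query string
    return base + "/DoSurvey?" + urlencode(params, safe=",.")

print(report_url("http://xxxx/BPCSurveyManager1.dll",
                 "MySurvey", "MyOrg", "MyID", rmo="org", rep="uprc"))
```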
=Access=
To view a report, the user accessing the report must have super or admin rights assigned to them at the organisation level, or be specifically allocated report rights in the REPPIDOK property for the current (or default) instance of the current survey. REPPIDOK contains a comma separated list of PIDs. An access denied message is displayed otherwise. Adding "default" to the REPPIDOK property will make the report available to everyone with validated access to the organisation.
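The access rule above can be sketched as follows. This is a hypothetical helper, not engine code; the rights values and the parsing of the comma separated REPPIDOK list are assumptions based on the description:

```python
def report_access_allowed(pid, user_rights, reppidok=""):
    """Sketch of the report access rule: super/admin rights at the
    organisation level always pass; otherwise the PID must appear in
    the comma separated REPPIDOK list, and the special entry 'default'
    opens the report to any validated user (assumed semantics)."""
    if user_rights in ("super", "admin"):
        return True
    allowed = [p.strip() for p in reppidok.split(",") if p.strip()]
    return "default" in allowed or pid in allowed
```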
=Special Reports=
==Individual Question Reports==
===Answer Tag Definition and Syntax===
In addition to the standard reports available for the survey as a whole, each type of report can be extracted for an individual question in any survey in an organisation by embedding a special report <#answer > tag in the content of a question text, or in a script command in the rules engine.
<pre>
<#answer OID=xxx SID=xxx IID=xxx QID=xxx PPROP=xxx PID=xxx QRL=sss.qqq VAL=vvv REP='idv','uprc','ucnt','ulist','uresp','rprc','rcnt','rgrph','eulist' RMO=all|grp TMPLT="[#RSPRC#] of management said they '[#ResponseStr#]'" >
</pre>
The answer tag handles both response retrieval (if REP="") and embedded reports (if REP<>""). Syntax of the answer tag:
<pre>
<#answer [OID=ooo |OIDP= ] IID=iii [QRL=rrr |QRLP= ] SID=sss QID=qqq [PID=ppp |PIDP= ] [[VAL=value|val|text|selectop|type|gid|id|] [REP=uprc|ucnt|ulist|uresp|rprc|rcnt|rgrph|eulist|<prop> RMO=org|all|grp| MRR=MRRTable|MRRList|MRRRes [[DVDC=cor|cand ] [DVD= |CNJT= ]] TPLT="[#ResponseStr#]"| STPLT="[#ResponseStr#]"| ifnil="No Result"| RFF=<string after each REP> ]]
</pre>
*OIDP - Name of a property to use for the OID (Organisation ID) for the report lookup (overrides OID)
*PIDP - Name of a property containing the PID to use for the report lookup (overrides PID)
*VAL - Only used if REP is undefined. Determines the type of the returned result (note that a PID will be required as well):
**val - numeric value
**value - the display string of a selectop if the underlying value is a selectop, else the response string
**text - the string value stored in the response string
**type - the type of the value stored in the target question (textop, selectop, dateop, numericop, etc.)
**gid - the group name of the op-type stored (really only relevant to selectop, dateop and numericop types)
**id - the id of the op item stored (really only relevant to selectop, dateop and numericop types)
**wght - the value stored in the weight field of the response
*REP - will accept a comma separated list of report types or a property containing the report type
*MRR - the display format: table, list or simple result
*DVDC - if defined, a result with multiple values will be separated by commas and the selected conjunction will be used to separate the last two outcomes: "or" or "and". This enables the creation of a sentence structure.
*DVD - if DVDC is not defined, this specifies the divider to use to separate multiple values when the result returns a list
*CNJT - if DVDC is not defined, this specifies the conjunction to use to separate the last two values when the result returns a list
*TPLT - the template to use for reports producing multiple results. Defaults to "[#ResponseStr#]" if undefined. The template is an arbitrary string with the desired fields from the report view included between [#..#].
*STPLT - the template to use for reports producing a single result. Defaults to "[#ResponseStr#]" if undefined. The template is an arbitrary string with the desired fields from the report view included between [#..#].
*ifnil - the string to display if there is no result returned
*RFF - string to display after each REP listed in REP.
===Examples:===
*Embed a conventional rprc report for a single question into the question text of a question in another survey:
<pre>
Students responded: <#answer OID=default SID=ACFE2004 QID=ACFE2004Q046 REP=rprc RMO=all MRR=MRRTable DVDC=cor
TPLT="[#Response#]" STPLT="[#Response#]" ifnil="No Result" >
</pre>
*Embed an rprc report for a single question into the question text of a question in another survey as a comma separated list of items.
<pre>
Students listed their main objectives as including <#answer OID=default SID=ACFE2004 QID=ACFE2004Q013 REP=rprc RMO=all
MRR=MRRList DVDC=cor TPLT="'[#Response#]' " STPLT="'[#Response#]' " ifnil="No Result" >
</pre>
*Embed two pie reports into the question text of a question in another survey. This form would go in the layout HTML of the question:
<pre>
<br ><br >
<table >
<tr>
<td>
<#answer OID=default SID=ACFE2004 QID=ACFE2004Q005 REP=rgrph RMO=all MRR=MRRTable DVDC=cor TPLT="'[#Response#]' " STPLT="'[#Response#]' " ifnil="No Result" >
</td >
<td >
The survey at left shows how students identified their courses were delivered.
Now, when asked whether their teacher was good at explaining things, they responded as shown at right.
</td >
<td >
<#answer OID=default SID=ACFE2004 QID=ACFE2004Q054 REP=rgrph RMO=all MRR=MRRTable DVDC=cor TPLT="[#RSPerc#] stated they '[#Response#]' " STPLT= ifnil="No Result" >
</td >
</tr >
<tr >
<td colspan=3 >
<#question >
</td >
</tr >
</table >
</pre>
*Embed an rprc report as a list with multiple fields and a user-defined conjunction
<pre>
The extent to which learning resources were appropriate for the needs of students was directly considered in the survey.
Students <#answer OID=default SID=ACFE2004 QID=ACFE2004Q056 REP=rprc RMO=all MRR=MRRList DVD=", " CNJT=" but " TPLT="'[#Response#]
([#Calc#]) ' " STPLT="'[#Response#] ([#ODSCount#])' " ifnil="No Result" > with the assertion that learning materials
were appropriate. I <#OpType > with these views about my course materials.
</pre>
===Template String Fields===
The Answer tag uses template strings to format the information from a report query. The template allows you to select the fields you want and how you want them presented. Anything that would be legal DHTML is legal in a template string.
Two template strings are allowed: one to produce a single response (such as when the record has only one response, or you just want the first one), and one that allows the formatting of multiple response records. In most cases these strings will be the same, but the option allows you to use a different method to display single versus multi-line responses in your text.
The template string therefore includes a special markup tag for retrieving the fields that might be available in a report. These tags are denoted by [# #]. The use of the [] bracing, as opposed to the <> used in other layout strings (and indeed in the question text in which the Answer tag is embedded), ensures that the template strings are not corrupted by tag expansion that may occur before the answer tag is addressed when embedded in strings parsed by the survey engine's various parsers. It also allows faster tag expansion, because single-pass expanders can be used where multi-pass tag expanders would otherwise be required. Other than this, they are essentially the same as the <# > markup tags.
Essentially, the strings that go between the [# #] markers are field names from the reports. These field names can vary depending on the built-in report you have chosen, and on whether you are using your own custom report. Further, there are a couple of special names (virtual fields) available that do not appear in any report query's field list. The names corresponding to the built-in report queries are:
*Response - This is a virtual field that attempts to provide the best choice for a meaningful response text for the underlying data. The value returned for each report type is as follows:
**uprc (user percentage responses) = RCount (numeric response count as string)
**ucnt (user count) = RCount (numeric response count as string)
**ulist (user response list) = Person (Person Name)
**uresp (user responses by user) = OpDisplayStr (The display text version of a selection list selection) or ResponseStr (for other ops).
**rprc (user responses percent) = OpDisplayStr (The display text version of a selection list selection) or ResponseStr (for other ops).
**rcnt (user responses count) = OpDisplayStr (The display text version of a selection list selection) or ResponseStr (for other ops).
**rgrph (user responses graph) = OpDisplayStr (The display text version of a selection list selection) or ResponseStr (for other ops).
**eulist (user list outside of exception cond) = ResponseStr.
**Other Undefined Reports = ResponseStr.
*Calc - This is a generalised numeric field converted to a string that attempts to deliver the primary calculated component in the form expected by the report. It will vary with the report but is only relevant in reports that use a total (eg percentage reports, etc). Essentially it gives the item count / total count as either a float or a percent, depending on the settings for the report. If the calc value is not available, it will deliver either 1 or 100% (as appropriate) if referenced. Calc is available for the following reports:
**uprc (user percentage responses) = Percent is True, Calc Value is available
**rprc (user responses percent) = Percent is True, Calc Value is available
*Fields available by report
**uprc (user percentage responses) =
***RCount (numeric count percentage of responding users as string)
**ucnt (user count) =
***RCount (numeric count of responding users as string)
**ulist (user response list) =
***PID (Person ID),
***Person (Person Name)
**uresp (user responses by user) =
***PID (Person ID)
***Person (Person Name)
***Depending on the target question's optype the remaining fields will change:
****selectop (selection lists)
*****OpDisplayStr (The display text version of a selection list selection)
*****OpVal (Numeric value of the selection)
****dateop (date fields)
*****ResponseStr (The date as a string)
*****OpValDate (The date as a numeric time-stamp value).
****textop (text fields)
*****ResponseStr (The text entered)
****ratingop (numeric fields not handled as text fields)
*****ResponseStr (The numeric value as a string)
*****OpVal (Numeric value as a float)
****checkop (Checkbox fields)
*****ResponseStr (True or False as a string)
****adminop (reserved administration fields)
*****ResponseStr (The text entered, or the text of a selection)
**rprc (user responses percent) =
***Depending on the target question's optype the available fields will change:
****selectop (selection lists)
*****OpDisplayStr (The display text version of a selection list selection)
*****ODSCount (Count of the users making this selection)
*****RSPerc (Count as a percentage of users making this selection)
****dateop (date fields)
*****ResponseStr (The date as a string)
*****OpValDate (The date as a numeric time-stamp value).
*****OVDCount (Count of the users entering this date)
*****RSPerc (Count as a percentage of users entering this date)
****textop (text fields)
*****ResponseStr (The text entered)
*****RSCount (Count of the users entering this string)
*****RSPerc (Count as a percentage of users entering this string)
****ratingop (numeric fields not handled as text fields)
*****ResponseStr (The numeric value as a string)
*****OpVal (Numeric value as a float)
*****OVCount (Count of the users entering this value)
*****RSPerc (Count as a percentage of users entering this value)
****checkop (Checkbox fields)
*****ResponseStr (True or False as a string)
*****RSCount (Count of the users entering this True or False)
*****RSPerc (Count as a percentage of users entering this True or False)
****adminop (reserved administration fields)
*****ResponseStr (The text entered, or the text of a selection)
*****RSCount (Count of the users entering this string or selecting this option)
*****RSPerc (Count as a percentage of users entering this string or selecting this option)
**rcnt (user responses count) =
***Depending on the target question's optype the available fields will change:
****selectop (selection lists)
*****OpDisplayStr (The display text version of a selection list selection)
*****ODSCount (Count of the users making this selection)
****dateop (date fields)
*****ResponseStr (The date as a string)
*****OpValDate (The date as a numeric time-stamp value).
*****OVDCount (Count of the users entering this date)
****textop (text fields)
*****ResponseStr (The text entered)
*****RSCount (Count of the users entering this string)
****ratingop (numeric fields not handled as text fields)
*****ResponseStr (The numeric value as a string)
*****OpVal (Numeric value as a float)
*****OVCount (Count of the users entering this value)
****checkop (Checkbox fields)
*****ResponseStr (True or False as a string)
*****RSCount (Count of the users entering this True or False)
****adminop (reserved administration fields)
*****ResponseStr (The text entered, or the text of a selection)
*****RSCount (Count of the users entering this string or selecting this option)
**rgrph (user responses graph) =
***Depending on the target question's optype the available fields will change:
****selectop (selection lists)
*****OpDisplayStr (The display text version of a selection list selection)
*****ODSCount (Count of the users making this selection)
****dateop (date fields)
*****ResponseStr (The date as a string)
*****OpValDate (The date as a numeric time-stamp value).
*****OVDCount (Count of the users entering this date)
****textop (text fields)
*****ResponseStr (The text entered)
*****RSCount (Count of the users entering this string)
****ratingop (numeric fields not handled as text fields)
*****ResponseStr (The numeric value as a string)
*****OpVal (Numeric value as a float)
*****OVCount (Count of the users entering this value)
****checkop (Checkbox fields)
*****ResponseStr (True or False as a string)
*****RSCount (Count of the users entering this True or False)
****adminop (reserved administration fields)
*****ResponseStr (The text entered, or the text of a selection)
*****RSCount (Count of the users entering this string or selecting this option)
**eulist (user list outside of exception cond) =
***ResponseStr (entered value)
**Other Undefined Reports
***Fields as defined by user query
***ResponseStr (entered value)
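The [# #] field markers described above lend themselves to single-pass expansion. A minimal sketch of such an expander follows; this is hypothetical code, not the engine's implementation, and it assumes unknown field names expand to an empty string:

```python
import re

# matches [#FieldName#], with optional whitespace inside the braces
TOKEN = re.compile(r"\[#\s*(\w+)\s*#\]")

def expand_template(template, fields):
    """Single-pass expansion of [#FieldName#] markers against a dict of
    report fields, illustrating why the [] bracing avoids interference
    with the <# > tag parsers (sketch only)."""
    def sub(match):
        return str(fields.get(match.group(1), ""))
    return TOKEN.sub(sub, template)

row = {"RSPerc": "45%", "Response": "Strongly Agree"}
print(expand_template("[#RSPerc#] stated they '[#Response#]'", row))
```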
==Data dump==
In the event that a user wishes to extract the responses from the engine for analysis in another system, the data can be extracted using one of the views into a CSV file.
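A minimal sketch of such an extraction; the view name, the connection, and the use of SQLite here are placeholders, since the real database driver and view names depend on your installation:

```python
import csv

def dump_view_to_csv(conn, view_name, out_path):
    """Write all rows of a report view to a CSV file, with a header row
    taken from the cursor metadata. `conn` is any DB-API connection;
    `view_name` is a trusted, installation-specific placeholder."""
    cur = conn.execute(f"SELECT * FROM {view_name}")
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow([col[0] for col in cur.description])
        writer.writerows(cur.fetchall())
```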
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
BPC SurveyManager - Advanced Database Configuration Settings
0
429
624
2019-09-10T15:46:56Z
Bishopj
1
Created page with "=BPC SurveyManager Engine Configuration - Introduction= The BPC SurveyManager SurveyEngine and BPC SurveyManager Clients are configured through a single Configuration table...."
wikitext
text/x-wiki
=BPC SurveyManager Engine Configuration - Introduction=
The BPC SurveyManager SurveyEngine and BPC SurveyManager Clients are configured through a single Configuration table. This table includes both locational and behavioural configurations, and a set of properties that act as the universal defaults for key layout components. For this reason the survey database is shipped and/or installed with basic settings already loaded. There are two groups of configurations loaded - the general shared set called "SYS" and a user named set we call the "localisation" set.
The localisation set overrides the SYS set and contains information that is specific to the actual computer on which the survey engine resides. The SYS set contains settings that should be used by every copy of this database (or every instance of the survey engine in a web farm) no matter what machine the server engine resides on.
The BPC SurveyManager is designed as a distributed database and farmed middle tier system. There is no restriction on the number of databases that a survey environment/farm can have. One "implementation" might span both local (desktop) installations with local databases and multiple servers, some single instance web servers and some in a web farm, each with potentially many databases. Each computer with at least one survey engine is called a "publishing server" (though, for efficiency, only one server of a farm with shared databases should have data published to it).
The BPC SurveyManager Survey Engine is a small ISAPI dll (library of 2 to 4 MB) "BPCSurveyManager.dll", but many uniquely named copies of the library can exist on the one web server, even in the one virtual directory. You decide what you want each copy to be called, but the name of the library on each server is then uniquely tied to a database through a combination of registry entries and configuration table entries. In fact you will never see a survey manager engine actually called BPCSurveyManager.dll on a server as that is simply the name we give it for distribution. This way no web page or survey form or user ever actually refers to a database by name. All this information is kept exclusively server side.
When the Survey Manager starts it looks in the local registry to find out what localisation group name it is supposed to use on the local server, and then accesses the database tied to the library and loads that localisation group. A cross-check occurs to ensure that the database agrees that this is the library that is supposed to be accessing it, for certain operations where this matters.
For this reason there can be multiple localisations of many of the settings in the configuration table of any one database. The versions are distinguished by a group key. On some BPC SurveyManager clients you can choose the localisation group key to load and therefore load different sets of settings, but the survey engine will always find its default group key from the registry when it starts.
There are many advantages in this seemingly complex system. For example:
* A database can be set up for multiple machines and copied between servers or between a desktop and a server and automatically use the correct localisation in the destination environment with no configuration change.
* Multiple servers in a web farm can have server specific settings if needed and share the same database.
* Multiple non farmed web servers can share a database and use server specific settings
* Multiple non farmed web servers can have non cooperative unique copies of the same database and be mirrored/replicated without damaging the local server settings.
* Users never see the underlying database names so an attack vector is eliminated.
* A central IT team can setup a standard survey database with survey content and distribute the entire database as a backup to multiple physically isolated organisation units or locations.
The universal group key is 'SYS1'. Any number of uniquely named localisation keys can be used in the one database.
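The override behaviour can be sketched as a simple merge. This is hypothetical code; modelling the configuration table as (group, name, value) tuples is an assumption about its shape:

```python
def effective_settings(config_rows, local_group):
    """Merge configuration rows so the machine-specific localisation
    group overrides the shared 'SYS1' group, as described above.
    `config_rows` is a list of (group_key, name, value) tuples."""
    merged = {}
    for group, name, value in config_rows:
        if group == "SYS1":
            merged[name] = value      # universal defaults
    for group, name, value in config_rows:
        if group == local_group:
            merged[name] = value      # localisation wins
    return merged
```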
=The Settings=
==The Server Settings==
*DefMailServer - (local) The SMTP server network name or IP address to use for sending emails when a relay server is used
*DefMailUserID - (local) The login user id for the smtp server if required
*DefMailUserPwd - (local) The login user pwd for the smtp server if required
*DefSMLibrary - (SYS) The survey engine library name that should talk to this database
*DefSurveyPageLayout - This is the default page layout. It is only used if no layout is defined for the survey in the survey header. See the reference on page layouts and the page on tags for more information. It is usually set to something simple like:
<pre><HTML><HEAD><#JVScriptLib1></HEAD><BODY><H1><#SurveyName></H1><BPCDEBUG><#SurveyBody><BPCDEBUG></BODY></HTML></pre>
*DefSurveyType - (SYS) The default survey layout type if not specified in the survey. This essentially defines the underlying structure of what replaces the <#SurveyBody > tag. This is just a default structure if no tags are overridden at survey or even at the question level. It is quite ok to use the ROWTABLE as the default and then completely re-arrange the layout in the survey itself. The recommended default setting is ROWTABLE. Options include:
**ROWTABLE - Classic table structure with a column each for question numbers, questions, responses, weights and additional columns as required.
**GRIDTABLE - Table structure with the question text above the response area.
**DISCUSSION - Question text and response section are merged into a single column
**CUSTOM - User defined.
*DefDNSServer - (local) IP address of the preferred DNS server to use. Used by certain EMail modes and advanced operations to lookup URLs.
*DefMailFromEMail - (local) The default email address to use as the "from" email address in emails sent by the survey engine
*DefMailFromName - (local) The default email "from" name to use in emails sent by the survey engine
*DefTestPID - (SYS) Normally set to 'PRVW' this is the ID to preview a survey. Mainly used during survey creation and testing. PRVW inputs are ignored in reports and are liable to be purged from the response table at any time. A property can be set to block PRVW access to a survey. PRVW can access a survey without publication.
*DefUseMailRelay - (local) also (SYS). True or False. Flags whether emails should be sent via a relay server as defined in the DefMailServer setting. The recommended configuration is "True". The direct connection alternative requires the DNS to be properly initialised and is much slower, as it handles MX record interrogation itself.
*FriendlyDBName - (SYS) This is a string that can be displayed to identify which database is connected. It would usually be something like "ACME Survey System"
==Port & Proxy Settings==
The distribution functions of survey manager may have to be routed via a proxy server depending on your network. These settings support that.
*PortHTTP - (local) Port for HTTP traffic - normally set to 80
*ProxyAuthenticationRequired - (local) True/False
*ProxyUserName - (local) Normally blank.
*ProxyPassword - (local) Normally blank.
*ProxyPort - (local) Normally blank.
*ProxyServerURL - (local) proxy URL or IP. Normally blank.
Leaving blank the items identified as "Normally blank" will ensure the proxy mechanism is not used.
==Locations of Support Files==
*URLScriptLibrary - (local) The full disk path of the standard javascript library including path and library name. The standard javascript library shipped with BPCSurveyManager is "BPCJavaScriptLib2.js"
*URLJVScriptLibrary - (local) The HTTP address of the standard javascript library including path and library name. The standard javascript library shipped with BPCSurveyManager is "BPCJavaScriptLib2.js"
*URLCSSLibrary - (local) The full disk path of the standard CSS file including path and file name. Can be blank.
*URLLogo1 - (local) The HTTP address of the standard logo graphic to use for surveys where the sitelogo tag is used. Include path and image file name.
*URLTestSite - (local) The HTTP address of the default test server including path but excluding the library name. Used by BPC SurveyManager clients to test a survey. Can be the same as the default publication site if the client is working directly on the publication database, but usually it is a local database.
*URLDefPublicationSite - (local) The HTTP address of the default publication server including path, but excluding the library name.
*URLMaintExceptSite - (SYS) The HTTP address of the standard error page to use for web client survey manager app session failure. Include path and file name. This should usually be a static web page with a link to re-login to the web client.
*URLMaintSite - (SYS) The HTTP address of the BPC SurveyManager web client including path, but excluding the library name.
==Counter Definitions for Populating the Counters Table==
The system uses a variety of ID counters. ID counters are alpha-numeric strings designed to allow alpha sorting of IDs. These settings define the masks to be used to initialise the counter table as required.
*ZCounterDate - DateOp counter prefix. Set to "DAT"
*ZCounterDateMask - DateOp counter mask for providing a numeric space for the counters. Set to "%.5d" (allows a 5-digit number, "00000"), so the first item would be DAT00001.
*ZCounterRating - RatingOp counter prefix. Set to "NUM"
*ZCounterRatingMask - RatingOp counter mask for providing a numeric space for the counters. Set to "%.5d" (allows a 5-digit number, "00000"), so the first item would be NUM00001.
*ZCounterSelect - SelectOp counter prefix. Set to "SEL"
*ZCounterSelectMask - SelectOp counter mask for providing a numeric space for the counters. Set to "%.5d" (allows a 5-digit number, "00000"), so the first item would be SEL00001.
SurveyManager does not care what these are. It will increment by replacing numeric fields and then add extra numbers when it runs out.
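The increment-and-widen behaviour can be sketched as follows. This is hypothetical code reflecting the description above, not the engine's implementation:

```python
import re

def next_counter(counter):
    """Increment an alpha-numeric ID counter such as 'DAT00001' by
    bumping the trailing numeric field, widening it with extra digits
    when the field overflows (sketch of the described behaviour)."""
    match = re.search(r"(\d+)$", counter)
    prefix, digits = counter[: match.start()], match.group(1)
    value = int(digits) + 1
    width = max(len(digits), len(str(value)))  # widen on overflow
    return f"{prefix}{value:0{width}d}"

print(next_counter("DAT00001"))   # next date counter
print(next_counter("SEL99999"))   # overflow: numeric field widens
```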
==Survey Body Presentation Tag Defaults where Layout Type Unknown==
This group of default tag settings is not a namable group, unlike those presented in the following sections. These tag names represent the tags that are overridden in the survey header layout set, and also in the properties of an organisation, question group, or question. This set is only used in the event that the survey header is set to a layout group name that is not recognised by the survey engine and not completely defined in the survey or organisation settings.
It is set to mirror the ROWTABLE settings, as the standard survey body layout is a row table (ROWTABLE ). This layout puts each question-response on a single row, displayed as the question number - question text - response - weight (optional). This is a very classic question - answer structure. Usually the colours of the rows cycle every two rows between light and darker background bands to help the responder's eyes track the line. Since 2005 the standard ROWTABLE has used the "BishopPhillips Blue-Base Number1" colour scheme. Earlier databases used the "Stanton Consulting Burgundy" colour scheme. If you are operating an earlier default colour scheme you can easily change all your surveys in one step by replacing these values as shown below. Most recent demonstration surveys use the "BishopPhillips Blue-Base Number2" colour scheme, but this uses some graphic elements rather than plain colours, so it is not used in the global database default.
The standard settings, which can be (and often are) overridden at the organisation, survey, questiongroup, or question levels by redefining properties, are:
{| border="2" cellpadding="2"
|-
|TTSTART || ROWTABLE layout page/question group start tag. ||Normally <nowiki><table Width="100%" BgColor="white" ></nowiki>
|-
|TH || ROWTABLE layout headings cell start.||Normally <nowiki><tr><TH BgColor="lightblue" ><font color="darkblue">Number</font></TH><TH BgColor="lightblue" ><font color="darkblue">Question</font></TH><TH BgColor="lightblue" ><font color="darkblue">Response</font></TH></tr></nowiki>
|-
|TRSTART || ROWTABLE layout row start tag. ||Normally <nowiki><tr></nowiki>
|-
|TREND || ROWTABLE layout row end tag. ||Normally <nowiki></table></tr></nowiki>
|-
|TCELLSTART || This is the HTML tag used to define the start of a ROWTABLE table cell. Note the cellparams tagattribute. This is a reserved word that will automatically be replaced by the calculated BgColor, HAlign, VAlign and any CustomAttributes needed for the specific question. || Normally <nowiki><td cellparams ></nowiki>.
|-
|TCELLEND || This is the HTML tag used to define the end of a ROWTABLE table cell. ||Normally <nowiki></td></nowiki>.
|-
|TTEND || ROWTABLE layout page/question group end tag. ||Normally <nowiki></table></nowiki>
|-
|TR || This is the standard question-response row for the ROWTABLE layout. ||Normally set to <nowiki><#QuesDisplayID><#Question><#OpType></nowiki>.
|-
|TRC || This is the checkbox input control question-response row for the ROWTABLE layout. ||Normally set to <nowiki><#QuesDisplayID><#Question align=right><#OpType></nowiki>
|-
|TRL || This is the long question text row for the ROWTABLE layout. This is usually used for section headings and info blocks. ||Normally set to <nowiki><#QuesDisplayID><#Question align=center colspan=2></nowiki>.
|-
|TRCOLOUR || A list of colours separated by commas representing the colour row cycle for the survey. ||The standard setting is "#DDDDFF,white" for this type.
|-
|TRERROR
|-
|TRDTA || This is the long question-response row for the ROWTABLE layout. It is used for input controls requiring double column space - such as a large free text entry box. ||Normally set to <nowiki><#QuesDisplayID ><td colspan=2 ><table width=100% ><tr ><#Question ><#OpType ></tr ></table ></td ></nowiki>.
|}
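The two-row colour banding driven by TRCOLOUR can be sketched as follows. This is a hypothetical helper; the band width of 2 follows the description above, but the engine's exact cycling rule may differ:

```python
def row_colour(row_index, trcolour="#DDDDFF,white", band=2):
    """Pick the background colour for a survey row, cycling through the
    comma separated TRCOLOUR list every `band` rows (sketch of the
    banding described above)."""
    colours = [c.strip() for c in trcolour.split(",")]
    return colours[(row_index // band) % len(colours)]

# rows 0-1 use the first colour, rows 2-3 the second, then repeat
```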
==Survey Body Presentation Tag Defaults where Layout Type ROWTABLE==
The standard survey body layout is a row table (ROWTABLE). This layout puts each question-response on a single row, displayed as the question number - question text - response - weight (optional). This is a very classic question-answer structure. Usually the colours of the rows cycle every two rows between light and darker background bands to help the responder's eyes track the line. Since 2005 the standard ROWTABLE has used the "BishopPhillips Blue-Base Number1" colour scheme. Earlier databases used the "Stanton Consulting Burgundy" colour scheme. If you are operating an earlier default colour scheme you can easily change all of your surveys in one step by replacing these values as shown below. Most recent demonstration surveys use the "BishopPhillips Blue-Base Number2" colour scheme, but this uses some graphic elements rather than plain colours, so is not used in the global database default.
The standard settings, which can be, and often are, overridden at the organisation, survey, question group, or question levels by redefining properties, are:
{| border="2" cellpadding="2"
|-
|RT_TTSTART || ROWTABLE layout page/question group start tag. ||Normally <nowiki><table Width="100%" BgColor="white" ></nowiki>
|-
|RT_TH || ROWTABLE layout headings cell start.||Normally <nowiki><tr><TH BgColor="lightblue" ><font color="darkblue">Number</font></TH><TH BgColor="lightblue" ><font color="darkblue">Question</font></TH><TH BgColor="lightblue" ><font color="darkblue">Response</font></TH></tr></nowiki>
|-
|RT_TRSTART || ROWTABLE layout row start tag. ||Normally <nowiki><tr></nowiki>
|-
|RT_TREND || ROWTABLE layout row end tag. ||Normally <nowiki></tr></nowiki>
|-
|RT_TCELLSTART || This is the HTML tag used to define the start of a ROWTABLE table cell. Note the cellparams tag attribute. This is a reserved word that will automatically be replaced by the calculated BgColor, HAlign, VAlign and any CustomAttributes needed for the specific question. || Normally <nowiki><td cellparams ></nowiki>.
|-
|RT_TCELLEND || This is the HTML tag used to define the end of a ROWTABLE table cell. ||Normally <nowiki></td></nowiki>.
|-
|RT_TTEND || ROWTABLE layout page/question group end tag. ||Normally <nowiki></table></nowiki>
|-
|RT_TR || This is the standard question-response row for the ROWTABLE layout. ||Normally set to <nowiki><#QuesDisplayID><#Question><#OpType></nowiki>.
|-
|RT_TRC || This is the checkbox input control question-response row for the ROWTABLE layout. ||Normally set to <nowiki><#QuesDisplayID><#Question align=right><#OpType></nowiki>.
|-
|RT_TRL || This is the long question text row for the ROWTABLE layout. This is usually used for section headings and info blocks. ||Normally set to
<nowiki><#QuesDisplayID><#Question align=center colspan=2></nowiki> here.
|-
|RT_TRCOLOUR || A list of colours separated by commas representing the colour row cycle for the survey. ||The standard setting is "#DDDDFF,white" for this type.
|-
|RT_TRERROR
|-
|RT_TRDTA || This is the long question-response row for the ROWTABLE layout. It is used for input controls requiring double column space - such as a large free text entry box. ||Normally set to <nowiki><#QuesDisplayID ><td colspan=2 ><table width=100% ><tr ><#Question ><#OpType ></tr ></table ></td ></nowiki>.
|}
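As a rough sketch of how these row templates expand (the substitution engine is internal to SurveyManager, so the helper below, its question data, and the exact two-row colour banding are illustrative assumptions drawn from the descriptions above), each <#Tag> placeholder becomes a table cell, and the cell background cycles through the RT_TRCOLOUR list:

```python
# Hypothetical sketch of ROWTABLE row expansion. The real engine is
# internal to BPC SurveyManager; the helper, question data and the
# two-row colour banding here are assumptions based on the text above.

ROW_TEMPLATE = "<#QuesDisplayID><#Question><#OpType>"  # RT_TR default
TRCOLOUR = ["#DDDDFF", "white"]                        # RT_TRCOLOUR default
CELL_START = '<td bgcolor="{colour}" >'                # stands in for <td cellparams >
CELL_END = "</td>"

def expand_row(template, values, colour):
    """Replace each <#Tag> placeholder with a table cell holding its value."""
    out = template
    for tag, value in values.items():
        cell = CELL_START.format(colour=colour) + value + CELL_END
        out = out.replace("<#%s>" % tag, cell)
    return "<tr>" + out + "</tr>"

questions = [
    {"QuesDisplayID": "Q001", "Question": "Is a risk register maintained?", "OpType": "[response]"},
    {"QuesDisplayID": "Q002", "Question": "Is it reviewed quarterly?", "OpType": "[response]"},
]

for i, q in enumerate(questions):
    colour = TRCOLOUR[(i // 2) % len(TRCOLOUR)]  # colour bands two rows wide
    print(expand_row(ROW_TEMPLATE, q, colour))
```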
==Survey Body Presentation Tag Defaults where Layout Type GRIDTABLE==
{| border="2" cellpadding="2"
|-
|GR_TTYPE
|-
|GR_TTSTART || GRIDTABLE layout page/question group start tag. ||Normally <nowiki><table Width="100%" BgColor="white" ></nowiki>
|-
|GR_TH || GRIDTABLE layout headings cell start.||Normally <nowiki> <tr><TH BgColor="white" width="10%" ><b>ID</b></TH><TH BgColor="white" ><b>Question (Response)</b></TH></tr></nowiki>
|-
|GR_TRSTART || GRIDTABLE layout row start tag. ||Normally <nowiki><tr><table width="100%" ></nowiki>
|-
|GR_TREND || GRIDTABLE layout row end tag. ||Normally <nowiki></table></tr></nowiki>
|-
|GR_TCELLSTART || This is the HTML tag used to define the start of a GRIDTABLE table cell. Note the cellparams tag attribute. This is a reserved word that will automatically be replaced by the calculated BgColor, HAlign, VAlign and any CustomAttributes needed for the specific question. || Normally <nowiki><td cellparams ></nowiki>.
|-
|GR_TCELLEND || This is the HTML tag used to define the end of a GRIDTABLE table cell. ||Normally <nowiki></td></nowiki>.
|-
|GR_TTEND || GRIDTABLE layout page/question group end tag. ||Normally <nowiki></table></nowiki>
|-
|GR_TR || This is the standard question-response row for the GRIDTABLE layout. ||Normally set to <nowiki><tr><#QuesDisplayID width="10%%" ><#Question width="90%%" ></tr><tr><#OpType colspan=2 ></tr></nowiki>.
|-
|GR_TRC || This is the checkbox input control question-response row for the GRIDTABLE layout. ||Normally set to <nowiki><tr><#QuesDisplayID width="10%%" ><#Question width="90%%" ></tr><tr><#OpType colspan=2 ></tr></nowiki>
|-
|GR_TRL || This is the long question text row for the GRIDTABLE layout. This is usually used for section headings and info blocks. ||Normally set to
<nowiki><tr><#QuesDisplayID width="10%%" ><#Question width="90%%" ></tr></nowiki> here.
|-
|GR_TRCOLOUR || A list of colours separated by commas representing the colour row cycle for the survey. ||The standard setting is "white" for this type.
|-
|GR_TRERROR
|-
|GR_TRDTA || This is the long question-response row for the GRIDTABLE layout. It is used for input controls requiring double column space - such as a large free text entry box. ||Normally set to <nowiki><tr><#QuesDisplayID width="10%%" ><#Question width="90%%" ></tr><tr><#OpType colspan=2 ></tr></nowiki>.
|}
==Survey Body Presentation Tag Defaults where Layout Type DISCUSSION==
{| border="2" cellpadding="2"
|-
|DR_TTSTART || DISCUSSION layout page/question group start tag. ||Normally <nowiki><table ></nowiki>
|-
|DR_TH || Normally empty in the DISCUSSION layout as this layout has no headings.||
|-
|DR_TRSTART || DISCUSSION layout row start tag. ||Normally <nowiki><tr ></nowiki>
|-
|DR_TREND || DISCUSSION layout row end tag. ||Normally <nowiki></tr ></nowiki>
|-
|DR_TCELLSTART || This is the HTML tag used to define the start of a DISCUSSION table cell. Note the cellparams tag attribute. This is a reserved word that will automatically be replaced by the calculated BgColor, HAlign, VAlign and any CustomAttributes needed for the specific question. || Normally <nowiki><td cellparams ></nowiki>.
|-
|DR_TCELLEND || This is the HTML tag used to define the end of a Discussion table cell. ||Normally <nowiki></td></nowiki>.
|-
|DR_TTEND || DISCUSSION layout page/question group end tag. ||Normally <nowiki></table ></nowiki>
|-
|DR_TR || This is the standard question-response row for the DISCUSSION layout. ||Normally set to <nowiki><#Question></tr><tr><#OpType></nowiki>.
|-
|DR_TRC || This is the checkbox input control question-response row for the DISCUSSION layout. ||Normally set to <nowiki><#Question></tr><tr><#OpType></nowiki>.
|-
|DR_TRL || This is the long question text row for the DISCUSSION layout. This is usually used for section headings and info blocks. ||Normally set to
<nowiki><#Question></tr><tr><#OpType></nowiki> here.
|-
|DR_TRCOLOUR || A list of colours separated by commas representing the colour row cycle for the survey. ||The standard setting is "white" for this type.
|-
|DR_TRERROR
|-
|DR_TRDTA || This is the long question-response row for the DISCUSSION layout. It is used for input controls requiring double column space - such as a large free text entry box. ||Normally set to <nowiki><#Question></tr><tr><#OpType></nowiki>.
|}
==Messages==
*AccessNotAllowedMsg - (SYS) The message displayed if access is not allowed for the current responder to the survey. Normally: "<nowiki><H1>Access Not Available.</H1></nowiki>"
*XAccessNotAllowedMsg - (SYS) As above.
*InstanceAccessNotAllowedMsg - (SYS) The message displayed if access is not allowed for the current responder to any currently available instance of the survey. Normally: "<nowiki><H1>Access to the requested survey instance is not available</H1></nowiki>"
*XInstanceAccessNotAllowedMsg - (SYS) As above.
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
7d9c6862867402b829510f1a18ebca9802994e22
BPC SurveyManager - Client Overview
0
430
625
2019-09-10T15:48:51Z
Bishopj
1
Created page with "=Survey Manager Quick Start Guide= ==Introduction== SurveyManager is an application that enables an organisation to collect information from a large number of users by di..."
wikitext
text/x-wiki
=Survey Manager Quick Start Guide=
==Introduction==
SurveyManager is an application that enables an organisation to collect information from a large number of users by distributing on-line surveys. SurveyManager is used to define surveys with assigned questions and an assigned list of respondents. An e-mail message containing a link to the on-line survey is sent to respondents requesting that they complete it. Reminder messages can be sent to late respondents. Respondents complete the survey on-line and the responses are reported by SurveyManager. Surveys can be set up to run periodically.
The following quick start instructions have been provided to help new users get their survey site up and running quickly. They provide a step-by-step guide to performing basic Survey Manager functions. Please refer to the install instructions for help on installing and setting up the software before using this guide.
==Survey Wizard - Using Quick Survey Login==
===Introduction===
BPC RiskManager contains a simplified BPC SurveyManager Wizard for creating, managing and reporting on surveys in the context of the BPC RiskManager application needs. In this section we provide a brief overview of the key functions and steps to using the wizard.
===Create or Edit a Survey===
In the Survey Centre:
*Select an existing survey or create a new survey.
When the survey wizard opens:
*If creating a new survey:
**Enter a survey name.
**Enter the default number of questions per page.
**Enter the number of reminders to send.
**Enter the publication site for the web server. EG: <nowiki>http://<your sub domain>.acumenalliance.com.au/staffsurvey/SurveyManager1.dll</nowiki>
**Enter the owner’s name and email address.
*Remember to Post changes to save your survey.
There are a very large number of special-purpose tags available for the survey layout. Clicking on the survey layout button will bring up the survey layout screen. If you want something other than the standard layout and appearance, refer here for details of the layout tags.
Surveys also have properties that can be referenced by tags and control aspects of the survey's behaviour. Clicking on the property button will bring up the survey properties screen. Refer here for more information on available properties. While editing properties, double clicking on the appropriate cell in the grid will bring up a list of the properties and the appropriate value options for the specific cell from which you can select.
==Create or Edit Reminders==
Reminders (also called invitations) are emails that are sent to survey respondents who have not started their survey. The number of reminders was specified on the first page. Each reminder is sent in turn to the respondents until the last available reminder has been sent. Thereafter the last reminder is sent each time you request that invitations be dispatched.
*Enter the survey reminder text.
*If there is more than one reminder you may want to include the number in the text. EG: This is your second reminder. Please complete survey…
*Reminders support SMHTML tags that will automatically insert things like the survey name, a clickable link, the survey owner, etc. Refer here for a list of available tags and their meanings. Tags are inserted directly into the text of the email and replaced when the emailer dispatches the invitation.
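The reminder-sequencing rule above can be pictured with a small sketch (an illustrative assumption drawn from the text, not the product's actual code): each dispatch sends the next reminder in the sequence, and once the list is exhausted the last reminder is re-sent on every further dispatch.

```python
# Sketch of the reminder-selection rule described above. This helper is
# an assumption based on the documented behaviour, not SurveyManager code.

def next_reminder(reminders, already_sent):
    """Pick the reminder to send given how many have gone out so far."""
    return reminders[min(already_sent, len(reminders) - 1)]

reminders = [
    "Please complete the survey ...",
    "This is your second reminder. Please complete the survey ...",
    "Final reminder: please complete the survey ...",
]

print(next_reminder(reminders, 0))   # first dispatch sends the first reminder
print(next_reminder(reminders, 7))   # later dispatches re-send the last one
```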
==Create or Edit Responses==
Responses are used to define response groups used by survey questions. Response groups can be defined once and reused on many questions and on many surveys. A typical use is for a selection list or radio button group. EG: Select from: I Agree, I Disagree, Unsure.
*Select the type of response (not all types support response groups).
*Dates: enter a range name (response group identifier) and from date and/or to date values.
*Numbers: enter a rating name (response group identifier); select the Integer check box to enforce entry of integer numbers only, enter a rate from and rate to and size in characters.
*For selections, enter:
**OpGroupID (response group identifier);
**OpDisplayStr (value to display on form);
**OpValStr (value to store in database when selected) and
**OpVal (sort value for sorting selections).
*To create selection groups enter each selection option as a separate record with the same OpGroupID value.
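For example, a three-option agreement scale would be entered as three records sharing one OpGroupID. The field names below follow the list above; the OpGroupID and values themselves are made up for the illustration:

```python
# Illustrative records for one selection response group. Field names follow
# the text above (OpGroupID, OpDisplayStr, OpValStr, OpVal); the group
# identifier and values are invented for the example.

agree_scale = [
    {"OpGroupID": "AGREE3", "OpDisplayStr": "I Agree",    "OpValStr": "A", "OpVal": 1},
    {"OpGroupID": "AGREE3", "OpDisplayStr": "Unsure",     "OpValStr": "U", "OpVal": 2},
    {"OpGroupID": "AGREE3", "OpDisplayStr": "I Disagree", "OpValStr": "D", "OpVal": 3},
]

# Every record in the group carries the same OpGroupID, and OpVal
# controls the order in which the options are displayed.
assert len({r["OpGroupID"] for r in agree_scale}) == 1
for option in sorted(agree_scale, key=lambda r: r["OpVal"]):
    print(option["OpDisplayStr"])
```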
==Create or Edit Questions==
Surveys are made up of questions. We call every line in the survey a question - even if it is just a heading.
*You can import questions from the question pool by clicking import. This function allows you to re-use questions you have already defined.
*Select the AutoNumber check box to turn on automatic numbering of questions. Each question requires a unique QID (question identifier) and this function will automatically allocate a QID to new questions. The number mask is displayed next to it. The default mask is the survey name + Q + a three digit incrementing number.
*Question fields:
**QID: Unique question identifier. The questions on a survey form appear in order of QID.
**QuesDisplayID: Question display id used for question number or a reference number.
**Question: Question text appearing on survey form
**LayoutHTML: Advanced layout property. This field is optional and will default from the question layout html field value assigned at the survey level. The value at the survey level defaults from a system configuration value.
**Input: Use ‘selectop’ to define a form field to collect user information or ‘infoop’ to define a survey section break to display section header information.
**DisplayType: If field input type is set to ‘infoop’ then select label. For ‘selectop’ select ‘button’ for a button control, select ‘checkbox’ for a check box control (user can check 0, 1 or many boxes), select ‘droplist’ for a drop down list control, select ‘radio’ for a radio group control (can select only one from list).
**OpGroupID: Use the pop-up form to select a response group. This field is optional. The group selected will be used in assigning the contents of radio button controls, the list appearing in drop down list and the names beside each checkbox control. The group selected can be edited from the Response Options tab beside the question data grid.
**Assign rules: select the Rules tab. The survey form can perform different actions dependent upon the response to a question. The test field contains the expression to evaluate. The ‘Do if true’ and ‘Do if false’ fields store the action to perform.
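The default AutoNumber mask described above (survey name + "Q" + a three-digit incrementing number) can be sketched as a tiny helper. This is illustrative only; SurveyManager allocates QIDs itself:

```python
# Sketch of the default AutoNumber mask: survey name + "Q" + a
# three-digit incrementing number. Illustrative only; the product
# generates these identifiers internally.

def next_qid(survey_name, existing_count):
    """Return the next question identifier under the default mask."""
    return "%sQ%03d" % (survey_name, existing_count + 1)

print(next_qid("StaffSurvey", 0))   # first question on the survey
print(next_qid("StaffSurvey", 41))  # forty-second question
```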
==Assign Instances to the Survey==
Surveys need to be assigned instances in order to be published. Instances are arranged in groups. You can create groups, create instances for a group and assign instance(s) to a survey.
*Default group: The default group enables you to publish a survey and not need to add each survey instance manually.
*‘Filter by Group’ will filter the list of instances by the group selected.
*Create instances (with ‘Row Select’ unchecked) and assign InstanceID (unique instance identifier for group), Description (for descriptive purposes), InGroup (for group membership) and GrpSelector (unique identifier for all instances, order is important for assigning survey responses EG: value assigned for Jan 04 must sort before Feb 04).
*Select instance(s) (with ‘Row Select’ checked) by selecting instances from grid (grid supports multiple selections by holding down the control key) and click on ‘Insert Selected Instances’.
*Edit assigned instances: Field Locked will lock an instance and survey respondents will not be able to modify these surveys, FromDate and ToDate are used to assign the survey response to the correct instance. EG: Surveys completed on 5th Jan 04 will be assigned to Jan 04 instance if instance has date range 1/1/04 to 31/1/04.
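The FromDate/ToDate rule in the last point can be sketched as follows. The field names follow the text above; the lookup helper itself is an illustrative assumption:

```python
# Sketch of assigning a survey response to an instance by date range,
# per the FromDate/ToDate rule above. The helper is illustrative; the
# actual client performs this assignment itself.
from datetime import date

instances = [
    {"InstanceID": "JAN04", "FromDate": date(2004, 1, 1), "ToDate": date(2004, 1, 31)},
    {"InstanceID": "FEB04", "FromDate": date(2004, 2, 1), "ToDate": date(2004, 2, 29)},
]

def instance_for(completed, instances):
    """Return the InstanceID whose date range contains the completion date."""
    for inst in instances:
        if inst["FromDate"] <= completed <= inst["ToDate"]:
            return inst["InstanceID"]
    return None

# A survey completed on 5th Jan 04 lands in the Jan 04 instance.
print(instance_for(date(2004, 1, 5), instances))
```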
==Publish the survey to People==
Surveys need to be assigned to people in order to distribute the survey.
*Publish to people: Select person(s) (grid supports multiple selections by holding down the control key) and click ‘Publish to Selected People’.
*Send survey reminders: Click ‘Send Next Reminder Now’ to send notification email messages.
*Edit properties of people selected: ‘Remove’ will delete the assigned user, ‘Lock’/’Unlock’ will lock/unlock a user from accessing the survey, ‘Reset Rmdr’ will reset the reminder count so that the next send survey reminders process will send the first reminder.
==Preview the Survey==
Preview the survey form.
*Refresh: Close and re-open survey form
*Preview PID: Select user to impersonate when previewing survey form.
=Advanced Survey functions - Using Advanced Login=
{| border="2" cellpadding="2"
|-
!Task !! Location !! Functions
|-
|Create Survey Respondents
|‘People’ tab
|Insert a record into the People list: assign a unique PID, enter your name, a valid e-mail address and OrgRole='Admin' or ‘User’.
|-
|Assign Respondents to Organisation(s)
|‘Organisation’ tab
|
*Set the radio button to 'Select mode'. Highlight the username and organisation in the data grids.
*Drag and drop the OrgID to the 'Is a member of the following Organisations' data grid.
|-
|Publish Survey to Web Server
|‘Communications’ tab
|
*Select the survey you want to publish (using the wizard).
*Enter the publishing server name. EG: <nowiki>http://yoursubdomain.bishopphillips.com.au/staffsurvey/SurveyManager1.dll</nowiki>
*On the Communications Centre tab enter your PID (for person) and OrgID (for organisation).
*Hit ‘Connect’.
*Send & retrieve data by clicking on the buttons below.
|-
|Perform Reporting & Analysis
|‘Analyse’ tab
|
*Use the Edit Reports tab to select a stored report or add a new report. Reports are assigned to a survey, have an identifying RepID and contain a command script (SQL statement).
*Use the Preview tab to run the current query and retrieve results.
|}
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
4c8aa897feb7d264b332834c6f483de689200c66
BPC SurveyManager - Tutorials - Survey Layouts
0
431
626
2019-09-10T15:50:12Z
Bishopj
1
Created page with "=Introduction To Survey Layouts= In the survey layout tutorials we shall look at some simple and some advanced layouts and behaviours and how you build them. In the course o..."
wikitext
text/x-wiki
=Introduction To Survey Layouts=
In the survey layout tutorials we shall look at some simple and some advanced layouts and behaviours and how you build them. In the course of these discussions we will be providing content that you can copy and paste into your BPC SurveyManager or BPC RiskManager clients to use to create the surveys. Feel free to take whatever you want from these tutorials, and use them for any purpose, including both private and commercial purposes.
In each case we will be concentrating on the header section of the survey record and specifically looking at:
* Survey HTML Layout
* Survey QScript
* Survey Row Definition
* Survey Properties
=The Tutorials=
* [[SM Tutorial 1: A Simple Common Survey]]
* [[SM Tutorial 1: A One Page Static Menu]]
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
2ca3f1811da79776a5d4b79520c61b60b0f11d38
SM Tutorial 1: A Simple Common Survey
0
432
627
2019-09-10T15:51:01Z
Bishopj
1
Created page with "=Introduction= In this survey we shall build a survey with the cool blue look you have seen in some of the examples. Remember, we are only building the "appearance of the su..."
wikitext
text/x-wiki
=Introduction=
In this survey we shall build a survey with the cool blue look you have seen in some of the examples. Remember, we are only building the "appearance of the survey"; the content of the questions and headings is up to you. This presentation is suitable for all classic question-answer survey purposes, and is probably the most common format you will use. It is easy to change the colouring, fonts, etc. to suit your needs.
This is what it looks like with questions loaded:
[[IMAGE:TutSM1_SurveyExampl1.gif]]
=Getting This Look=
Step 1: Create a new survey in one of the BPC SurveyManager clients and call it whatever you like.
Step 2: Working in the header section only, paste the following into the "Layout-Form Shell" field:
<pre>
<HTML>
<HEAD>
<#JVScriptLib1><#CSSSheet ><#csslib1 >
<style type="text/css">
body {
scrollbar-3d-light-color : #AF75EA;
scrollbar-arrow-color : #0033cc;
scrollbar-base-color : #9999CC;
scrollbar-dark-shadow-color : #000000;
scrollbar-face-color : #9999CC;
scrollbar-highlight-color : #ffffff;
scrollbar-shadow-color : #1B0037;
background-color: #CCCCFF;
}
.style22 {color: #FFFFFF; font-weight: bold; font-size: 14px; font-family: Arial, Helvetica, sans-serif; }
.style24 {color: darkblue; font-weight: bold; font-size: 40px; font-family: Arial, Helvetica, sans-serif; }
</style>
</HEAD>
<BODY id="pagecontainer" >
<table width=100% >
<tr><td>
<table>
<tr>
<td></td>
</tr>
</table>
</td>
<td align=center >
<br >
<span class=style24 ><#SurveyName></span >
</td>
<td align=right ><#fpropimage FPROP="SIDLogo" width="110" height="120" >
</td>
</tr>
</table>
<hr>
<BPCDEBUG >
<table width=100%>
<tr ><td ><font color="red" ><i><#errormessage ></i ></font ></td >
</tr >
</table >
<span class="SurveyIntro" ><#SurveyIntro ></span >
<#SurveyBody >
<hr>
<p><span class="PageProgress"><#PageProgress ></span></p>
<p><input type=button name="cmdPring" value="Print Page" onClick="PrintMePage('pagecontainer')" ></p>
<p align=center ><i>Note: You may use your back or forward browser buttons and reenter information. - Just remember to press the continue button at the end of each page for which you have changed your response or the changes will not be recorded.</i></p>
<span class="SurveyHelp" ><#SurveyHelp></span ><#HelpLink ><br>
<span class="SurveyPrivacy" ><#SurveyPrivacy></span ><#PrivacyLink ><br >
<BPCDEBUG >
</BODY>
</HTML>
</pre>
Step 3: In the "Layout - Rows" field paste the following:
<pre>
RT_TR=<#QuesDisplayID valign=top ><#Question valign=top ><#OpType>
RT_TH=<tr><TH background="DoGetFile?URL=/bluebar_single.jpg&PID=PRVW" ><font color="darkblue">Number</font></TH><TH background="DoGetFile?URL=/bluebar_single.jpg&PID=PRVW" ><font color="darkblue">Question</font></TH><TH background="DoGetFile?URL=/bluebar_single.jpg&PID=PRVW" ><font color="darkblue">Response</font></TH></tr>
RT_TRCOLOUR=#DDDDFF,white
</pre>
Step 4: In the "Survey Properties" tab (or window, depending on the client you are using) copy and paste the following. You paste by right-clicking on the edge of the grid and choosing paste (if paste is not highlighted, then switch "row select" on and off in the context menu and it should enable paste).
<pre>
<smxmlpacket ><survey_propertyQry ><rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >CSSSheet</PropertyID ><PValue ><style type="text/css" >.breaka {page-break-after: always}.breakb {page-break-before: always}@page {margin: 1cm }</style ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >CSSUse</PropertyID ><PValue ><br class="breaka" ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >ErrorHTML</PropertyID ><PValue ><#ErrorMsg colspan=3 align=center BgColor=yellow style="color:red;font:bold;" ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >ErrorMsgPos</PropertyID ><PValue >below</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >GenPageReqErrorMsg</PropertyID ><PValue >***Please Note: You have not filled in a required question. Please refer to the identified question below. ***</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >GLV</PropertyID ><PValue >True</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >PrivacyLink</PropertyID ><PValue ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >RadioFrmt</PropertyID ><PValue >width=100%</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >ReqErrorMsg</PropertyID ><PValue >The question above is a required question. Please complete prior to continuing.</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SIDLogo</PropertyID ><PValue ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SuppressFirstContinue</PropertyID ><PValue >False</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SuppressFirstHeading</PropertyID ><PValue >False</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SurveyClosed</PropertyID ><PValue ><P>The Survey is Now Closed.</P>
<P>Thankyou for your interest.</P>
</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SurveyEnd</PropertyID ><PValue ><P><BR>
<H1 align=center><FONT color=blue>The Survey is now complete.</FONT></H1><BR>
<P></P><BR>
<P><BR>
<H3 align=center>Should you wish to revise your responses, you can do so prior to the survey response cut off date by using<BR>the browser's back button now, or reclicking on the link in the original email later. <BR><BR>Thankyou again for your participation.</H3>
<P></P><BR><BR><BR></FONT><BR><BR><BR><BR><BR><BR><BR>.<FONT color=white></FONT>
</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SurveyHeading</PropertyID ><PValue ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SurveyHelp</PropertyID ><PValue ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SurveyIntro</PropertyID ><PValue ><P>&nbsp;</P>
</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SurveyMessage</PropertyID ><PValue ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >SurveyPrivacy</PropertyID ><PValue ></PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >default</SID ><InstanceID >default</InstanceID ><PropertyID >UseWeight</PropertyID ><PValue ></PValue ></rowset >
</survey_propertyQry ></smxmlpacket >
</pre>
Alternatively, you can enter the properties manually, using InstanceID="default" in all cases for each PropertyID:
*CSSSheet= <style type="text/css" >.breaka {page-break-after: always}.breakb {page-break-before: always}@page {margin: 1cm }</style >
*CSSUse= <br class="breaka" >
*ErrorHTML= <#ErrorMsg colspan=3 align=center BgColor=yellow style="color:red;font:bold;" >
*ErrorMsgPos= below
*GenPageReqErrorMsg= ***Please Note: You have not filled in a required question. Please refer to the identified question below. ***
*GLV= True
*PrivacyLink= <pre >http://myorg/myprivacypage.html </pre>
*RadioFrmt= width=100%
*ReqErrorMsg= The question above is a required question. Please complete prior to continuing.
*SIDLogo= /myfolder/MainLogo.jpg
*SuppressFirstContinue= False
*SuppressFirstHeading= False
*SurveyClosed= <pre ><P>The Survey is Now Closed.</P><P>Thankyou for your interest.</P></pre>
*SurveyEnd=
<pre>
<P><BR>
<H1 align=center><FONT color=blue>The Survey is now complete.</FONT></H1><BR>
<P></P><BR>
<P><BR>
<H3 align=center>Should you wish to revise your responses, you can do so prior to the survey response cut off date by using<BR>the browser's back button now, or reclicking on the link in the original email later. <BR><BR>Thankyou again for your participation.</H3>
<P></P><BR><BR><BR></FONT><BR><BR><BR><BR><BR><BR><BR>.<FONT color=white></FONT>
</pre>
*SurveyHeading=
*SurveyHelp=
*SurveyIntro= <P>&nbsp;</P>
*SurveyMessage=
*SurveyPrivacy=
*UseWeight= False
Step 5: (Optional) Load a logo into a folder using one of the logo or file upload methods (the web client will do this for you when you build the header, while the RM and SM Desktop clients provide dedicated upload facilities). Then replace the entry for "SIDLogo" in Step 4 with the correct path to your logo.
Step 6: In the "Survey - Script" tab/window of the header, insert "@([*])" (excluding the quotes). This is the default qscript: it tells the rules engine to display a single question group per page, and to fetch the next question group each time a new page has to be displayed. It can be overridden by question-level rules.
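The default qscript's paging behaviour can be sketched in plain Python. This is a minimal simulation under the assumption that "@([*])" means "one question group per page, advancing sequentially"; the function and data shapes are hypothetical and are not part of the SurveyManager engine's API.

```python
def paginate_question_groups(groups):
    """Yield (page_number, group) pairs: one question group per page,
    served in order, until none remain -- a sketch of the default qscript."""
    for page_number, group in enumerate(groups, start=1):
        yield page_number, group

# Three question groups become three pages, displayed one after another:
pages = list(paginate_question_groups(["Intro", "Section A", "Section B"]))
```

Question-level rules, where present, override this simple sequential behaviour.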
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
0f6c7a6093166dd8c1579372fafbb4094306cabf
SM Tutorial 1: A One Page Static Menu
0
433
628
2019-09-10T15:52:49Z
Bishopj
1
Created page with "=Introduction= In this survey we shall build a survey that can act as the default front menu page for your surveymanager organisation. You have seen this in the demo example..."
wikitext
text/x-wiki
=Introduction=
In this tutorial we shall build a survey that can act as the default front menu page for your SurveyManager organisation. You have seen this in the demo examples. Remember, we are only building the "appearance of the survey"; the content of the questions/menus and headings is up to you. This presentation is suitable for an anonymous entry point (as you can force a login to each survey). You could combine it with the cool blue look to get a blue page instead. It is easy to change the colouring, fonts, etc. to suit your needs.
The survey includes a number of interesting features, like surveys within surveys, display of random news messages using the built-in SurveyManager news engine, and floating help prompts at the bottom of the page as you mouse over the buttons. Each time you refresh the page, the news display changes, and when you click a button a survey is launched that returns to this page when it completes.
This is what it looks like with questions loaded:
[[Image:TutSM2_SurveyExampl1.gif]]
=Getting This Look=
==Stage 1. Making the Main Survey==
Step 1: Create a new survey in one of the BPC SurveyManager clients and call it whatever you like. If you want this to be the default survey executed when no survey is requested (and no org id is provided), make sure you call it "default" and store it in the "default" organisation. If you just want it to be the default for an organisation, then store it as "default" in whichever organisation you are working on.
Step 2: Working in the header section only, paste the following into the "Layout-Form Shell" field:
<pre>
<HTML>
<HEAD>
<#JVScriptLib1>
</HEAD>
<BODY id="pagecontainer" >
<table width=100% bgcolor="black" >
<tr>
<td><table><tr></tr></table></td>
<td align=center >
<br ><H1 ><font color="yellow"><#surveyname ></H1><br>
For Organisation: <#OrgID > (<#orgdescription > )
</td>
<td align=center ><#fimage FSRC="/BPCF8aBlackLR.jpg" width="110" height="120" ></td>
</tr>
</table>
<BPCDEBUG >
<table width="100%" >
<tr ><td ><font color="red" ><i><#errormessage ></i ></font ></td ></tr >
</table >
<table border=1 width="100%" >
<tr >
<td colspan=3 >
<#news OID=default ITM=any height=80 >
</td >
</tr >
<tr >
<td VALIGN=Top >
<#SurveyBody SID=MENU2 SIDO=default >
<#SurveyBody SID=MENU6 SIDO=default >
</td >
<td >
<#SurveyBody SID=MENU3 SIDO=default >
<#SurveyBody SID=MENU4 SIDO=default >
</td >
<td >
<#SurveyBody SID=MENU5 SIDO=default >
<#SurveyBody SID=MENU7 SIDO=default >
</td >
</tr >
<tr >
<td colspan=3 >
<div name=hintbox id=hintbox ></div>
</td >
</tr >
</table >
<br><br><hr >
<BPCDEBUG >
</BODY>
</HTML>
</pre>
Step 3: In the "Layout - Rows" clear any entries.
<pre>
</pre>
Step 4: In the "Survey Properties" tab (or window - depending on the client you are using) delete all entries.
<pre>
</pre>
Alternatively, you can remove any value for any properties present.
Step 5: (Optional) Replace the FSRC reference in the layout (<#fimage FSRC="/BPCF8aBlackLR.jpg" width="110" height="120" >) with the path to your logo/image in the SM database (or change the tag to a conventional image tag referencing a logo you have stored at a conventional URL).
Step 6: In the "Survey - Script" tab/window of the header, clear any script that is present, and in the survey header set the default questions per page to 0. In other words, this survey has no questions.
This completes the definition of the main menu survey.
==Stage 2. Making The Menus==
Repeat these steps for each menu survey you referenced in the layout of the main menu survey above.
Step 1: Create a new survey in one of the BPC SurveyManager clients and call it "MENU2" (or whatever name(s) you used for your menus). Change the default "Row Layout Type" to "GRIDTABLE" in the header.
Step 2: Working in the header section only, paste the following into the "Layout-Form Shell" field:
<pre>
<table width=100% >
<tr>
<td align=center >
<br ><H1 ><font color="darkred"><#surveyname ></H1><br >
</td>
</tr>
</table>
<BPCDEBUG >
<#SurveyBody >
<br><br>
<hr >
<BPCDEBUG >
</pre>
Step 3: In the "Layout - Rows" paste the following:
<pre>
GR_TH=<!- comment >
GR_TR=<tr ><#Question align="center" style="width:100%%" ></tr ><tr ><#OpType align="center" style="width:100%%" ></tr >
GR_TRSTART=<tr ><td style="width:100%" ><table width="100%" >
GR_TREND=</table ></td ></tr >
GR_TTSTART=<table Width="100%" BgColor="white" >
GR_TTEND=</table >
</pre>
Step 4: In the "Survey Properties" tab (or window - depending on the client you are using) copy and paste the following. You paste by right clicking on the edge of the grid and choosing Paste (if Paste is not highlighted, toggle "row select" on and off in the context menu and it should enable Paste).
<pre>
<smxmlpacket ><survey_propertyQry ><rowset ><OrgID >default</OrgID ><SID >MENU2</SID ><InstanceID >default</InstanceID ><PropertyID >MOHint</PropertyID ><PValue >True</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >MENU2</SID ><InstanceID >default</InstanceID ><PropertyID >MOHintID</PropertyID ><PValue >hintbox</PValue ></rowset >
<rowset ><OrgID >default</OrgID ><SID >MENU2</SID ><InstanceID >default</InstanceID ><PropertyID >NoContinue</PropertyID ><PValue >True</PValue ></rowset >
</survey_propertyQry ></smxmlpacket >
</pre>
Note that you do not need to edit this packet even if your survey has a different name, as the paste function will replace these entries with the right entries for your current survey.
Alternatively, you can enter the properties manually, using InstanceID="default" in all cases for each PropertyID:
*MOHint= True
*MOHintID= hintbox
*NoContinue= True
The MO properties (MOHint and MOHintID) are used by the BPCSM javascript library to decide whether to display mouse-over hints, and where to display them. This is how the mouse-over help is done.
Step 5: In the "Survey - Script" tab/window of the header enter "@(.[*])" (minus the quotes) and set the default page size to 1 question.
Step 6: To make sense of this menu survey you need to add a single question (the survey has only one "selectop" question, which causes the button menu to display). This is really the only tricky bit of this whole survey set up.
(Note - you may want to do Step 7 first, but read this so you know what is going on).
We are going to add a single selection list type question & response, and display it as a vertical list of buttons that launch BPC SurveyManager surveys when clicked. We will call that list "mstart4", but you can call it anything you like; use only letters and numbers (no spaces) in the name. We will make that list in Step 7, but it would really be better if you did that first and then returned to Step 6 after.
* Add a single question with the name of your menu as the text. In the example it is "Staff Survey Examples".
* Set the input type to "selectop" and the display type to "jsmbutton"
* Set the question properties to:
**ItemFrmt: style="width:100%"
**RadioAln: Vert
* Next, set the OpGroupID to "mstart4", or to whatever name you gave the selectop group you want to use for the menu. (Haven't made one yet? Then read on...)
Step 7: Making the survey selection list.
The survey selection list can take a number of forms, and the differences depend on how you have set up each survey and the launch mode intended to be used. At its simplest, the launched survey inherits all the identifying information such as PID, RK and ORGId etc from the launching survey. So if you accessed the launching survey with a login, or through an invitation, that information will be inherited - unless it is overridden in the launch method. The jsmbutton is a launch method that assumes you want as much to be inherited as possible.
Go to the section of your client that allows you to create selectop lists (this varies depending on which client you are using).
You will need to create one selectop for each button (and therefore survey) you want to be available in the button menu. All selectops in the menu will have to share a common selectop group ID (in our example we are calling this "mstart4"). Each selectop will need a unique selectop id (this is normally auto-allocated, but can be done manually as well).
For each selectop you will need to provide:
*OpGroupID (This is the shared group name that links all the selectops into a single group. It should be the same for each member of the select group.) Eg: "mstart4"
*OpDisplayStr (This is the text displayed on the button) Eg: "Staff Survey 2003 with Side Menu"
*OpValStr (This is the value ascribed to the selection; in the case of the jsmbutton it is the SurveyID launch string - effectively the action performed when you press a menu button.) Eg: "Survey001". Any special launch modes or data you want to override should be added to this entry. For example, if you want to launch the survey in report mode you might use "Survey001&VOM=report".
*OpVal (This is the numeric value of the option. It is a good idea to use a unique number, but this item has little importance in this example.)
*OpOrder (This is just the numeric order in which the selectops in the group are displayed. If it is blank, the opgroupid is used in ascending alpha sort order.)
*OpHintStr (This is the hint that the mouse-over will display in the hintbox defined in the parent survey.) Eg: "Show a staff survey with a side menu, popup hints, questions that appear based on responses to another question and various format changes throughout."
Repeat this for each survey you want to launch from a menu button. Of course you might be launching the same survey in different modes - data entry, numeric report and graphical report, preview and a variety of others. You can have exactly the same OpValStr for each selectop - it doesn't matter (although it would be rather pointless).
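The OpOrder rule above (numeric order when present, an alphabetical fallback when blank) can be sketched as follows. This is an illustrative Python simulation, not SurveyManager code; treating the fallback as alphabetical on OpDisplayStr is our assumption, since the text says the opgroupid is used but that value is shared by every member of the group.

```python
def order_selectops(selectops):
    """Sort selectop dicts for display: explicit numeric OpOrder first
    (ascending), then blank-OpOrder entries in alphabetical order
    (assumed here to be on OpDisplayStr)."""
    def key(op):
        if op.get("OpOrder") is not None:
            return (0, op["OpOrder"], "")
        return (1, 0, op["OpDisplayStr"])
    return sorted(selectops, key=key)

menu = order_selectops([
    {"OpOrder": None, "OpDisplayStr": "Zeta survey"},
    {"OpOrder": 1, "OpDisplayStr": "Staff Survey 2003 with Side Menu"},
    {"OpOrder": None, "OpDisplayStr": "Alpha survey"},
])
```

Each entry in `menu` would then be rendered as one jsmbutton, top to bottom.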
After you have created and saved your selectop group members, go back to step 6 and make sure you have set the selectop group id to "mstart4" or whatever you called this group.
This completes the definition of the main/default and submenu surveys.
=Stage 3. Add Some News=
The built-in news engine has a very simple news editor as part of the SurveyManager library. It is accessed with the command /DoEdNews. For example, if you typed the following into the browser address bar to bring up the SurveyManager surveys:
<pre>
http://myorg.com/sm/MySurveyManager1.dll/DoSurvey
</pre>
You would enter the following into your browser:
<pre>
http://myorg.com/sm/MySurveyManager1.dll/DoEdNews
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
e1b7177f41fea07740c5f4b4a629fd6547599471
Category:Risk Management
14
434
629
2019-09-10T15:59:02Z
Bishopj
1
Created page with "Articles on Risk Management both theory and applied."
wikitext
text/x-wiki
Articles on Risk Management both theory and applied.
59ea5d0725b49ee69aa9e671a477fc85110b46e7
Category:Risk Management - Software
14
435
630
2019-09-10T16:00:23Z
Bishopj
1
Created page with "Articles relating to software addressing Risk Management, Compliance Management, Governance Information Systems and associated support systems."
wikitext
text/x-wiki
Articles relating to software addressing Risk Management, Compliance Management, Governance Information Systems and associated support systems.
c398cce17e8a070943a0690d87f5af1bf873dae9
Category:BPC RiskManager User Manual
14
436
631
2019-09-10T16:02:22Z
Bishopj
1
Created page with "Articles including the BPC RiskManager user manuals, tutorials, examples, and such other material as comprises the expanded user manual for the BPC RiskManager suite of softwa..."
wikitext
text/x-wiki
Articles including the BPC RiskManager user manuals, tutorials, examples, and such other material as comprises the expanded user manual for the BPC RiskManager suite of software tools.
c0e5bde3664d266825dc750776a2c6234d289eef
File:US Military Networked Simlator Projects 1938 To 2001 036.jpg
6
437
632
2019-09-10T16:16:22Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Learning In Virtual Worlds
14
438
633
2019-09-10T16:20:27Z
Bishopj
1
Created page with "Articles comprising the Learning in Virtual Worlds Topic"
wikitext
text/x-wiki
Articles comprising the Learning in Virtual Worlds Topic
7a667d92e2609fd00d52a4a6ddc502e1a0fef486
Category:Book - Real Learning in Virtual Worlds
14
439
634
2019-09-10T16:22:06Z
Bishopj
1
Created page with "Articles comprising the text of the book Real Learning in Virtual Worlds."
wikitext
text/x-wiki
Articles comprising the text of the book Real Learning in Virtual Worlds.
3bed9e486ccbd2e79861d368ba85f971340544fa
Category:BPC RiskManager V6 Installation
14
440
635
2019-09-10T16:25:21Z
Bishopj
1
Created page with "=Category: BPC RiskManager V6 Installation= The RiskManager V6 Installation category includes pages dealing with installation procedures for the BPC RiskManager family of ri..."
wikitext
text/x-wiki
=Category: BPC RiskManager V6 Installation=
The RiskManager V6 Installation category includes pages dealing with installation procedures for the BPC RiskManager family of risk management software applications. BPC RiskManager is an industrial strength enterprise governance system covering the functions of risk, compliance, insurance, claims management, audit, and planning for both public and private organisations. It is available world wide from Bishop Phillips Consulting.
Installation and upgrade are 100% automated on single user and small network installations, and 95% automated on enterprise networks, with multiple installation frameworks available in the installer package. You can, however, perform all steps manually for total customisation, and change your installation configuration at any time after installation.
675496f02b6e895c78df8d21741e2b82e4eedb69
Category:BPC RiskManager V6 System Administration
14
441
636
2019-09-10T16:26:59Z
Bishopj
1
Created page with "=Category: BPC RiskManager V6 System Administration= The RiskManager V6 Installation category includes pages dealing with installation procedures for the BPC RiskManager famil..."
wikitext
text/x-wiki
=Category: BPC RiskManager V6 System Administration=
The RiskManager V6 System Administration category includes pages dealing with system administration procedures for the BPC RiskManager family of risk management software applications. BPC RiskManager is an industrial strength enterprise governance system covering the functions of risk, compliance, insurance, claims management, audit, and planning for both public and private organisations. It is available worldwide from Bishop Phillips Consulting.
All system configuration decisions are defaulted during installation, which is 100% automated on single user and small network installations and 95% automated on enterprise networks, with multiple installation frameworks available in the installer package. You can, however, perform all steps manually for total customisation, and change your installation configuration at any time after installation.
All system configuration decisions are preserved through upgrades.
The system will run unattended and can be essentially ignored, even on enterprise systems, provided database backups are automated and/or performed regularly for all RiskManager databases.
While defaults are set during installation, based on the questions you answer and the system setup discovered during installation, there are many ways of arranging the components across multiple machines and databases, and of tuning the system.
If you are hosting complex multi-organisation or other multi-database setups, there are administration facilities built into the application server and other support tools that directly support rollouts of new configurations and component updates across all databases and web sites in one step, with database- or organisation-specific context-sensitive changes.
069d35cf285fa8af994d47860a4452ccfe273123
Category:RiskManager FAQ
14
442
637
2019-09-10T16:28:30Z
Bishopj
1
Created page with "=Category: BPC RiskManager Frequently Asked Questions= ==Introduction== The RiskManager FAQ category includes pages dealing with common questions asked about the BPC RiskMa..."
wikitext
text/x-wiki
=Category: BPC RiskManager Frequently Asked Questions=
==Introduction==
The RiskManager FAQ category includes pages dealing with common questions asked about the BPC RiskManager family of risk management software applications. BPC RiskManager is an industrial strength enterprise governance system covering the functions of risk, compliance, insurance, claims management, audit, and planning for both public and private organisations. It is available world wide from Bishop Phillips Consulting.
The headline and summary [[BPC RiskManager Frequently Asked Questions|page for this category can be found here.]]
<br>
a5235eb83ef5e931c5980b1483a7859e40675a2f
Category:Bishop Phillips Software
14
443
638
2019-09-10T23:54:35Z
Bishopj
1
Created page with "Articles on matters related to any software application from Bishop Phillips Consulting P/L."
wikitext
text/x-wiki
Articles on matters related to any software application from Bishop Phillips Consulting P/L.
9462213343f9cb8d3fb73d50192aba95567794e5
Drafts:SurveyManager Stargate Project
0
444
639
2019-09-11T02:00:41Z
Bishopj
1
Created page with "=Bishop Phillips BPC SurveyManager Wiki Mode= ==Introduction== In the wiki mode, the SM engine enables a freeflowing survey creation, viewing and responding environment. J..."
wikitext
text/x-wiki
=Bishop Phillips BPC SurveyManager Wiki Mode=
==Introduction==
In the wiki mode, the SM engine enables a free-flowing survey creation, viewing and responding environment. Just as some pages in a wiki might be protected from editing, some surveys a user sees are protected from change while others are not. Those protected from change may allow data change (but not layout/content change), or disallow even data change (but allow the viewing mode to be changed between reporting modes, or not even that).
==Components==
The SM Wiki has:
*an enhanced WYSIWYG editor that can display entire survey elements.
*an enhanced layout engine capable of working with tags that embed an expanded set of survey element and meta tags supporting the LGROUP attribute extension
*an enhanced layout scripting engine capable of working with the LGROUP attributes
*an expanded rules engine capable of manipulating active LGROUPs.
==LGROUPs==
The LGROUP attribute of markup tags describes to which layout groups the element belongs. One or more layout groups can be specified with a comma separated list.
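The membership test this implies can be sketched in a few lines. This is an illustrative Python simulation under the stated assumption that an element displays only when the active layout group appears in its comma-separated LGROUP list; the function name is hypothetical, not part of the SM engine.

```python
def element_in_active_lgroup(lgroup_attr, active_group):
    """Return True if the element's LGROUP attribute (a comma-separated
    list of layout group names) contains the active layout group."""
    groups = [g.strip() for g in lgroup_attr.split(",")]
    return active_group in groups

# E.g. a tag with LGROUP="3,normal" displays when the active group is 3
# or normal, but not when it is 1:
element_in_active_lgroup("3,normal", "normal")   # True
element_in_active_lgroup("3,normal", "1")        # False
```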
==Thoughts==
Define a survey layout like this (no #SurveyBody Tag):
<pre>
<#survey UVW LGROUP="1,2" DEFLGROUP=1 > //Defines a survey that maps the active LGROUP to 1 if it is default
//(the starting position with no questions and no initialised LGROUP.
<#survey XYZ LGROUP=3 > //Defines a survey that will only work on LGROUP 3
<#startsurvey UVW LGROUP="1,2" > //Issue start survey layout if active LGROUP is 1 or 2
<#question a LGROUP=1 > //Display this question if active LGROUP is 1
<#question b LGROUP=1 >
<#question c LGROUP="1" >
<#question aa LGROUP=2 > //Display this question if active LGROUP is 2
<#question ab LGROUP=2 >
<#question ac LGROUP=2 >
<#questiongroup g1 LGROUP=2 > // Display a question group (generates a qlist and implies a surveybody tag? )
<#endsurvey UVW LGROUP="1,2" > //Requiring LGROUPS in the in an endsurvey tag allows the survey end to be shifted for different LGROUPS (or even omitted)
<#startsurvey XYZ LGROUP=3,normal >
<#question ba LGROUP=3,normal > //Display this question if active LGROUP is 3 or normal
<#question bb LGROUP=3,normal >
<#question bc LGROUP=3,normal >
<#question bX LGROUP=2 > // This should be an error condition - but how can we detect it, when it might also be ok (such as where info only is displayed).
<#endsurvey XYZ LGROUP=3 >
<#surveybody LGROUP=normal >
<#endsurvey XYZ LGROUP=normal > //Survey end moved for normal LGROUP
</pre>
Now add the LGROUP to the QScript:
@{1,2,3}
or
@{default} //Sets to the default LGROUP (which is remapped to 1 in the above script) if no active LGROUP (otherwise use the one supplied
@{* of {default, 1, 2, 3}} //Sets to the default LGROUP (which is remapped to 1 in the above script) if no active LGROUP (otherwise uses the one after the active one in the list
@( iferror( {LGROUP}, if( in(LGROUP,{default, 1}), {1}, if( GT(.a.Value, 88), if( notempty(QLIST), {normal} , next( if(LGROUP={normal}, PREVLGROUP, LGROUP), {default, 1, 2, 3}), { 1 } ) ) )}
:This reads: "if the active LGROUP is one of default or 1, then show all tags with LGROUP 1; else, if the value answered by the current user for survey question a is greater than 88, then if there are rule-generated questions, show those questions in the current QLIST; else set the next active LGROUP to either the previous LGROUP (if the active LGROUP is "normal") or the one after the currently active LGROUP in an ordered list of LGROUPs, and display tags matching that LGROUP." The QLIST is displayed by the <#surveybody LGROUP=normal >, which only shows when the normal group is active. In the absence of a surveybody tag the layout tags act as an additional filter, because only those tags both in the layout and in the QLIST will display. In the event of an error in an input page (such as when a required question is omitted), we must redisplay the current active LGROUP and the current QLIST.
Question: Should we allow LGROUP to be a list (rather than a single item)?
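The draft rule can be transcribed into plain Python to check that its branches make sense. Every name below (next_lgroup, a_value, qlist, the LGROUP ordering) is an illustrative assumption, not SurveyManager API; the code only mirrors the if/else reading given above.

```python
def next_lgroup(active, prev, a_value, qlist, error=False,
                order=("default", "1", "2", "3")):
    """Sketch of the draft QScript rule: decide which LGROUP to display."""
    if error:                        # iferror(...): on an input-page error,
        return active                # redisplay the current active LGROUP
    if active in ("default", "1"):   # in(LGROUP, {default, 1})
        return "1"
    if a_value > 88:                 # GT(.a.Value, 88)
        if qlist:                    # notempty(QLIST): rule-generated questions
            return "normal"
        base = prev if active == "normal" else active
        i = order.index(base)        # next(...) over the ordered LGROUP list
        return order[(i + 1) % len(order)]
    return "1"
```

For example, with the active LGROUP at 2, question a answered above 88 and no pending QLIST, the rule advances to LGROUP 3.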
c69d0d6b94480a720577036c51ff6e392c894f3c
Drafts:Main Page
0
445
640
2019-09-11T02:37:18Z
Bishopj
1
Created page with "* [[Drafts:SurveyManager Stargate Project]]"
wikitext
text/x-wiki
* [[Drafts:SurveyManager Stargate Project]]
9f55733a32715a15ab9ba4a3164e39af03bee8bc
File:RMWC WSSetup3.png
6
446
641
2019-09-11T13:15:27Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP file Exampl1.gif
6
447
642
2019-09-11T13:28:15Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:TEXOP admlist Exampl1.gif
6
448
643
2019-09-11T13:36:02Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Sidlist Exampl1.gif
6
449
644
2019-09-11T13:42:08Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMDS RMWD8.png
6
450
645
2019-09-11T16:21:20Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:RMSS HTTPSrvr10.png
6
451
646
2019-09-11T16:30:17Z
Bishopj
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
BPCStndLib1
0
452
647
2019-09-11T16:35:33Z
Bishopj
1
Created page with "==Language Tokenising and Parsing, tree, date and file support library== Language: Delphi 7 - 2007 * String Tokenising functions * BPC FLang1 Language parsing routines * DAT..."
wikitext
text/x-wiki
==Language Tokenising and Parsing, tree, date and file support library==
Language: Delphi 7 - 2007
* String Tokenising functions
* BPC FLang1 Language parsing routines
* DATE to String functions
* GENERAL FILE Import/Export Functions - COMMA TEXT
Use the bpcSMScript equivalents in preference to these routines where available.
<pre>
interface
uses windows, classes, Sysutils, db, DBTables;
const
bpslSkipSpace = 1;
bpslDropDelimeters = 2;
bpslPushBack = 4;
bpslMatchSE = 8;
bpslToEOL = 16;
bpslDelimOveridesNum = 32;
bpslCmprsSpace = 64;
type
TbpTreeNodeType = ( tttnCell, tttnOp, tttnVal );
{
TbpTreeNode = class( Tobject )
private
public
NodeType : TbpTreeNodeType;
Token : string;
Done : boolean;
Value : double;
LeftPtr : TbpTreeNode;
RightPtr : TbpTreeNode;
constructor Create( myToken : string);
destructor Destroy ;
function Eval : boolean;
function FindInTree( Target : string ) : TbpTreeNode;
function BuildTree( SourceString : string ) : TbpTreeNode;
end;
}
tbpReservedWord = ( tbpRwNIL, tbpRwILLEGAL, tbpRwVAR, tbpRwIGNORE, tbpRwPOST, tbpRwHOLD, tbpRwIF, tbpRwDO, tbpRwEQ, tbpRwNEQ, tbpRwLT, tbpRwGT ,
tbpRwLTE, tbpRwGTE ,tbpRwASSIGN, tbpRwPLUS, tbpRwMINUS, tbpRwTIMES,
tbpRwDIVIDE, tbpRwPERIOD, tbpRwSEMIC, tbpRwCOLON, tbpRwCOMMA, tbpRwLEFTBRACKET, tbpRwRIGHTBRACKET,
tbpRwNOT, tbpRwTRUE, tbpRwFALSE, tbpRwAND, tbpRwOR );
TbpTokenType = (ttComma, ttMapArg, ttReserved );
TbpMapArgType = (ftxNil, ftxInt, ftxDateYearLead, ftxDateLongMonLead, ftxDateDayLeadTime, ftxDateDayLead, ftxDateYearLeadTime, ftxDateYearLeadWithDiv, ftxChar, ftxMoney, ftxFloat, ftxLogical, ftxBlob, ftxQChar, ftxQNil);
TbpTrimType = (ttTrimLeft, ttTrimRight, ttTrimBoth );
TbpTokenListType = (tltRelation, tltMap);
TbpToken = class(TObject)
private
public
TokenType : TbpTokenType;
TokenString : string;
end;
TbpMapDividerToken = class(TbpToken)
private
public
DividerWord : string;
constructor Create(myDivider : string);
end;
TbpMapReservedWordToken = class(TbpToken)
private
public
ReservedWord : string;
constructor Create(myReservedWord : string);
end;
TbpMapArgToken = class(TbpToken)
private
public
MapToName : string;
MapSrcType : TbpMapArgType;
Size : integer;
constructor Create(myMapToName : string; myMapSrcType : TbpMapArgType; mySize : integer );
function MapTo( MyDataSet : TDataSet ; var Erra : string ) : boolean;
end;
TbpTokenList = class(TStringlist)
private
{ Private declarations }
public
ListType : TbpTokenListType ;
procedure AddToken( TokenName : string; MyToken : TbpToken );
procedure DelToken( TokenName : string );
function GetTokenStrIfComma( i : integer) : string;
function GetToken( i : integer) : TbpToken;
end;
TbpMapTokenList = class(TbpTokenList)
private
{ Private declarations }
public
{ Public declarations }
Rows : string;
Criteria : string;
MergeWith : string;
Action : string;
Prep : string;
SrcLine : string;
SrcMask : string;
ParseError : string;
SrcIgnored : string;
SrcParseError : tstringlist;
constructor Create;
destructor Destroy; override;
function IsCriteriaMatch( sourceline : string ) : boolean;
function ReLoadSrc( TargetDataSet : TDataSet) : boolean;
function LoadMask : boolean;
function LoadSrc ( TargetDataSet : TDataSet; NoPost : boolean = false ) : boolean;
end;
TbpListTokenList = class(TStringlist)
private
{ Private declarations }
public
{ Public declarations }
SrcLines : TStringList;
SrcMasks : TStringList;
ParseErrors : TStringList;
SrcIgnoreds : TStringList;
constructor Create;
destructor Destroy; override;
function MatchSourceToMaskCriteria( SourceLine : String ) : integer;
procedure ClearMasks ;
function LoadMasks : boolean;
function LoadSrcs(debugflg : boolean; Imid, Imbid : integer; TargetDataSet : TDataSet; SourceLines : TStringList) : boolean;
end;
/////////////////////////////////
//// String Tokenising functions
/////////////////////////////////
// Return a copy of the string bounded by start and end
function bpStrWhatsBetween(strTokenStart, strSource, strTokenEnd : string ): string;
// Returns the index of the closing delimiter matching the nesting level of the
// current start delimiters (or the outermost match if direction is not forward).
// Source string unaffected
function bpNextMatchDelim(bGoForwards : boolean; strTokenStart, strTokenEnd, strSource: string; startpos : integer ): integer;
// Returns the index of the next delimiter from startpos (or the last occurrence
// in string if direction is not forward). Forward search starts from startpos.
// Source string unaffected
function bpNextDelimiter(bGoForwards : boolean; strTokenDelim, strSource : string; startpos : integer ): integer;
// Returns a string without the removechars. Doesn't Change Source
function bpStripCh(removechars : string; targetstr : string ) : string;
// Returns a string token optionally delimited by start and end tokens, according to a variety of rules in Flags. Changes Source
function bpStrStripToken(Num : integer; Flags:integer; strTokenStart, strPunct, strTokenEnd : string; var strSource : string; var bTokenComplete : boolean; NumChar : integer = 0 ) : string;
// Returns the Map type given a string type name
function bpstrtoMapArgToken( strArgType : string ) : TbpMapArgType;
// Trim leading and trailing trimchars
function bpChTrim( LTrimByString, RTrimByString: string; TokenString: string): string;
// Move the sign char at the back to the front
function bpSignToLeft( chSign : string; TokenString: string): string;
// Returns the value part of a Name/Value pair in the string of the form:
// Name="MyValue" or Name.MyValue
function RetrieveValue(strSource,strName,strAssign,strTokenStart,strTokenEnd : string) : string;
// Returns a simple token : "word" | Punctuation
function RetrieveToken(strSource : string; var curpos : integer ) : string;
// Retrieves the entire first line after index <startfrom> in the tstring <strSource>
// that matches the pattern 'CALC.<strinstance> <strsubname>="<strsubinstance>" '
// Returns the entire line string including all attributes - use RetrieveValue to extract the
// component attributes or filter further
// the CALC pattern where " symbol is replaced by <strTokenStart>, <strTokenEnd>
// EG: StartingOffset := 0;
// RetrieveSPattern( FormuliStringList, StartingOffset, 'BASXL', 'PeriodGroup', 'Monthly' );
// If <strsubname> is '' then the match is to the CALC.strinstance tag alone and
// <strsubinstance> is ignored.
// ##JB : Assumes CALC name and subName pair are unique, or queued
function RetrieveSPattern( strSource : TStrings; var StartFrom : integer; strName, strInstance, strSubName, strSubInstance : string) : string;
// Uses RetrieveSPattern to return CALC formula (the value attribute of the tag) or ''
// EG: RetrievePeriodCalcValue( FormuliStringList, 'BASXL', 'Monthly', '"', '"');
function RetrievePeriodCalcValue( strSource : TStrings; strInstance, strSubInstance, strTokenStart,strTokenEnd : string) : string;
//////////////////////////////////////////////////////////////////////////////////////////
/// Language parsing routines
{
In all cases the source is not damaged and (except for RetrieveValue)
the structure of the calling Pascal function is:
<func>(<source string>, Var <curpos> ; Var <return args> ) : <parsed ok>;
where <func> is function name
<source string> is the source string
<curpos> next char position immediately after expression or 1 if at start of line
or -1 if EOLN
<return args> the arguments returned
<parsed ok> a boolean verifying correct syntax
These routines assume a language of expressions containing (examples):
value="POST" or value="POST;"
value="if( NE(a,b), POST, HOLD)"
value="if( NE(a,b), if( NE(a,c), POST, do( =(a,d), POST), HOLD));"
LANGUAGE GRAMMAR:
<statementlist> = [<statement>]...;
<statement> = <postfunc> | <holdfunc> | <iffunc> | <dofunc> | <assignfunc> | <mathfunc>
<arglist> = ( <statement> [,<statement>]... )
<postfunc> = POST // return true
<holdfunc> = HOLD // return true
<dofunc> = DO ( <statement> [, <statement>]... ) // return last statement executed
<iffunc> = IF ( <conditionpart>, <truepart>, <falsepart> ) // return whichever part is executed
<conditionpart> = <logicfunc> ( <logicarg>, <logicarg>) | <logicliteral>
<logicfunc> = EQ | NE | LT | GT | LTE | GTE | OR | AND
<logicliteral> = TRUE | FALSE
<logicarg> = <literal> | <var> | <logicliteral>
<mathfunc> = [+ | - | * | / ]( <statement> [, <statement>]... )
<assignfunc> = =( <lvalue>, <rvalue> )
<lvalue> = <var>
<rvalue> = <logicfunc> | <var> | <literal> | <iffunc>
<literal> = <logicliteral> | <value>
END GRAMMAR.
An example of the use is:
curpos := 1;
myValStr := RetrieveValue( 'Relation.R1 value="if( NE(a,b), if( NE(a,c), POST, do( =( a,b), HOLD ), HOLD))"', 'value', '=', '"', '"');
if EvalStatementList( myValStr, curpos, strStatementList) then
Parse Complete;
EvalLiteral
RetrieveToken( strSource, curpos );
EvalStatementList
while curpos > 0 do
begin
parseok := EvalStatement(myValStr, curpos, strstatement );
if curpos >= length(myValStr) then curpos := -1;
end;
EvalStatement
}
//////////////////////////////////////////////////////////////////////////////////////////
// Returns the statement list of a string of the form (terminates on EOL):
// "statement;statement;"
function RetrieveStatementList( strSource: string; var curpos : integer; var strStatementList : string ) : boolean;
// Returns the statement part of a string of the form (terminates statement part on ; or EOL):
// "function < optionally some other stuff>;"
function RetrieveStatement( strSource: string; var curpos : integer; var strStatement : string) : boolean;
// Returns the Function part of a string of the form (terminates function part on space, ; or EOL):
// "function < optionally some other stuff>;"
function RetrieveFunction( strSource : string; var curpos : integer; var strFunc, strArglist : string ) : boolean;
// Returns true if parsed OK and Decomposes an ifexpression into its component parts of the form:
// "(condition action )? true action : false action"
// "if ( a<>b )? POST : HOLD;"
function RetrieveIfExpr(strSource: string; var strCondition: string; strIfTrue, strIfFalse : string ) : boolean;
// True if ch at curpos is numeric else false
function bpChIsNum( strSource : string; curpos : integer ) : boolean;
// True if ch at curpos is A..z
function bpChIsAlpha( strSource : string; curpos : integer ) : boolean;
// Returns the index of the strSource after chars (spaces) skipped
function bpSkip( strDelims, strSource : string; var curpos : integer ) : integer;
// Returns the reserved word corresponding (or VAR flag or illegal flag ) to the token
function bpWhichReservedWord( strSource : string ) : tbpReservedWord;
///////////////////////////////
//// DATE to String functions
///////////////////////////////
// Return the Number of Days in a month (1..12) - use IsLeapYear [Borland VCL]
// for isleapy test.
function DaysInMonth( monthint : integer; isleapy : boolean ) : integer;
// convert a month num (1..12) to a month string
function MonthToStr( monthint : integer ) : string;
// convert a month string to the closest possible month num (1..12)
function StrToMonth( month : string ) : integer;
//////////////////////////////////////////////////////
//////////////////////////////////////////////////////
/// GENERAL FILE Import/Export Functions - COMMA TEXT
//////////////////////////////////////////////////////
// General Table Import Routine
function ImportATable( TargetTableName: string; TargetTable : TDataSet; FileName : string; DoMemFields : boolean = False ) : boolean;
// General Table Export Routine
function ExportATable( FileName : string ; TargetTable : TDataSet; DoMemFields : boolean = False ) : boolean;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
0f234206c297bd7682eb878ca3210addd9ac0ced
BpcSMScriptLibrary 1
0
453
648
2019-09-11T16:37:03Z
Bishopj
1
Created page with "=BPC String Manipulation Library 1= Language: Delphi 7 - 2007 <pre> ////////////////////////////////////////////////////////////////////////////////////////// //////// Str..."
wikitext
text/x-wiki
=BPC String Manipulation Library 1=
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// String Based Auto Counter Routines
//////////////////////////////////////////////////////////////////////////////////////////
//////// String & StringList Manipulation Routines
//////////////////////////////////////////////////////////////////////////////////////////
//////// TbpcXMLNodeStringList Manipulation Routines
//////////////////////////////////////////////////////////////////////////////////////////
//////// Module, Database & registry Utilities
//////////////////////////////////////////////////////////////////////////////////////////
{$INCLUDE bpcDefs.PAS}
interface
uses HTTPApp, Classes, DBGrids, Types, ShDocVw, bpcStringList, IdHTTP, ADODB, Windows;
type
TbpcStndFilExtTypes = (bpcfetunkown, bpcfetbmp,bpcfetjpg,bpcfetjpeg,bpcfetjpeg2000,bpcfetwmf,bpcfetemf,bpcfetico,bpcfeticon,bpcfetmpg,bpcfetmpeg,bpcfetwmv,bpcfetavi,bpcfetmov,bpcfetmp3,bpcfetmp4,bpcfetdoc,bpcfetrtf,bpcfettxt,bpcfetxls,bpcfetdat,bpcfetbak,bpcfetmdf,bpcfetlog,bpcfettmp);
// Used in bpcMergeMessageAtMarkupTags(...) as the callback function. It takes a single tag string
// and returns a replacement string. Similar to a tstatementproducer call back routine.
TbpcMergeMessageFunc= function ( MessageID : string; myProperties : tstringlist; myGFParam : TObject ) : string of object;
// Used for decoding XML strings into Tstrings and back again
TbpcXMLNodeType=(bpcXMLInlineNode, bpcXMLBlockNode );
TbpcXMLNodeStringList = class (TbpcStringList)
public
TagName : string;
TagType : TbpcXMLNodeType;
Content : string;
// Make an XMLTag object
constructor create(myTagName : string; myTagType : TbpcXMLNodeType ); overload;
// Return the Node as an XML Tag block
function AsXMLTag : string;
end;
//////////////////////////////////////////////////////////////////////////////////////////
//////// String Based Auto Counter Routines
//////////////////////////////////////////////////////////////////////////////////////////
// These routines take a masked string of the form 'QUES###' or 'QUES001' or 'QU##ES##', etc and
// populate it with a counter to make something like 'QUES001' and then through successive calls
// to bpcMaskIncAutoNumber, return the incremented string QUES001...QUES002...QUES003... etc.
// Use:
// This example fills a mask from the end to the front ('QUES###') with a startstring ('1'),
// the first time through filling the extra '#' with '0', and then increments the RowIdString
// with each subsequent call.
//
// for j := fromrow to torow do
// if j=fromrow then
// RowIdString := bpcMaskFillString( Mask, trim(StartString), MaskChar, '0', true)
// else
// RowIdString := bpcMaskIncAutoNumber( RowIdString, Mask, OnlyNums );
//
//
// This example populates a RowIdString with '' initially and takes a mask string ('QUES000'),
// the first time through, and then increments the RowIdString with each subsequent call.
//
// RowIdString := '';
// for j := fromrow to torow do
// RowIdString := bpcMaskIncAutoNumber( RowIdString, Mask, OnlyNums );
// Takes a masked string and returns the incremented version of that string
// Mask pattern uses numbers for numeric increments from right to left (or all chars otherwise)
// Handles 'carry' of alphanumeric autoindex keys
function bpcMaskIncAutoNumber( sQuesIDLastAutoNumber, sQuesIDAutoNumberPattern : string; NumOnly : boolean ) : string;
// Handles 'carry' of alphanumeric autoindex keys
function bpcMaskRippleAutoNumber(sQuesIDLastAutoNumber : string; IndChr : char; NumOnly : boolean ) : string;
// Return a copy of the MaskedSource with mask characters replaced by FillSource characters where characters equal MaskCh.
// Working from the end of the strings to the front if FromEnd is true (else go from the front).
function bpcMaskFillString( MaskedSource, FillSource : string; MaskCh : char; PadCh : char; FromEnd : boolean) : string;
//////////////////////////////////////////////////////////////////////////////////////////
//////// String & StringList Manipulation Routines
//////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////
// Display message if testme is true and return testme
// Use like: if bpcShowOnTrue( Failed, 'Ooops. Error.' ) then exit;
function bpcShowOnTrue( testme : boolean; Msg : string ) : boolean ;
///function bpcStrEvalCondition( TheCondition : string; CheckAValue : TbpcGetParamValFunc; TagObject : TObject ) : boolean;
// Slow name-based copy FromValues to ToValues preserving name-value combinations (and replacing them where needed)
function bpcAssignSLValues(ToValues, FromValues : TStrings ) : TStrings;
// Validate a notional ID field (string). Returns an Error Message or '' if ok. Accepts only Alpha Numeric or Underscore and rejects empty or spaced values
// Strips spaces if StripSpace=true and returns the id in FixedID, else returns the original ID (if ok) or '' if not ok.
// Trims spaces and control chars if TrimMe is true and returns the trimmed ID in FixedID, else as above
// Three error messages can be provided - one general non-alphanumeric message (ErrMsgNA, used if the id contains something other than letters, numbers or underscore),
// and a space or blank error message if the ID contains spaces or is blank.
// If ErrMsgSP is '' then ErrMsgBL is used for ids containing spaces as well as empty ids, and if ErrMsgBL is also '', then ErrMsgNA is used for all errors.
// SO...ErrMsgNA is the only REQUIRED error msg.
function bpcIsValidIdentifier( Id : string; var FixedID : string; TrimMe, StripSpace : boolean; ErrMsgNA : string; ErrMsgBL: string=''; ErrMsgSP : string='' ) : string;
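// Use (illustrative sketch only - not from the library source; only ErrMsgNA is required):
//
// sErr := bpcIsValidIdentifier( ' My ID ', sFixedID, True, True,
//                               'ID may contain only letters, numbers or underscore' );
// if sErr = '' then
//   the id is valid and sFixedID holds the trimmed, space-stripped form (eg. 'MyID')
// else
//   sErr holds the applicable error message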
// Pattern match the target string in the lhs_target to the pattern in the rhs using Soundex phonetic comparison
// (if UseSoundEx = true - [the default]), or string comparison (if false). Default SoundexLength is 4.
// Returns true if the lhs string matches the rhs pattern, else false.
// A pattern may use * (match 0 or many words) or ? (match one word). Spaces,',',(,),.,;,:,!,?,* are ignored in the target string.
// Matching is case insensitive.
// Examples: "I am a good bunny who eats carrots." Matches "* carrots" and "I * bunny ? eats *"
function bpcStrPatternMatch( lhs_target, rhs_pattern : string; var LastOkIndex : integer; UseSoundEx : boolean=True; SoundexLength : integer=4 ) : boolean;
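// Use (illustrative sketch only - based on the example in the comment above):
//
// LastOkIndex := 0;
// if bpcStrPatternMatch( 'I am a good bunny who eats carrots.', '* carrots', LastOkIndex ) then
//   matched, using the default Soundex comparison (SoundexLength=4)
// if bpcStrPatternMatch( 'I am a good bunny who eats carrots.', 'I * bunny ? eats *', LastOkIndex, False ) then
//   matched, using plain (non-Soundex) case-insensitive comparison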
// A very fast comparison routine. Compares ASubText to AText starting at i (and buffer safe), optionally ignoring
// case.
function bpcFastIndexedAnsiStrSame( ASubText, AText : string; i : integer=1; ignorecase : boolean=true ) : boolean;
// Compare substr to buferstr starting at i optionally ignoring the case and using *? to pattern match.
// NOTE: Pattern matching ALWAYS ignores case (sorry!).
function bpcStrMatches( substr, bufferstr : string; i : integer=1; ignorecase : boolean=true; usewildcard: boolean=false ) : boolean;
// Return true if the value is empty or contains a space
function bpcIsValueNilOrSpaced( value : string='' ): boolean;
// Return true if the value is empty
function bpcIsValueNil( value : string='' ): boolean;
// Switches a dbgrid between row selecting and cell selecting mode
procedure bpcGridRowSelectSwitch( var MyDBGrid : TDBGrid; RowSelect : boolean ) ;
// Find the index [0..(count-1)] of the element in strArray matching targstr or -1
function bpcWSIndexOfList( targstr : string; strArray : array of string ) : integer;
// Find the string at the index [0..(count-1)] of the element in strArray or ''
function bpcWSStringAtIndexOfList( targind : integer; strArray : array of string ) : string;
// Use with bpcWSIndexOfList to get the string found at an index in an array of string strArray matching targstr or '' if -1
function bpcStringAtWSIndexOfList( i : integer; strArray : array of string ) : string;
// Classic string 'explode' routine using a substring (psubstr) as the trigger to explode
// S into a string array. Strips any characters in the psubstr and allocates a dynamic string array of the
// necessary size. Returns nil if S=''. If btrimstrings=true then every token is stored in its trimmed form
// and spaces break a word only once, regardless of how many there are.
function bpcStrExplode ( psubstr, S : string; btrimstrings : boolean=false ) : TStringDynArray; overload;
// A Classic string 'explode' routine using splitting on forward scanned pairs of startsubstring (pstartsubstr), endsubstring
// (pendsubstr) as the trigger to explode the string, trimming spaces if btrimstrings=true (FALSE is the default).
// Does not handle nested pairs. If (exclusive=true) only those strings enclosed by the markers are included in
// the array. So, if (exclusive=true): 'a cat (of the female variety) would eat only fish (or another).'
// exploded on '(' & ')' and trimmed would become [of the female variety][or another]
// If (exclusive=FALSE) - the DEFAULT - 'a cat (of the female variety) would eat only fish (or another).'
// exploded on '(' & ')' and trimmed, would become [a cat][of the female variety][would eat only fish][or another][.]
function bpcStrExplode ( pStartsubstr, pEndsubstr, S : string; btrimstrings : boolean=false; exclusive : boolean=false ) : TStringDynArray; overload;
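// Use (illustrative sketch only - expected results inferred from the comments above):
//
// myArray := bpcStrExplode( ',', 'red, green ,blue', true );
//   gives [red][green][blue] - each token trimmed, the ',' trigger stripped
// myArray := bpcStrExplode( '(', ')', 'a cat (of the female variety) would eat only fish (or another).', true, true );
//   gives [of the female variety][or another] - exclusive=true keeps only the bracketed spans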
// Return an array of strings as a single string separated by 'seperater'
function bpcAsString( dstringarray : array of string; seperater : string = '') : string; overload;
// Return a stringlist as a single string separated by 'seperater'
function bpcAsString( dstringarray : TStrings; seperater : string = '') : string; overload;
// Return the index of the first char after index that is not a 'seperater'
function bpcSkipPos( targstr : string; index : integer; seperaters : string = ' ') : integer;
// Pos - only backwards
function bpcBackPos( substr, targstr : string) : integer;
// Return true if the arg string represents an integer or real
function bpcArgIsNum(lhs : string ) : boolean;
//Return the substring between the start and stop characters
function bpcStrBetweenCh(const S: AnsiString; const Start, Stop: AnsiChar): AnsiString;
// Return the value of the form_var matching FieldName or '' in the query and form blocks of a web-request object
function bpcWSWebFormField( Request: TWebRequest; FieldName : string ) : string;
// Return the value of the form_var matching FieldName or '' in the query and form blocks of a web-request object for MULTI RESPONSE FIELDS
// like select - multiple, BUT if the query line has the value, form fields are NOT examined. This allows query jump lines to override
// form fields - (hence the SQ in the name - single query).
// Returns the FIRST value found in the result string and all values in the populated TStrings (which must exist, and is cleared prior to use).
function bpcWSWebFormFieldMultiSQ( Request: TWebRequest; FieldName : string; MultiFieldValues : TStrings ) : string;
// Find the char index in strtemp (starting at index) of the first member of strlist to appear, else 0; and the token matched and its length.
// WARNING: CASE SENSITIVE MATCH, IGNORES ENCLOSING BRACKETS, QUOTES, etc - if case insensitivity, quote/bracket based string exclusion
// or the index of the found token is required use bpcWSCmdNestedPosListIC instead.
// Note: If EOSIsEOT is True, the end of the strTemp causes length of Strtemp + 1 to be returned, with tokstr=''.
// If you want 0 returned at the end of the string, set EOSIsEOT to False.
function bpcWSCmdPosList( strlist : array of string; EOSIsEOT : boolean; const strTemp : string; index : integer; var tokstr : string; var toklen : integer) : integer;
// True if TargStr matches inXML at index i
function bpcCursorMatch( inXML : string; i : integer; TargStr : string ) : boolean;
// Find the char index in strtemp (starting at index) of the first member of strlist to appear, else 0; and the token matched and its length.
// WARNING: CASE SENSITIVE MATCH - if case insensitivity, or the index of the found token is required use bpcWSCmdNestedPosListIC instead.
// 1. Handle nesting (by ignoring matching strings found within a nest) if Nesting is true, from an arbitrary nesting starting point (0 means
// we are at the outer nest level - this allows us to start from within a nest). Nests are defined by pairs of '' or "" or <> or (), etc presented
// as a string of single chars like ' or " (where the start and end are the same) or an array of strings containing a pair of chars
// like [] or () or {}, etc. The nesting algorithm tracks both the count and the matching of the nest pairs.
// Note: If EOSIsEOT is True, the end of the strTemp causes length of Strtemp + 1 to be returned, with tokstr=''.
// If you want 0 returned at the end of the string, set EOSIsEOT to False.
function bpcWSCmdNestedPosList( Nesting: boolean; nestlevel : integer; strlist : array of string; nestpairlist : array of string; nestsinglelist : string; EOSIsEOT : boolean; const strTemp : string; index : integer; var tokstr : string; var toklen : integer) : integer;
// Find the char index in strtemp (starting at index) of the first member of strlist to appear, else 0; and the token matched, its index and its length.
// 1. Handle nesting (by ignoring matching strings found within a nest) if Nesting is true, from an arbitrary nesting starting point (0 means
// we are at the outer nest level - this allows us to start from within a nest). Nests are defined by pairs of '' or "" or <> or (), etc presented
// as a string of single chars like ' or " (where the start and end are the same) or an array of strings containing a pair of chars
// like [] or () or {}, etc. The nesting algorithm tracks both the count and the matching of the nest pairs.
// 2. Ignore case in matching if IgnoreCase is true, else be case sensitive.
// Note: If EOSIsEOT is True, the end of the strTemp causes length of Strtemp + 1 to be returned, with tokstr=''.
// If you want 0 returned at the end of the string, set EOSIsEOT to False.
function bpcWSCmdNestedPosListIC( IgnoreCase, Nesting: boolean; nestlevel : integer; strlist : array of string; nestpairlist : array of string; nestsinglelist : string; EOSIsEOT : boolean; const strTemp : string; index : integer; var tokstr : string; var tokindex : integer; var toklen : integer) : integer;
// Skip leading spaces and return a trimmed token (as terminated by a member of delimiterlist). Also return the index of the token end (delimiter) in index, and the delimiter in delimiterstr
function bpcWSCmdGetToken(const strTemp : string; delimiterlist : array of string; Var index : integer; var delimiterstr : string ) : string;
// Skip leading spaces and return a trimmed token (as terminated by a member of delimiterlist). Also return the index of the token end (delimiter) in index, and the delimiter in delimiterstr
function bpcWSCmdGetNestedToken(const strTemp : string; nestinglevel : integer; delimiterlist : array of string; Var index : integer; var delimiterstr : string ) : string;
// Exactly like TStringList CommaText, except that it doesn't strip "", doesn't break on ' ' (unless these are also
// delimiterlist members) and knows about brackets ()[]{} - preserving nesting in these and " or '.
// An example call is:
// mystringlist := bpcNestingCommaText( 'my string,( is, [ wild ] ), and; "free"', [',',';'], TStringlist.Create);
// Which will return mystringlist with:
// my string
// ( is, [ wild ] )
// and
// "free"
//
// Uses bpcWSCmdGetNestedToken, and returns a cleared tstrings with each token on a separate line, trimmed of leading
// and trailing white space and stripped of delimiters in the delimiterlist.
function bpcNestingCommaText(const sourcestr : string; delimiterlist : array of string; myStringList : TStrings) : TStrings;
// Returns the LabelList (after first clearing it) with the sLine broken into trimmed tokens (treats punctuation as a token), kills all white space.
function bpcAsTokenList( sLine: string; LabelList : TStringList; DelimiterList: array of string; GimmeSpace : boolean=false; KeepCase : boolean=false) : TStringList;
// Reads a single CSV record and splits it into strings in a stringlist. It behaves exactly like TStringList CommaText,
// except that it reads from a stream and allows either "" or '' as a field designator, allowing the other quote to
// appear inside a quoted string, and does not break on ' ' nor CR or LF. If CRLF appears as field end it is treated as
// a line break, which terminates the CSV record. If a quote appears other than at the start of a field it is treated like a normal
// character. If trimddq is true, doublequotes can be nested by doubling them inside an outer single double-quote set - a single doublequote
// will be returned.
function bpcCSVTABText( source : TStream; trimspace : boolean; delimiterlist : array of string; myStringList : TStringList) : TStringList;
function bpcCSVCommaText( source : TStream; trimspace : boolean; myStringList : TStringList) : TStringList; overload;
function bpcCSVCommaText( source : TStream; trimspace : boolean; delimiterlist : array of string; myStringList : TStringList) : TStringList; overload;
function bpcCSVDelimText( source : TStream; trimspace : boolean; delimiter : char; myStringList : TStringList) : TStringList; overload;
function bpcCSVDelimText( source : TStream; trimspace : boolean; trimddq : boolean; delimiter : char; myStringList : TStringList) : TStringList; overload;
function bpcCSVCommaText( source : TStream; trimspace : boolean; trimddq : boolean; delimiterlist : array of string; myStringList : TStringList) : TStringList; overload;
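// Use (illustrative sketch only - assumes the CSV record is wrapped in a TStringStream):
//
// myStream := TStringStream.Create( '"Smith, John",42,''single quoted''' );
// try
//   bpcCSVCommaText( myStream, True, myList );
//   myList should hold three fields: [Smith, John][42][single quoted]
// finally
//   myStream.Free;
// end;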
// Return a trimmed complex token (everything from index to the end). Also return the index of the token end (strTemp length + 1) in index
function bpcWSCmdGetAfterToken(const strTemp : string; Var index : integer ) : string;
// Strip the outer brackets off an already trimmed string, or the string if no brackets.
// Doesn't skip white space, accepts half brackets (ie no closing bracket), else returns what it gets.
function bpcWSCmdGetBracketedExpression(const strTemp : string; delimiterlist : array of string; Var tokenindex : integer ) : string;
// Return a valid Encoded QRL (Question Resource Locator) comprising the merger of vSID.vQID:vRID with sActionArg (optionally with or without the ':' rule designator as flagged by 'AsRule')
// Senses 'sss', 'sss.qqq', '.', '.qqq', '.[sqg]', ':rrr', 'sss.qqq:rrr' to give 'sss.qqq:rrr' or 'sss.qqq'
function bpcFixJumpTarget(AsRule: boolean; vSID, vQID : string; vRID : integer; sActionArg : string) : string;
// Loads a list of QRL jump targets (after 'fixing' each - see bpcFixJumpTarget) into NextPageList (and simultaneously returns a ',' separated list)
function bpcLoadJumpTargets(AsRule: boolean; vSID, vQID : string; vRID : integer; BracketedExpression : string; var tokenindex : integer; var NextPageList : TStringList; NonQRLList : boolean = False ) : string;
// Explode a valid QRL held in jumpArgs into jumpSID, jumpQID, jumpRID and true/false on success or syntax failure
function bpcExplodeJumpTarget(jumpArgs: string; var jumpSID, jumpQID : string; var jumpRID: integer ) : boolean;
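// Use (illustrative sketch only - not from the library source):
//
// if bpcExplodeJumpTarget( 'sss.qqq:3', jumpSID, jumpQID, jumpRID ) then
//   jumpSID='sss', jumpQID='qqq', jumpRID=3
// else
//   the QRL had a syntax error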
// Converts a string to a vararray
function bpcStringToPostData(const Value: string): OleVariant;
// Load a browser from a stream
{
function bpcLoadBrowserFromStream(myWebBrowser : TWebBrowser; const myStream: TStream): HRESULT;
}
// Post (or Get) a browser to sURL with stPostData as the content of the http message (nil content causes a get to be used)
// Content may be binary. Call forces refresh (ie. no read from cache)
procedure bpcWBNavigateNoCache(stURL : String; stPostData:TByteDynArray; var wbWebBrowser: TWebBrowser);
// Post (or Get) an Indy HTTP component to sURL with stPostData as the content of the http message (nil content causes a get to be used)
// Content may be binary. Call forces refresh (ie. no read from cache) AND returns the response.
function bpcIndyNavigateNoCache(stURL : String; stPostData: TStream; var wbWebBrowser: TIdHTTP) : string;
// Strip all HTML tags from a string
function bpcStripHTML(S: string): string;
// Return a string with all '<','>','&' contained in inward encoded as '&xx;' sub-strings.
function bpcHtmlEncode( inward : string ) : string;
// Decodes an HTML encoded string
function bpcHtmlDecode( inward : string ) : string;
// Replace all single quotes with double quotes to prevent injection attacks
function bpcSQLSafeQuote( inward : string ) : string;
// Replaces all tags of the form StartMUTag and EndMUTag (eg. [#...#] ) with <#...>. Useful if exchanging tagged HTML with word,
// since word and some other HTML editors are confused by the <#...> combination. This reverses the action of the bpcReplaceSMTagsWithMarkupTags
// routine.
function bpcReplaceMarkupTagsWithSMTags(myBuffer, StartMUTag, EndMUTag : string) : string; overload;
// As for bpcReplaceMarkupTagsWithSMTags(myBuffer, StartMUTag, EndMUTag : string) but assumes StartMUTag and EndMUTag are '[#' and '#]'
function bpcReplaceMarkupTagsWithSMTags(myBuffer : string) : string; overload;
// Replaces all tags of the form <#...> with StartMUTag and EndMUTag (eg. [#...#] ). Useful if exchanging tagged HTML with word,
// since word and some other HTML editors are confused by the <#...> combination. This reverses the action of the bpcReplaceMarkupTagsWithSMTags
// routine.
function bpcReplaceSMTagsWithMarkupTags(myBuffer, StartMUTag, EndMUTag : string) : string; overload;
// As for bpcReplaceSMTagsWithMarkupTags(myBuffer, StartMUTag, EndMUTag : string) but assumes StartMUTag and EndMUTag are '[#' and '#]'
function bpcReplaceSMTagsWithMarkupTags(myBuffer : string) : string; overload;
// Replaces all tags of the form 'start_string' ... 'end_string' defined by StartMUTag and EndMUTag (eg. [#...#] or <#...>) with the string returned by
// calling the user provided function 'GetMessageFor'. Performs a simple version of the tstatementproducer service.
function bpcMergeMessageAtMarkupTags(myBuffer, StartMUTag, EndMUTag : string; GetMessageFor : TbpcMergeMessageFunc; myProperties : tstringlist; myGFParam : TObject ) : string;
// Calls user provided function GetMessageFor with each tag of the form 'start_string' ... 'end_string' defined by StartMUTag and EndMUTag (eg. [#...#] or <#...>).
// Performs a simple version of the tstatementproducer service, but doesn't update the calling string. Typical use would be to cause a sequence of things to happen based on
// a source string containing markup tags.
function bpcCallMessageAtMarkupTags(myBuffer, StartMUTag, EndMUTag : string; GetMessageFor : TbpcMergeMessageFunc; myProperties : tstringlist; myGFParam : TObject ) : string;
//////////////////////////////////////////////////////////////////////////////////////////
//////// Other String Conversion Routines
//////////////////////////////////////////////////////////////////////////////////////////
// Map a filename extension to its TbpcStndFilExtTypes value (bpcfetunkown if not known), in the following order:
// Note: Accepts strings with or without a leading period
// '.bmp','.jpg','.jpeg','.jpeg2000','.wmf','.emf','.ico','.icon'
function bpcMapExtToImageType( myext : string ) : TbpcStndFilExtTypes;
//////////////////////////////////////////////////////////////////////////////////////////
//////// TbpcXMLNodeStringList Manipulation Routines
//////////////////////////////////////////////////////////////////////////////////////////
// This group of routines carves up XML inline and block tags into a TbpcXMLNodeStringList
// which is essentially a TbpcStringList (ie. an TStringList with extra value and
// string array routines) that contains a Content string (for block tags) and a TagType
// to distinguish block and inline tags (TbpcXMLNodeType of either bpcXMLInlineNode or bpcXMLBlockNode).
// The stringlist contains name value pairs for
// TagName (the XML tag id),
// TagType (either 'inline' or 'block')
// Any other attribute found in the open tag string
// Attribute strings with outer quotes are NOT stripped
// Attributes are terminated by space or / or > unless quoted, in which case quotes terminate
// Quote-in-quote is correctly handled.
// THESE ROUTINES MUST WORK TOGETHER - the i param is left in the correct
// location after the bpcGetXMLOpenTag for bpcGetXMLContent and then for bpcGetXMLCloseTag.
// Return a full XML tag object from a single XML object
// By Default the routine will expand/decode the HTML encoded content - if you want to preserve the encoding of
// < > ' " etc then set NoHTMLDecode=true.
// This is the main routine eg:
// i := 1;
// resultObj := bpcGetXMLTagObject('<test attrib1="mystring" attrib2=23 />',i );
// if resultObj.TagType=bpcXMLInlineNode then ...
function bpcGetXMLTagObject( inXML : string; var i : integer; NoHTMLDecode : boolean=False; SenseQuotes : boolean=false ) : TbpcXMLNodeStringList;
// Return a XML tag object attribute list from an open XML tag
function bpcGetXMLAttributeList( inTagStr : string ) : TbpcXMLNodeStringList;
// Return an XML token as a single string. A token is either a quoted string or an
// alphanum terminated by a space or '/' or '>', taken from an open XML tag
function bpcGetXMLToken( inXML : string; var index : integer ) : string;
// Return everything inside the closing tag of a block. If SenseQuotes is true, tags inside quotation marks are
// ignored. The algorithm assumes that quotes never appear after alphanums. A quote appearing after an
// alphanum is assumed to be a single orphaned quote, as in: isn't he nice. In that sentence the single
// quote will not confuse the algorithm and will be ignored, but in: I "am the light </name >" I said. the
// double quote would be assumed to be the start of the string and </name > would not be seen as the closing tag for
// <name ></name >.
function bpcGetXMLCloseTag( inXML : string; var index : integer; SenseQuotes : boolean=false ) : string;
// Return everything between the opening and closing tag of a block
function bpcGetXMLContent( inXML : string; var index : integer; endtagname : string; SenseQuotes : boolean=false ) : string;
// Return everything inside the opening tag.
function bpcGetXMLOpenTag( inXML : string; var index : integer; SenseQuotes : boolean=false ) : string;
// Return the starting index of a target tag block, OR i > length( inXML ) if failed.
function bpcSeekXMLTagStart( inXML : string; i : integer; TargStr : string ) : integer;
///////////////////////////////////////////////////////
// These routines do useful database & registry things
///////////////////////////////////////////////////////
// Return a stringlist containing key field names for ADO-XML table schemas
// Relies on the MSXML ADO Schema Definition created when a Db table is saved to file
// from an ADO dataset.
function bpcXMLADOKeyFieldList( XMLStr : string; KeyFields : TStrings ) : TStrings;
// Sets a SMLibrary Key, to a value assuming a SM Style registry structure.
// Eg: bpcSetTheSMLibraryToTheKeyValue('BPCSurveyManager1', 'Localisation', 'DEV1' );
procedure bpcSetTheSMLibraryToTheKeyValue(sDBIPath, sKey, sValue : string);
function bpcGetTheSMLibraryForTheKeyValue(sDBIPath, sKey : string) : string;
// Returns a subpath with a closing '\' - guaranteed, or '' if subpath is ''
function bpcAsRegistrySubPath( subpath : String ) : string;
// Returns a subpath with a closing '/' - guaranteed, or '' if subpath is ''
function bpcAsURLSubPath( subpath : String ) : string;
// Create a key path for SM keys (i.e. do not include the sRegKey here - just the sRegSubPath).
// E.g. bpcCreateKeySubPathForSMLibrary('BPCSurveyManager1')
procedure bpcCreateKeySubPathForSMLibrary( sRegSubPath : string ) ;
// Get all keys from a key path (sRegSubPath) for the BPC registry offset.
procedure bpcListSubPathsForSMLibrary( sRegSubPath : string; const Key: string; const List: TStrings ) ;
// Get all value keys from a key path for the BPC registry offset.
procedure bpcListValueKeysForSMLibrary( sRegSubPath : string; const List: TStrings ) ;
// Read an entry from the registry representing an ADO connection string and assign it to an ADO connection and open it.
// Returns non-zero iErrorVal if the connection cannot be successfully established, else zero. The error is in sErrorMsg.
// Example call:
// iErrorVal := bpcDBOpenConnectionFromRegistry(MyDataMod.ADOConnection1, HKEY_LOCAL_MACHINE, 'SOFTWARE\BishopPhillips\BPCSurveyManager1', 'DBConnectString', sErrorMsg);
function bpcDBOpenConnectionFromRegistry(myADOConnection : TADOConnection; RootKey : HKEY; sRegPath, sDBRegConnectName : string; var sErrorMsg : string) : integer;
// Opens a database using bpcDBOpenConnectionFromRegistry, but assumes a SM Style registry structure.
function bpcDBOpenSMConnectionFromRegistry(myADOConnection : TADOConnection; sDBIPath : string; var sErrorMsg : string) : integer;
// Sets a database connection string to the SMLibrary Key, assuming a SM Style registry structure.
procedure bpcSetTheSMLibraryToTheDBConnection(sDBIPath, sDBConnect : string);
// This routine assumes a Registry key (sDBIPath + 'DBConnectString') containing a DB connection string of which sDBI
// is the initial catalogue (database). If the supplied sDBI does not match that in the registry database, or
// the registry entry does not exist, the routine returns False, else True.
// Example Use: Use this routine with a module name (like a dll) and a matching registry key path with the expected database;
// if the latter differs from the expected database, the database must be reset & reloaded (or the key changed).
// Eg. If the expected database is 'SurveyDB' and the calling dll (from bpcGetDLLModuleName) is bpcSurveyManager1.dll, and
// the registry is set up with a DBConnectString held in "SOFTWARE\BishopPhillips\bpcSurveyManager1 [DBConnectString]", then
// bpcDoesDBImatchSMLibrary( 'SurveyDB', 'bpcSurveyManager1') would return True if SurveyDB was the correct db for this module.
function bpcDoesDBIMatchSMLibrary( sDBI, sDBIPath : string ) : boolean;
// Exactly like bpcDoesDBIMatchSMLibrary, but uses the entire key value.
// A typical use of this might be to force the reload of a configuration file in a dll that has an arbitrary life
function bpcDoesKeyMatchSMLibrary( sExpectedValue, sRegSubPath, sRegKey : string ) : boolean;
// Used in conjunction with bpcDoesKeyMatchSMLibrary to get the current value of the registry key
function bpcGetCurrentKeyForSMLibrary( sRegSubPath, sRegKey : string ) : string;
// Get the current value of the registry key with sDefVal as default on nil value in registry
function bpcGetCurrentKeyForModuleWithDefault( sRegSubPath, sRegKey, sDefVal : string ) : string;
// Merge a list of name=value pairs (SectAsValues) with the keys in sRegSubPath. Create sRegSubPath if required.
procedure bpcMapStringsToRegistry(sRegSubPath : string; SectAsValues : tstrings);
// Merge a list of name=value pairs defined in SectAsList from the keys in sRegSubPath and store them in SectAsValues.
// Create SectasValue entries as required.
function bpcMapStringsFromRegistry( sRegSubPath : string; SectAsList : Tstrings; SectAsValues : TStrings=nil ) : tstrings;
// Extract and return the database name from an ADO SQL Server Connection String
function bpcGetDBNameFromMSSQLConnectionString( sConnectionString : string ) : string;
// Set/replace the database name in an ADO SQL Server Connection String
function bpcSetDatabaseInMSSQLConnectionString( sConnectionString, sDataBaseName : string ) : string;
// Set/replace the user and password in an ADO SQL Server Connection String with a correctly formed user and password
function bpcSetUserInMSSQLConnectionString( sConnectionString, sUserName, sPassword : string ) : string;
// Set/replace the source name in an ADO SQL Server Connection String with a correctly formed source name comprising server/instance
function bpcSetDataSourceInMSSQLConnectionString( sConnectionString, sServer, sInstance : string ) : string;
// Replace the source name (server/instance ) in an ADO SQL Server Connection String
function bpcReplaceDataSourceInMSSQLConnectionString( sConnectionString, NewDataSource : string ) : string;
// Replace an arbitrary parameter in an ADO SQL Server Connection String
function bpcReplaceAParamInAnMSSQLConnectionString( sConnectionString, TargParam, NewArg : string ) : string;
////////////////////////////////////////////////////////////////
// like "Application.ExeName", but in a DLL you get the name of
// the DLL instead of the application name
function bpcGetDLLModuleName: String;
////////////////////////////////////////////////////////////////
// like "Application.ExeName", but in a DLL you get the drive:path of
// the DLL instead of the application name and it is guaranteed to end
// with a separator
function bpcGetDLLModulePath: String;
////////////////////////////////////////////////////////////////
// like "Application.ExeName", but in a DLL you get the name of
// the DLL instead of the application name and prefixed with a subpath
// from sProdFamily : sProdFamily\DLLModuleName
function bpcGetProdFamilyDLLModuleName( sProdFamily : String ) : String;
////////////////////////////////////////////////////////////////
// like "Application.ExeName", but in a DLL you get the drive:path of
// the DLL instead of the sProdFamily\DLLModuleName name and it is guaranteed to end
// with a separator
function bpcGetProdFamilyDLLModulePath( sProdFamily : String ): String;
// Various prodfamily prefixed bpc dll modulename routines for registry access
function bpcGetSM1_DLLModuleName: String;
function bpcGetRM_DLLModuleName: String;
function bpcGetGM1_DLLModuleName: String;
const
CNST_SM1ProdFamily : string = 'BPCSurveyManager1\' ;
CNST_RMProdFamily : string = 'BPCRiskManager\' ;
CNST_GMProdFamily : string = 'BPCGovManager1\' ;
</pre>
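As a rough illustration of the quote-sensing rule described above for bpcGetXMLCloseTag (a quote immediately after an alphanumeric is treated as an orphan, e.g. the apostrophe in "isn't", while other quotes open a region in which tags are ignored), here is a minimal Python sketch. It is not the library's Delphi implementation; the function name and return convention are invented for the example.

```python
# Hypothetical sketch of the SenseQuotes scan used when locating a closing tag.
# A quote only opens a quoted region when it does NOT follow an alphanumeric
# character; close tags found inside a quoted region are ignored.
def find_close_tag(xml: str, tag: str, sense_quotes: bool = True) -> int:
    """Return the index of '</tag' outside any quoted region, or -1."""
    close = "</" + tag
    in_quote = None                       # the active quote character, or None
    i = 0
    while i < len(xml):
        ch = xml[i]
        if sense_quotes and ch in ("'", '"'):
            prev = xml[i - 1] if i > 0 else ""
            if in_quote == ch:
                in_quote = None           # this quote closes the region
            elif in_quote is None and not prev.isalnum():
                in_quote = ch             # this quote opens a region
            # a quote after an alphanumeric is an orphan: ignored
        elif in_quote is None and xml.startswith(close, i):
            return i
        i += 1
    return -1
```

With this rule, the </name > inside I "am the light </name >" I said. is skipped, while the apostrophe in isn't does not start a quoted region.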
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
a94989a84e340450f397bd2d972042c41341fa20
BpcSMScriptLibrary 2
0
454
649
2019-09-11T16:38:18Z
Bishopj
1
Created page with "=bpcXML Language Version 1= Language: Delphi 7 - 2007 ==Interpreter Routines== <pre> uses Classes, DB, XMLIntf, XMLDoc, DBGrids, Grids, bpcDBBookMarkList; // These routi..."
wikitext
text/x-wiki
=bpcXML Language Version 1=
Language: Delphi 7 - 2007
==Interpreter Routines==
<pre>
uses Classes, DB, XMLIntf, XMLDoc, DBGrids, Grids, bpcDBBookMarkList;
// These routines support the bpcXML Language Version 1 (bpcXML-1) which is a
// distributed database update language. The routines herein assemble and disassemble
// bpcXML-1 language elements.
// Refer to the language definition comment at the end of the implementation section
Type
TbpcXMLFType=(xrsfVAR, xrsfSTR, xrsfQSTR);
TbpcValidfunc = function ( sOID, sPID : string ) : string of object;
TbpcsmDSMapperfunc = function ( const sDataSetName : string ) : string of object;
// This class used for holding dataset.locate routine indexes. The locate routine
// expects a field list of the form 'field1;field2;field3' and a variant array of
// corresponding values. This class provides a place to store both the field list
// and the variant array (as well as the original index string).
TbpcLocateIndexList= class (TObject )
private
myIndexes : string;
protected
function GetLocators : string;
procedure SetLocators( const destindexby : string);
function GetSQLCondition : string;
procedure SetSQLCondition( const sqlstr : string);
public
findexlist : TStringList;
locarray : variant;
count : integer;
// Constructs an index by translating the destindexby string of the form 'field1;field2;field3'
// into a stringlist of the field names, and a variant array of the same length
constructor create( const destindexby : string ) ;
destructor destroy; override;
// Clear the index list
procedure Clear;
// Sets each value in the vararray to 'unassigned' and, if count=0, the array itself
procedure ClearVarArray;
// Set or Read the index list as a string
property AsString : string read GetLocators write SetLocators;
// Set or Read the index list as an SQL conditional 'and' expression of the form
// "( field1=value1 ) and ( field2='text value2' )" where fields are in findex and the
// values are in the vararray. Works with a TbpcXMLFType of xrsfQSTR
property AsSQLCondition : string read GetSQLCondition write SetSQLCondition;
end;
// Returns a XML block START tag with attributes
function smXMLStartTag( sTag, sAttributes : string ) : string;
// Returns a XML block END tag
function smXMLEndTag( sTag : string ) : string;
// Returns a XML block tagged object
function smXMLDualTag( sTag, sContains : string ) : string;
// Returns a bpcXML-1 XML 'Authority' packet
function smXMLAuthorityPacket( sOID, sPID : string; sUpMap : string='' ) : string; overload;
function smXMLAuthorityPacket( sOID, sPID : string; sUpOrgMap, sUpSIDMap: string ) : string; overload;
// Returns a bpcXML-1 XML 'Action' packet
function smXMLActionPacket( sAction: string; source : string= ''; dest : string = '' ) : string;
// Returns a bpcXML-1 XML 'Message' packet
function smXMLMessagePacket( sMessage : string ) : string;
// Returns a bpcXML-1 XML 'Data' packet
function smXMLDataPacket( sMessages : string ) : string;
// Returns one or more bpcXML-1 XML 'Message' packets depending on bAsSingleMessage
// If bAsSingleMessage then a datasource referenced more than once in the action list
// will appear ONLY ONCE in the message, otherwise each message gets its own copy of the
// data source.
function smXMLMessagesFromActionsPacket( sActions : TStringList; bAsSingleMessage : boolean=False; FieldRefList : TbpcLocateIndexList=nil; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil; FieldMapList : string=''; SubFieldMapList : string='' ) : string;
// Returns a bpcXML-1 XML 'smXMLPacket' block packet containing sContains
function smXMLPacket( sContains : string ) : string;
// Returns a bpcXML-1 XML 'smXMLPacket' block packet with an authorisation request
// for organisation sOID and person sPID, and list of actions and their datasets in
// a stringlist sActions. The datasets are held on the object pointer of the string entry
// Each action string should contain all the desired attributes of the action
// except the datasource, which will be taken from the dataset object.
// Refer to the language definition comment at the end of the implementation section
// Example call:
// XMLPacketList.AddObject('update indexby="OrgID;SID"', WebSurvMaintDM.ADOTable1);
// XMLPacket:=smXMLPacketCreate( trim(OIDEdit.Text), trim(PIDEdit.Text), XMLPacketList);
function smXMLPacketCreate( sOID, sPID : string; sActions : TStringList; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil; sUpMap : string='' ) : string; overload;
function smXMLPacketCreate( sOID, sPID : string; sActions : TStringList; BookMarks : TBookmarklist; bpcBookMarks : TbpcBookmarkList; sUpMap, sMapSID : string ) : string; overload;
// As above but also does full field substitution and partial field substitution to allow effective copying of a group of tables from one group of keys to another.
// The routine allows for full field and part field string replacement. Usually used on key fields to enable storing of a set of tables in a key independent manner, and subsequent
// pasting into a table with new keys/strings for the mapped ones.
// Example field map lists are of the form:
// sMapArgs := 'OrgID=@@OID$OID@@,SID=@@SID$SID@@,ssqOrgID=@@OID$OID@@';
// sMapPartArgs := 'QID=ACFE1999:@@SID$SID@@,ssqQID=ACFE1999:@@SID$SID@@';
// In sMapPartArgs the QID field content will have the first sub-string matching 'ACFE1999' replaced with @@SID$SID@@, commas separate the fields, while the ':' separates the target
// substring from its replacement.
// The entire matching field name will be replaced in sMapArgs.
// Generally the sUpMap will be the same as the OrgID in sMapArgs, and the sMapSID will be the same as the SID - @@OID$OID@@ and @@SID$SID@@ in the above example
// These strings can then be replaced for the real orgid and sid in a subsequent string replacement operation before being sent to the database.
// While the examples are about OrgID and SID, any fields can be mapped, and if OID and SID are not involved in the maps then the real OID and SID should be used.
function smXMLPacketCreate2( sOID, sPID : string; sActions : TStringList; BookMarks : TBookmarklist; bpcBookMarks : TbpcBookmarkList; sUpMap, sMapSID, sMapArgs, sMapPartArgs : string ) : string;
// As above, but assumes the contents of the data segment has been built
function smXMLPacketCreate( sOID, sPID : string; sDataContent : string; sUpMap : string='' ) : string; overload;
// Get the data portion of the smXMLpacket, or if no data tags return ''.
// Essentially unwraps the data block from the smXMLpacket - seeks, so a buried data block will confuse the routine.
// Allows:
// smxmlpacket.authority.data../data./smxmlpacket
// smxmlpacket.data../data./smxmlpacket
// data...data
// any other arbitrary content.
function smXMLPacketDataExtract( smXMLPacketOrDataPacket : string; NoHTMLDecode : boolean=false ) : string;
// Join two bpcXML-1 XML 'smXMLPacket's together at the data tag level, treating the smXMLAuthorityMaster as the first (containing the authority node),
// and the smXMLChild as the second. I.e. the Data packet from the second is added to the end of the data packet of the first.
function smXMLPacketAdd( smXMLAuthorityMaster, smXMLChild : string ) : string;
// Returns a bpcXML-1 XML envelope with sXML as the content. sDocTypeName is any legal
// string used to identify the document.
function bpcXMLWrapinEnvelope( sDocTypeName : string; sXML : string ) : string;
// Builds & returns a bpcXML-1 data packet representing the current row in a dataset
// tagged with block tags '<SdtsName >xxx</SdtsName >'
function bpcXMLDataSetRecordToXML( SdtsName : string; SourceDts : TDataSet; FieldRefList : TbpcLocateIndexList=nil; FieldMapList : TStringList=nil; SubFieldMapList : TStringList=nil ) : string;
// Builds & returns a bpcXML-1 data packet representing all rows (if not singleonly)
// or just the current row (if singleonly is true) in a dataset
// The entire packet will be tagged with SdtsGroup, and each record set (row) will be
// wrapped in tags named SdtsName.
function bpcXMLDataSetToXML( SdtsGroup, SdtsName : string; SourceDts : TDataSet; singleonly : boolean; FieldRefList : TbpcLocateIndexList=nil; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil; FieldMapList : string=''; SubFieldMapList : string='' ) : string; overload;
// Builds & returns a bpcXML-1 data packet representing all rows (if not singleonly)
// or just the current row (if singleonly is true) in a dataset
// The entire packet will be tagged with SdtsGroup, and each record set (row) will be
// wrapped in tags named SdtsName. The result is stored in myXMLStream which is created if nil and
// which is always returned as the result.
function bpcXMLDataSetToXML( myXMLStream : TStringStream; SdtsGroup, SdtsName : string; SourceDts : TDataSet; singleonly : boolean; FieldRefList : TbpcLocateIndexList; BookMarks : TBookmarklist; bpcBookMarks : TbpcBookmarkList; FieldMapList : string=''; SubFieldMapList : string='' ) : TStringStream; overload;
// Designed for speed on large data, this Builds & returns a bpcXML-1 data packet representing all rows (if not singleonly)
// or just the current row (if singleonly is true) in a dataset
// The entire packet will be tagged with Prefix + SdtsGroup + PostFix, and each record set (row) will be
// wrapped in tags named SdtsName. The result is stored in myXMLStream which is created if nil and
// which is always returned as the result. Prefix and PostFix allows the often smaller leadin and leadout strings to be assembled
// before the stream is filled, and particularly in the case of Prefix, will save an entire copy of the string.
function bpcXMLDataSetToXML( myXMLStream : TStringStream; Prefix, PostFix, SdtsGroup, SdtsName : string; SourceDts : TDataSet; singleonly : boolean; FieldRefList : TbpcLocateIndexList; BookMarks : TBookmarklist; bpcBookMarks : TbpcBookmarkList; FieldMapList : string=''; SubFieldMapList : string='' ) : TStringStream; overload;
// Builds & returns a bpcXML-1 data packet representing bookmarked rows, or if nil, all rows (if not singleonly)
// or just the current row (if singleonly is true) in a dataset
// The entire packet will be tagged with StoreID, and each record set (row) will be
// wrapped in tags named 'rowset'.
function bpcXMLCopyDbToXML( StoreID : string; SourceDts : TDataSet; singleonly : boolean; myBookMarks : TBookMarkList; FieldRefList : TbpcLocateIndexList=nil; bpcBookMarks : TbpcBookmarkList=nil ) : string;
// Paste the rowset contents of the XMLPacket into the DestDts dataset (either all, or just the first record). Filter the contents by the fields in the reflist
// of the form "field1;field2;field3" and/or excluding those in the exclist. If the RefList is nil or '' then all fields (but for those in the exclist) are pasted
function bpcXMLPasteXmlToDb( XMLPacket : string; DestDts : TDataSet; singleonly : boolean; myBookMarks : TBookMarkList; sFieldRefList : string=''; sFieldExcList : string=''; bpcBookMarks : TbpcBookmarkList=nil ) : boolean;
// Return an XML clause from the currow of a stringgrid
function bpcXMLSGRecordToXML( SdtsName : string; SourceSG : TStringGrid; CurRow : integer; FieldRefList : TbpcLocateIndexList=nil ) : string;
// Builds & returns a bpcXML-1 data packet representing all rows (if not singleonly)
// or just the current row (if singleonly is true) in a stringgrid
// The entire packet will be tagged with SdtsGroup, and each record set (row) will be
// wrapped in tags named SdtsName.
function bpcXMLSGToXML( SdtsGroup, SdtsName : string; SourceSG : TStringGrid; singleonly : boolean; FieldRefList : TbpcLocateIndexList=nil; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil ) : string;
// Builds & returns a bpcXML-1 data packet representing bookmarked rows, or if nil, all rows (if not singleonly)
// or just the current row (if singleonly is true) in a stringgrid
// The entire packet will be tagged with StoreID, and each record set (row) will be
// wrapped in tags named 'rowset'.
function bpcXMLCopySGToXML( StoreID : string; SourceSG : TStringGrid; singleonly : boolean; myBookMarks : TBookMarkList; FieldRefList : TbpcLocateIndexList=nil; bpcBookMarks : TbpcBookmarkList=nil ) : string;
function bpcXMLPasteXmlToSG( XMLPacket : string; DestSG : TStringGrid; currow : integer; singleonly : boolean; myBookMarks : TBookMarkList; sFieldRefList, sFieldExcList : string; bpcBookMarks : TbpcBookmarkList ) : boolean;
// Returns a string with any character &, <, >, ', " or greater than 127
// replaced with '&xxx;' versions or space if >127.
// Suitable for encoding general strings into 7-bit XML ASCII for XML engines
function bpcXMLEscape( unescapedstring : string ) : string;
// Mainly meant for bpcXMLExecute - returns true if the required right is a substring of srightslist, or srightslist is '*'
// sRequiredRight and sRightsList can be any legal string.
// Notionally it verifies that a 'right' is held in the 'rightslist', or the rights list allows all rights (i.e. '*').
function bpcValidateSMAccessRights( sRequiredRight : string; sRightsList : string ) : boolean;
// Execute a string containing a bpcXML-1 smxmlpacket (in XMLPacket) using the XML Engine contained in XMLDocument1
// Return the result of executing the packet - usually an html response.
// The role of the myDataSetCollection is critical as it contains tdataset objects with names
// corresponding to the datasources referenced in the action items. Examples of such objects
// are TDataModule, or TWebModule - or any other component that can "own" dataset components
function bpcXMLExecute( const XMLPacket : string; var XMLDocument1: TXMLDocument; fValidateSMAccess : TbpcValidfunc; myDataSetCollection : TComponent; bXMLOut : boolean; sUploadMap : string=''; fMapDataSetName: TbpcsmDSMapperfunc=nil ) : string; overload;
function bpcXMLExecute( const XMLPacket : string; var XMLDocument1: TXMLDocument; fValidateSMAccess, fGetSMAccessRights: TbpcValidfunc; myDataSetCollection : TComponent; bXMLOut : boolean; sUploadMap : string; fMapDataSetName: TbpcsmDSMapperfunc ) : string; overload;
// Performs a tdataset.locate operation on SourceDataSet using destindexby as the locate key fields and
// drawing the vararray for the look-up values from the locindexlist, which will be built if locindexlist is
// not assigned, rebuilt if destindexby differs from locindexlist, and otherwise reused.
function bpcXMLLocate( SourceRowNode : IXMLNode; destindexby : string; var SourceDataSet : TDataSet; var locindexlist : TbpcLocateIndexList; sUploadMap : string{=''} ) : boolean;
// Attempts to put the SourceDataSet into an updatable state (Insert or Edit) and
// Returns the ds state achieved (dsInsert, dsEdit, dsInactive) based on:
// -No destindexby -> Insert
// -Else bpcXMLLocate can locate a row in sourcedataset that matches the index fields of SourceRowNode -> Edit
// -Else bpcXMLLocate can't locate -> Insert
// -Failed -> dsInactive
// The current row of the sourcedataset will be moved to the target if dsEdit, else the first row.
function bpcXMLEditOnLocate( SourceRowNode : IXMLNode; destindexby : string; var SourceDataSet : TDataSet; var locindexlist : TbpcLocateIndexList; sUploadMap : string{=''} ) : TDataSetState;
// Attempts to delete from the SourceDataSet and
// Returns the ds state achieved (dsInsert, dsEdit, dsInactive) based on:
// -No destindexby -> NoChange
// -Else bpcXMLLocate can locate a row in sourcedataset that matches the index fields of SourceRowNode -> dsBrowse
// -Failed -> dsBrowse
// The current row of the sourcedataset will be moved to the row after the target, else the first row.
function bpcXMLDeleteOnLocate( SourceRowNode : IXMLNode; destindexby : string; var SourceDataSet : TDataSet; var locindexlist : TbpcLocateIndexList; sUploadMap : string{=''} ) : TDataSetState;
// Scan a rowset for the field names and return them in FieldList, creating the object as a stringlist if not provided.
function bpcXMLRowSetToFieldList( SourceRowNode : IXMLNode; FieldList : TStrings=nil ) : TStrings;
// Copies a bpcXML-1 rowset node into the current TDataSet rowset. Returns false on error, else true.
// Filter the contents by the fields in the reflist of the form "field1;field2;field3" and/or excluding those in the exclist.
// If the RefList is nil or '' then all fields (but for those in the exclist) are pasted. The upload map is used to map orgid's - this is a
// temporary hack. Leaving it '' will cause it to have no effect.
function bpcXMLCopyRowSet( var DestDataSet : TDataSet; SourceRowNode : IXMLNode; var tempresult : string; sUploadMap : string=''; FieldRefList : TbpcLocateIndexList=nil; FieldExcList : TbpcLocateIndexList=nil ) : boolean; overload;
function bpcXMLCopyRowSet( var DestSG : TStringGrid; currow : integer; SourceRowNode : IXMLNode; var tempresult : string; sUploadMap : string; FieldRefList : TbpcLocateIndexList; FieldExcList : TbpcLocateIndexList ) : boolean; overload;
// Populates a single term (TargetIndexField) of the vararray of the locindexlist with a value drawn from
// the SourceVal. Only those fields named in the locindexlist are affected
// by the SourceVal. The SourceDataset is used to find the correct type of the variant
// for the value in the VarArray. The UseVarType may be one of (xrsfVAR, xrsfSTR,
// or xrsfQSTR). It modifies the individual variant type to one of a natural type, a string
// or a 'quoted if text else unquoted' string. The latter type is used for
// database datafields in sql expressions, while the first is used for dataset.locate commands
// and the second is used in XML packets. If SourceDataSet is nil, or a field can not be
// found the standard XML string type is used (ie unquoted string).
function bpcSetLocIndValAsVarTerm( TargetIndexField, SourceVal : String; var SourceDataSet : TDataSet; var locindexlist : TbpcLocateIndexList; UseVarType : TbpcXMLFType; sUploadMap : string) : boolean;
// Populates the vararray of the locindexlist with values drawn from a simple name/value pair stringlist
// at SourceNameValPair. Only those fields named in the locindexlist are retrieved
// from the SourceNameValPair. The SourceDataset is used to find the correct type of the variant
// for the value in the VarArray. The UseVarType may be one of (xrsfVAR, xrsfSTR,
// or xrsfQSTR). It modifies the individual variant type to one of a natural type, a string
// or a 'quoted if text else unquoted' string. The latter type is used for
// database datafields in sql expressions, while the first is used for dataset.locate commands
// and the second is used in XML packets. If SourceDataSet is nil, or a field can not be
// found the standard XML string type is used (ie unquoted string).
function bpcNameValPairAsVarArray( SourceNameValPair : TStrings; var SourceDataSet : TDataSet; var locindexlist : TbpcLocateIndexList; UseVarType : TbpcXMLFType; sUploadMap : string) : boolean;
// Populates the vararray of the locindexlist with values drawn from a bpcXML-1
// rowset at SourceRowNode. Only those fields named in the locindexlist are retrieved
// from the SourceRowNode. The SourceDataset is used to find the correct type of the variant
// for the value in the VarArray. The UseVarType may be one of (xrsfVAR, xrsfSTR,
// or xrsfQSTR). It modifies the individual variant type to one of a natural type, a string
// or a 'quoted if text else unquoted' string. The latter type is used for
// database datafields in sql expressions, while the first is used for dataset.locate commands
// and the second is used in XML packets. If SourceDataSet is nil, or a field can not be
// found the standard XML string type is used (ie unquoted string).
function bpcXMLRowSetAsVarArray( SourceRowNode : IXMLNode; var SourceDataSet : TDataSet; var locindexlist : TbpcLocateIndexList; UseVarType : TbpcXMLFType; sUploadMap : string{=''} ) : boolean;
// Returns a string presenting the bpcXML-1 rowset (in SourceRowNode) as a
// conditional expression using the index list (in locindexlist). The result
// has the form: "( field1='text value') and ( field2=number )" - comprising
// each field in the index list paired with its value in the vararray.
// Use it with the xrsfQSTR in bpcXMLRowSetAsVararray to build dataset filters
// from bpcXML-1 rowsets and actions. The Source DataSet provides the field type info
function bpcXMLRowSetAsFilter( var SourceDataSet : TDataSet; SourceRowNode : IXMLNode; var locindexlist : TbpcLocateIndexList; sUploadMap : string{=''} ) : string;
// TbpcXMLFType=(xrsfVAR, xrsfSTR, xrsfQSTR);
// Returns a variant translation of a string based on FieldKind as the type determinant
// xUseVarType modifies the variant type to one of a natural type, a string
// or a 'quoted if text else unquoted' string. The latter type is used for
// database datafields in sql expressions
function bpcReturnStrAsType( StrVar : string; xUseVarType : TbpcXMLFType; FieldKind : TFieldType ) : variant;
// Returns a filtered dataset based on the rowset and the locindexlist. Uses bpcXMLRowSetAsFilter.
function bpcXMLApplyRowSetAsFilter( var SourceDataSet : TDataSet; SourceRowNode : IXMLNode; var locindexlist : TbpcLocateIndexList; sUploadMap : string{=''} ) : TDataSet;
</pre>
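To show how the packet-assembly routines above nest (smXMLStartTag, smXMLEndTag, smXMLDualTag, smXMLAuthorityPacket, smXMLDataPacket, smXMLPacket), here is a minimal Python sketch of the resulting structure. It follows the grammar and examples in the next section; function names, whitespace and the empty Pswd attribute are illustrative assumptions, not the library's Delphi output.

```python
# Hypothetical sketch of bpcXML-1 packet nesting: smxmlpacket wraps an
# authority node plus a data block of messages, each message holding an
# actions list and the datasources those actions reference.
def start_tag(tag, attributes=""):
    return "<%s %s>" % (tag, attributes) if attributes else "<%s>" % tag

def end_tag(tag):
    return "</%s>" % tag

def dual_tag(tag, contains):                 # cf. smXMLDualTag
    return start_tag(tag) + contains + end_tag(tag)

def authority_packet(oid, pid, pswd=""):     # cf. smXMLAuthorityPacket
    return '<authority OID="%s" PID="%s" Pswd="%s" />' % (oid, pid, pswd)

def sm_xml_packet(oid, pid, actions, datasources):  # cf. smXMLPacketCreate
    message = dual_tag("message", dual_tag("actions", actions) + datasources)
    data = dual_tag("data", message)         # cf. smXMLDataPacket
    return dual_tag("smxmlpacket", authority_packet(oid, pid) + data)

packet = sm_xml_packet(
    "myOID", "myPID",
    '<update source="myDataSource" indexby="FirstField;ThirdField" />',
    dual_tag("myDataSource",
             dual_tag("rowset", dual_tag("FirstField", "FieldValue"))),
)
```

The resulting string has the same shape as the "typical message structure" example in the language definition below: authority first, then data wrapping the message, actions and rowsets.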
==BPCXML LANGUAGE DEFINITION==
BPCXML Language 1 (bpcXML-1)
bpcXML-1 is a simple XML language for interfacing distributed databases.
It uses a messaging paradigm in which conversations consist of 'smxmlpacket' objects.
A syntax for the language is:
<pre>
smxmlpacket='<smxmlpacket >'.authority.data.'</smxmlpacket >'
authority='<authority OID='.string.' PID='.string.' Pswd='.string.' />'
string='"'.value.'"'
value=sequence of letters or numbers
data='<data >'.messages.'</data >'
messages=message.[messages]
message='<message >'.actions.datasources.'</message >'
actions='<actions >'.actionlist.'</actions >'
actionlist=action.[actionlist]
action=updateaction| retrieveaction | commandaction | selectaction | -to be defined-
updateaction='<'.updatecommand.' source="'.string.'" indexby="'.fieldlist.'" />'
updatecommand='update'|'updateall'|'updatet'|'updatetall' (update a datasource, or updatet an SQL table)
retrieveaction='<'.retrievecommand.' source="'.string.'" indexby="'.fieldlist.'" fields="'.fieldlist.'" destination="'.string.'" returnindexby="'.fieldlist.'" table="'.on|off.'" />'
retrievecommand='retrieve'|'retrieveall'
commandaction='<'.commandcommand.' source="'.string.'" from="'.string.'" />'
commandcommand='cm'
selectaction='<'.selectcommand.' source="'.string.'" from="'.string.'" where="'.string.'" orderby="'.string.'" indexby="'.fieldlist.'" fields="'.fieldlist.'" destination="'.string.'" returnindexby="'.fieldlist.'" table="'.on|off.'" />'
selectcommand='select'
fieldlist=fieldname.[';'.fieldlist]
datasources='<'.datasourcename.' >'.rowsetlist.'</'.datasourcename.' >'
rowsetlist=rowset.[rowsetlist]
rowset='<rowset >'.fieldnamevaluelist.'</rowset >'
fieldnamevaluelist=fieldnamevalue.[fieldnamevaluelist]
fieldnamevalue='<'.fieldname.' >'.fieldvalue.'</'.fieldname.' >'
fieldname=value
fieldvalue=value
// Under consideration Copy & Paste Support
First we copy:
<copy source="SurveyResponse" destination="SurveyResponse1" indexby="OrgID;SID;PID;InstanceID" fields="OrgID;SID;PID;QID;InstanceID;ResponseStr" />
--Make a packet called SurveyResponse1 from SurveyResponse using indexby to extract a match between the source template rowset packets and the datasource
--comprising the fields. If no fields specified, use all fields in datasource
We first have:
<message >
<actions >
<copy source="myDataSource" destination="myDataSource1" indexby="FirstField;ThirdField" />
</actions >
<myDataSource > // This is the set of bookmark rowsets for copy to use to select from myDataSource
<rowset >
<FirstField >FieldValue</FirstField >
<SecondField >FieldValue</SecondField >
<ThirdField >FieldValue</ThirdField >
</rowset >
...Other rows mentioned in the datasource packet
</myDataSource >
</message >
The Copy message is applied and it returns a set of rowsets called myDataSource1
We now have:
<message >
<actions >
<store source="myDataSource1" indexby="FirstField;ThirdField" />
</actions >
<myDataSource1 >
<rowset >
<FirstField >FieldValue</FirstField >
<SecondField >FieldValue</SecondField >
<ThirdField >FieldValue</ThirdField >
</rowset >
...Other rows mentioned in the datasource packet
</myDataSource1 >
</message >
Next we paste this clipped structure:
<message >
<actions >
<paste store="myDataSource1" destination="myDataSource" flags="allfields;exclkey" />
</actions >
</message >
An alternative paste might look like this:
<paste source="SurveyResponse1" destination="SurveyResponse" indexby="OrgID;SID;PID;InstanceID" fields="OrgID;SID;PID;QID;InstanceID;ResponseStr" />
A typical message structure looks like this:
<smxmlpacket >
<authority OID="myOID" PID="myPID" Pswd="myPassword" />
<data >
<message >
<actions >
<update source="myDataSource" indexby="FirstField;ThirdField" />
<updateall source="myDataSource2" indexby="FirstField;ThirdField" />
...Other actions - including more updates, etc.
</actions >
<myDataSource >
<rowset >
<FirstField >FieldValue</FirstField >
<SecondField >FieldValue</SecondField >
<ThirdField >FieldValue</ThirdField >
</rowset >
...Other rows mentioned in the datasource packet
</myDataSource >
...Other datasource mentioned in the actions
</message >
...Other messages in this communication message batch
</data >
</smxmlpacket >
Some specific examples of action Commands supported (nodes) are:
<update source="SurveyResponse" indexby="OrgID;SID;PID;InstanceID" /><SurveyResponse><rowset>...</rowset></SurveyResponse>
// -------Update the component named source using the indexby with values drawn from the source rowsets
<updatet source="SurveyResponse" indexby="OrgID;SID;PID;InstanceID" /><SurveyResponse><rowset>...</rowset></SurveyResponse>
// -------Update the table named source using the indexby with values drawn from the source rowsets
<retrieve source="SurveyResponse" destination="SurResponseTrnsf" indexby="OrgID;SID;PID;InstanceID" returnindexby="OrgID;SID;PID;QID;InstanceID" fields="OrgID;SID;PID;QID;InstanceID;ResponseStr" /><SurveyResponse><rowset>...</rowset></SurveyResponse>
// -------Retrieve the component or table named source (depending on whether table=on) using the indexby with values drawn from the source rowsets and returning
// -------the rows in an update message packet using destination as the source table (or source if no destination) indexed by returnindexby and containing rowsets made up of fields.
<select source="SurveyResponse" from="SurveyResponse" destination="SurResponseTrnsf" where="(OrgID='default') and (PID='MELB001')" orderby="SID, QID" indexby="OrgID;SID;PID;InstanceID" returnindexby="OrgID;SID;PID;QID;InstanceID" fields="OrgID;SID;PID;QID;InstanceID;ResponseStr" />
// -------Retrieve the result of a select expression comprised of the tables listed in from, matching the where clause and sorted in orderby order, using the indexby with values drawn from the source rowsets, and returning
// -------the fields listed in fields. (Note: no rowset argument)
<cm source="SurveyResponse" from="update SurveyResponse set InstanceID='default', ResponseStr='1' where (OrgID='default') and (PID='MELB001') and (SID='MENU2')" />
// -------Apply a non-returning command using an SQL statement supplied in from. (Note: no rowset argument)
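The command nodes above lend themselves to a simple dispatch table. The sketch below is illustrative only (handler names and return values are invented, not the BPC server implementation); it shows how the update/updateall and updatet/updatetall aliases can share a handler:

```python
# Hypothetical handlers; each receives the action node's attributes as a dict.
def handle_update(attrs):
    return "update %s" % attrs["source"]

def handle_retrieve(attrs):
    return "retrieve %s into %s" % (attrs["source"], attrs.get("destination", attrs["source"]))

def handle_select(attrs):
    return "select from %s" % attrs["from"]

def handle_cm(attrs):
    return "exec: %s" % attrs["from"]

DISPATCH = {
    "update": handle_update, "updateall": handle_update,
    "updatet": handle_update, "updatetall": handle_update,
    "retrieve": handle_retrieve, "retrieveall": handle_retrieve,
    "select": handle_select,
    "cm": handle_cm,
}

def apply_action(node_name, attrs):
    # Look up the action command (node name) and run its handler.
    return DISPATCH[node_name](attrs)
```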
The language relies on the client and the server agreeing on the definition of
the data objects held in the messages. The L-1 statement consists of an authority packet
that is used to validate the packet, and a data packet containing N messages.
Each message consists of a set of actions (in some cases requiring data from datasources)
and a list of the datasources, with the contents of the relevant data records presented as rowsets.
Each datasource can contain any number of rowsets. Each rowset represents the
data contained in one row of the datasource.
Within a message a given datasource can be referenced by as many actions as you choose,
but only the first definition of that datasource will be used for all actions.
Datasources are defined by their rowsets and follow the actions object, which contains
the list of actions.
An action is comprised of a command (eg update or updateall), a datasource (the tag name of
a set of rowsets), an (optional) datadestination (which tells the server the name of the
intended recipient of the datasource), and an (optional) indexby list (which lists the fields
to use for matching records in the datasource to records in the datadestination).
The indexby list should match tags appearing in the rowsets, with entries separated by ';'.
If the datadestination is not provided, the datasource is assumed to be the datadestination.
If the indexby field list is not included, 'insert only' is assumed on the destination dataset
in the update commands. If the indexby list is provided, the record is edited if it already exists
and inserted if it doesn't - based on treating the indexby list as a primary index.
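The indexby upsert rule just described (edit on a key match, insert otherwise) can be sketched as follows. This is an illustrative model only, with a table represented as a list of field dictionaries; it is not the BPC server code:

```python
def apply_update(table, rowsets, indexby=None):
    # With no indexby list, the update commands are 'insert only'.
    if not indexby:
        table.extend(dict(r) for r in rowsets)
        return table
    keys = indexby.split(";")
    for row in rowsets:
        key = tuple(row.get(k) for k in keys)
        for existing in table:
            if tuple(existing.get(k) for k in keys) == key:
                existing.update(row)      # key match: edit in place
                break
        else:
            table.append(dict(row))       # no match: insert
    return table
```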
The following message causes the Survey S0001 in MyOrg to be created (if it doesn't exist)
and updated (otherwise) on the server with the contents of the listed fields.
It contains
-An Authority object for person P0001 on organisation MyOrg
-An update action using data sourced from 'Survey'
-A Rowset for the current record drawn from the datasource 'Survey'
<smxmlpacket >
<authority OID="MyOrg" PID="P0001" Pswd="myPwd"/>
<data >
<message >
<actions >
<update source="Survey" indexby="OrgID;SID" />
</actions >
<Survey >
<rowset >
<OrgID>MyOrg</OrgID >
<SID>S0001</SID >
<surveyname >My Staff Survey</surveyname >
<owner>Mr. Squiggle</owner >
</rowset >
</Survey >
</message >
</data >
</smxmlpacket >
The following message causes the Survey S0001 in MyOrg to be created (if it doesn't exist)
and updated (otherwise) on the server with the contents of the listed fields in the Survey
datasource. It then causes all the questions in the SurveyQues datasource object to
be added to the survey-questions table.
It contains
-An Authority object for person P0001 on organisation MyOrg
-An update action using data sourced from 'Survey'
-A single Rowset for the current record drawn from the datasource 'Survey'
-An updateall action using data sourced from 'SurveyQues'
-Multiple Rowsets for the records drawn from the datasource 'SurveyQues'
<smxmlpacket >
<authority OID="MyOrg" PID="P0001" Pswd="myPwd"/>
<data >
<message >
<actions >
<update source="Survey" indexby="OrgID;SID" />
<updateall source="SurveyQues" indexby="OrgID;SID;QID" />
</actions >
<Survey >
<rowset >
<OrgID>MyOrg</OrgID >
<SID>S0001</SID >
<surveyname >My Staff Survey</surveyname >
<owner>Mr. Squiggle</owner >
</rowset >
</Survey >
<SurveyQues >
<rowset >
<OrgID>default</OrgID >
<SID>S0001</SID >
<QID>S0001</QID >
<question>What is your name?</question >
<quesgroup>abcdefg</quesgroup>
</rowset >
<rowset >
<OrgID>default</OrgID >
<SID>S0001</SID >
<QID>S0002</QID >
<question>What is your foot size?</question >
<quesgroup>abcdefg</quesgroup>
</rowset >
</SurveyQues >
</message >
</data >
</smxmlpacket >
In fact, update and updateall are the same on the server - all the rowsets in a
datasource are applied in both cases.
This packet retrieves a message containing the 'fields' for all the records in
SurveyResponse matching the fields in indexby with values given in the rowsets.
The packet returned is an update packet, with returnindexby provided as the
indexby attribute of the returned update action (see below).
<smxmlpacket >
<authority OID="MyOrg" PID="P0001" Pswd="myPwd"/>
<data >
<message >
<actions >
<retrieve source="SurveyResponse" indexby="OrgID;SID;PID;InstanceID" returnindexby="OrgID;SID;PID;QID;InstanceID" fields="OrgID;SID;PID;QID;InstanceID;ResponseStr" />
</actions >
<SurveyResponse >
<rowset >
<OrgID>MyOrg</OrgID >
<SID>S0001</SID >
<PID >P0001</PID >
<InstanceID>January</InstanceID >
</rowset >
</SurveyResponse >
</message >
</data >
</smxmlpacket >
The packet returned might look like:
<smxmlpacket >
<authority OID="MyOrg" PID="P0001" Pswd="myPwd"/>
<data >
<message >
<actions >
<update source="SurveyResponse" indexby="OrgID;SID;PID;QID;InstanceID" />
</actions >
<SurveyResponse >
<rowset >
<OrgID>MyOrg</OrgID >
<SID>S0001</SID >
<PID >P0001</PID >
<QID >Q0001</QID >
<InstanceID>January</InstanceID >
<ResponseStr >1</ResponseStr >
</rowset >
<rowset >
<OrgID>MyOrg</OrgID >
<SID>S0001</SID >
<PID >P0001</PID >
<QID >Q0002</QID >
<InstanceID>January</InstanceID >
<ResponseStr >3</ResponseStr >
</rowset >
</SurveyResponse >
</message >
</data >
</smxmlpacket >
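A client receiving such a returned update packet has to recover the rowsets of the named datasource. A minimal sketch (hypothetical helper name; Python's standard XML parser stands in for whatever the client actually uses):

```python
import xml.etree.ElementTree as ET

def parse_update_packet(xml_text, source):
    # Find the datasource element named 'source' anywhere in the packet
    # and return its rowsets as field-name/value dictionaries.
    root = ET.fromstring(xml_text)
    rows = []
    for datasource in root.iter(source):
        for rowset in datasource.findall("rowset"):
            rows.append({field.tag: (field.text or "") for field in rowset})
    return rows
```

Applied to the packet above, this would yield one dictionary per rowset, keyed by field tag (OrgID, SID, PID, QID, InstanceID, ResponseStr).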
Finally, a large example of an updating bpcXML-1 data packet:
<smxmlpacket >
<authority OID="myOID" PID="myPID" Pswd="myPwd"/>
<data >
<message >
<actions >
<update source="adotable1" indexby="OrgID;SID" />
<update source="survques" indexby="OrgID;SID;QID" />
</actions >
<adotable1 >
<rowset >
<OrgID>default</OrgID >
<SID>S0001</SID >
<surveyname >xxxyyy</surveyname >
<ShellHTML>abcdefg</ShellHTML>
</rowset >
</adotable1 >
<survques >
<rowset >
<OrgID>default</OrgID >
<SID>S0001</SID >
<QID>S0001</QID >
<question>xxxyyy</question >
<quesgroup>abcdefg</quesgroup>
</rowset >
</survques >
</message >
<message >
<actions >
<update source="adotable1" indexby="OrgID;SID" />
<updateall source="survques" indexby="OrgID;SID;QID" />
</actions >
<adotable1 >
<rowset >
<SID>S0001</SID >
<surveyname >xxxyyy</surveyname >
<ShellHTML>abcdefg</ShellHTML>
</rowset >
</adotable1 >
<survques >
<rowset >
<OrgID>default</OrgID >
<SID>S0001</SID >
<QID>Q0001</QID >
<question>xxxyyy</question >
<quesgroup>abcdefg</quesgroup>
</rowset >
<rowset >
<OrgID>default</OrgID >
<SID>S0001</SID >
<QID>Q0002</QID >
<question>xxxyyy</question >
<quesgroup>abcdefg</quesgroup>
</rowset >
<rowset >
<OrgID>default</OrgID >
<SID>S0001</SID >
<QID>Q0003</QID >
<question>xxxyyy</question >
<quesgroup>abcdefg</quesgroup>
</rowset >
</survques >
</message >
</data >
</smxmlpacket >
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
a832a98064be0425c069c5253fee8b430f86fe05
BpcSMScriptLibrary 3
0
455
650
2019-09-11T16:39:38Z
Bishopj
1
Created page with "==PopUp Menu Utility Routines== Language: Delphi 7 - 2007 <pre> ////////////////////////////////////////////////////////////////////////////////////////// //////// PopUp Me..."
wikitext
text/x-wiki
==PopUp Menu Utility Routines==
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// PopUp Menu Utility Routines
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Menus, Classes, DBGrids, Controls, Types, bpcStringList, Windows, Db;
type
TbpcComponentDynArray = array of TComponent;
TbpcDBGridDynArray = array of TDBGrid;
TbpcDataSourceDynArray = array of TDataSource;
// Show a popupmenu at the cursor location, remembering the calling grid
// If the source is a TDBGrid, use that, else use the controls DataSource for the Sender.PopupComponent
procedure bpcPopMenu1GridShow(myPop : TPopupMenu; Sender: TDBGrid );
// Show a popupmenu at the cursor location, remembering the calling db component's datasource
// If the source is a TDBGrid, use that, else use the controls DataSource for the Sender.PopupComponent
procedure bpcPopMenu1DBControlShow(myPop : TPopupMenu; Sender: TDataSource );
// Show a popupmenu at the cursor location, remembering the calling component
// If the source is a TDBGrid, use that, else use the controls DataSource for the Sender.PopupComponent
procedure bpcPopMenu1Show(myPop : TPopupMenu; Sender: TComponent );
// Copy the selected rows from the dbgrid into the clipboard as an smXMLPacket
function bpcPopMenu1Copy(Sender : TPopupMenu ) : boolean;
// Copy the current row from a db control aware control's datasource into the clipboard as an smXMLPacket
function bpcsmXMLDBControlCopy( Sender: TDataSource) : boolean ;
// Copy the selected rows from the dbgrid into the clipboard as an smXMLPacket
function bpcsmXMLDBGrid1Copy( Sender: TDBGrid) : boolean ;
// Paste the clipboard-stored smXMLPacket rows into the db control held on the popmenu
// If the source is a TDBGrid, use that, else use the controls DataSource for the Sender.PopupComponent
procedure bpcPopMenu1Paste(Sender : TPopupMenu; IndexBank : array of string; GridBank : array of TComponent);
// Paste into a db-aware control's datasource from the clipboard-stored smXMLPacket
procedure bpcsmXMLDBControlPaste( Sender: TComponent; IndexBank : array of string; GridBank : array of TComponent ) ;
// Step through the list (array) of TDBGrid pointers and turn row-select mode on or off (according to mode).
// Returns the value of mode.
function bpcSwitchAllDBGridsToRowSelect( mode: boolean; Targets : array of TDBGrid) : boolean;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
4d1cd138898ea917dee632b9080ab2d47bb489c3
BpcSMScriptLibrary 4
0
456
651
2019-09-11T16:40:56Z
Bishopj
1
Created page with "==Plugin DLL remote command node & data exchange routines== Language: Delphi 7 - 2007 This library defines the API for BPC SurveyManager plugin DLLs. This library is avail..."
wikitext
text/x-wiki
==Plugin DLL remote command node & data exchange routines==
Language: Delphi 7 - 2007
This library defines the API for BPC SurveyManager plugin DLLs. This library, together with any other BPC support libraries that may be required, is available on request to third-party developers.
This is V2.3 of the Plugin DLL library. This definition can be considered as fixed and stable by third party developers.
Usage: Any library intended to be automatically read and plugged into the BPC SurveyManager ISAPI library V 1 through 8 must deliver this interface to the BPC SurveyManager engine in 32-bit form. The libraries must be loadable as in-memory libraries and must support ShareMem.
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// Plugin DLL remote command node & data exchange routines
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Classes, Types, sysutils, Graphics, bpcStringList;
type
// Used for housing plugin command nodes and lists of nodes
TbpcPlgInCmd = class( TObject )
public
PlgCmdOwner : TObject;
Hint, Data: String;
Bitmap: TBitmap;
Event: TNotifyEvent;
end;
TbpcPlgInCmdStringList = class (TbpcStringList)
public
destructor Destroy; override;
procedure AddCommand( APlgCmdOwner : TObject; ACaption, AHint, AData: String; ABitmap: TBitmap; AEvent: TNotifyEvent);
function IsCommand( ACaption : string ) : boolean;
function GetCommand( ACaption : string ) : TbpcPlgInCmd;
function Execute( ACaption : string; Sender : TObject ) : string;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
8bcded82dbe687d63d3e114c60d699874bd69b98
BpcSMScriptLibrary 5
0
457
652
2019-09-11T16:42:09Z
Bishopj
1
Created page with "==Value List Editor Utility routines== Language: Delphi 7 - 2007 <pre> ////////////////////////////////////////////////////////////////////////////////////////// //////// V..."
wikitext
text/x-wiki
==Value List Editor Utility routines==
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// Value List Editor Utility routines
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Classes, Grids, ValEdit;
type
TbpcSimplProc = procedure of object;
TbpcOnValidateEvent = procedure(Sender: TObject; ACol, ARow: Longint; const KeyName, KeyValue: string) of object;
// Create and return a property item owned by a VLE with editstyle and editmask as provided.
// myEditStyle = esSimple, esEllipsis, esPickList
// myEditMask = '' or something like '999;9; '
function bpcVLEPropCreate( OwnerValueListEditor : TValueListEditor; myEditStyle: TEditStyle; myEditMask : string ) : TItemProp;
// Create, Assign to key (as name string) and return a property item owned by a VLE with editstyle and editmask as provided.
function bpcVLEPropAssign( OwnerValueListEditor : TValueListEditor; key : string; myEditStyle: TEditStyle; myEditMask : string ) : TItemProp; overload;
// Create, Assign to key (as an index position) and return a property item owned by a VLE with editstyle and editmask as provided.
function bpcVLEPropAssign( OwnerValueListEditor : TValueListEditor; key : integer; myEditStyle: TEditStyle; myEditMask : string ) : TItemProp; overload;
// Initialise (by creating) all the properties for keys in a VLE with the ordered list of editstyles and editmasks
procedure bpcVLESetPropItems( OwnerValueListEditor : TValueListEditor; myEditStyles : array of TEditStyle; myEditMasks : array of string ); overload;
// Initialise (by creating) the properties for named keys (in myKeys) in a VLE with the ordered list of editstyles and editmasks
procedure bpcVLESetPropKeyedItems( OwnerValueListEditor : TValueListEditor; myKeys : array of string; myEditStyles : array of TEditStyle; myEditMasks : array of string ); overload;
// Initialise the VLE properties to esSimple, except for those defined in the SpecialFormats stringlist
// SpecialFormats strings should have the form:
// <visible keyname>=ellipsis|picklist;<mask>
// where <mask> can be a delphi format string of the form '999;9;_' (Note a '_' as the final mask part is changed to a ' ' in the created property)
// Example:
// 'Company Number=picklist;999;9;_'
procedure bpcVLEInitPropItems( OwnerValueListEditor : TObject; SpecialFormats : TStrings );
// Get the current Value of a VLE based on its current row
function bpcVLECurrentValue( OwnerValueListEditor : TObject ) : string;
// Set the current Value of a VLE based on its current row (returns the newly assigned value as now stored, not necessarily as provided)
function bpcVLESetCurrentValue( OwnerValueListEditor : TObject; ToValue : string ) : string;
// Get the current Key of a VLE based on its current row
function bpcVLECurrentKey( OwnerValueListEditor : TObject ) : string;
// Get the trimmed value of a VLE based on a Key
function bpcVLETrimmedValue( OwnerValueListEditor : TObject; Key : string ) : string; overload;
// Get the trimmed value of a VLE based on an Index
function bpcVLETrimmedValue( OwnerValueListEditor : TObject; Index : integer ) : string; overload;
// Get the key of a VLE based on an Index
function bpcVLEName( OwnerValueListEditor : TObject; Index : integer ) : string;
// Set the Value of a VLE for the given Key (returns the newly assigned value as now stored, not necessarily as provided)
function bpcVLESetKeyedValue( OwnerValueListEditor : TObject; Key : string; ToValue : string ) : string;
// Get the row count of the current VLE
function bpcVLERowCount( OwnerValueListEditor : TObject ) : integer;
// Set the VLE's Strings Content
function bpcVLESetStrings( OwnerValueListEditor : TObject; Text : string ) : string ; overload;
// Set the VLE's Strings (Key/Value) using a stringlist
function bpcVLESetStrings( OwnerValueListEditor : TObject; myStrings : tstrings ) : tstrings ; overload;
// Set the VLE's ValidateEvent
function bpcVLESetOnValidate( OwnerValueListEditor : TObject; Value : TbpcOnValidateEvent ) : TbpcOnValidateEvent ;
// Get the VLE's ValidateEvent
function bpcVLEGetOnValidate( OwnerValueListEditor : TObject ) : TbpcSimplProc ;
// Get the VLE's Keys List
function bpcVLEGetKeys( OwnerValueListEditor : TObject ) : tstrings ;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
d679c257882baebdfe843c3483f3d866e0f1c1a9
BpcSMScriptLibrary 6
0
458
653
2019-09-11T16:43:11Z
Bishopj
1
Created page with "==ADO Database Connection Utility routines== Language: Delphi 7 - 2007 <pre> ////////////////////////////////////////////////////////////////////////////////////////// ////..."
wikitext
text/x-wiki
==ADO Database Connection Utility routines==
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// ADO Database Connection Utility routines
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses sysutils, Types, classes, DB, ADODB, JclStrings, JclSysUtils;
////////////////////////////////////////////////////////////////
// Routines for managing the bpcSMDataModule Standard
// Assumes a datamodule with the following components
// Change the database connection to NewDb and return the new connection string or the old one if no change made.
// (Assumes AdoConnection has a pre-existing valid Db connection)
function bpcChangeADODBConnectionTo( AdoConnection : TADOConnection; WithRegistryReset: boolean; NewDb: string): string;
// Return the name of the current database in the connectionstring
function bpcGetCurrentADODBName(AdoConnection : TADOConnection): string;
// Check whether the current Db is connectable and return the Db to the pre-test state
function bpcIsADODBConnectable(AdoConnection : TADOConnection): boolean;
// Copy into 'List' the names of databases on the server (those containing Filter, if Filter is not '') and return true if everything is successful, else false
function bpcGetADODBNames( AdoConnection : TADOConnection; LoginPrompt : boolean; Filter:string; List: TStrings) : boolean;
// Reconnect the datamodule returning true if successful, and false if error or otherwise unsuccessful. (Always forces disconnect first)
function bpcReConnectDataModule( AdoConnection : TADOConnection): boolean;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
a6d3d3bbfc4d79c1f348a5ab9c55d168fd604b2c
BpcSMScriptLibrary 7
0
459
654
2019-09-11T16:44:15Z
Bishopj
1
Created page with "==Useful Types== Language: Delphi 7 - 2007 <pre> ////////////////////////////////////////////////////////////////////////////////////////// //////// Useful Types //////////..."
wikitext
text/x-wiki
==Useful Types==
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// Useful Types
//////////////////////////////////////////////////////////////////////////////////////////
interface
// Just Contains Useful Types
type
TSimpleStringFunc = Function : string of object;
TSimpleBoolFunc = Function : boolean of object;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
0a176e404b93614b4b7f28cbab7e4af8b52984a9
BpcSMScriptLibrary 8
0
460
655
2019-09-11T16:45:08Z
Bishopj
1
Created page with "==bpcXML Data Transfer Utility Routines== Language: Delphi 7 - 2007 <pre> ////////////////////////////////////////////////////////////////////////////////////////// ///////..."
wikitext
text/x-wiki
==bpcXML Data Transfer Utility Routines==
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// bpcXML Data Transfer Utility Routines. Uses TbpcPublicationManager object.
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses windows, SysUtils, Classes, JvComponent, JvDirectories, Types, DB, ADODB, Psock, NMsmtp, DBGrids, HTTPApp, HTTPProd,
NMHttp, IdBaseComponent, IdComponent, IdTCPConnection, IdTCPClient,
IdHTTP, xmldom, XMLIntf, msxmldom, XMLDoc, {JvBaseDlg,} IdAuthentication, bpcDBBookMarkList,
LMDCustomComponent, lmdcont, bpcSMScriptLibrary_7;
type
TbpcsmDSMapperfunc= function( const sDataSetName: string): string of object;
TbpcSMConfirmOk = Function( var myResult : variant ) : integer of object;
TbpcSMSTPMode=(bpcSMUpdate,bpcSMReplace,bpcSMDelete);
TbpcHTTPSuccess = procedure (Cmd: CmdType) of object;
TbpcValidfunc = function (sOID, sPID: string): string;
TbpcGetPSTTableMasterIndex = function ( Sender: TDataSet ) : string;
{
TGovDupFlags = ( csmHeader, csmQuestionsByRef, csmQuestionsByAct, csmScripts, csmInstances );
TGovDupFlagsSet = set of TGovDupFlags;
TGovMaintHTTPSuccess = procedure (Cmd: CmdType) of object;
TGovMaintUpdateStatusBar = procedure of object;
TGovMaintConfirmOk = Function( var myResult : variant ) : integer of object;
TGovMaintWSLoginDlg = Function( var Username : string; var PassWord : string ) : boolean of object;
TGovMaintInfoErrorAdvice = procedure( myAdvice : string ) of object;
}
TbpcPublicationManager = class(TObject)
private
funUplinkFail: TbpcHTTPSuccess;
funUplinkComplete: TbpcHTTPSuccess;
Activated : boolean;
funGetTargetDLLModuleName: TSimpleStringFunc;
function DummyValidateSMAccess(sOID, sPID: string): string;
public
Owner : TDataModule;
ADOConnection1: TADOConnection;
funDataSetMapperFunc, funOutGoingDataSetMapperFunc : TbpcsmDSMapperfunc;
funValidateAccess : TbpcValidfunc;
funGetTableMasterIndex : TbpcGetPSTTableMasterIndex;
funGetOrgID, funGetPID : TSimpleStringFunc;
ConfirmOkFunc : TbpcSMConfirmOk;
PreviewCommsPackets : boolean;
iniURLDefPublicationSite : string;
{TargetDLLModuleName,} TargetModuleExt, TargetModuleAction : string;
tsDataSetIndexes : TStrings;
IdHTTP1 : TIdHTTP;
XMLDocument1 : TXMLDocument;
constructor create(myOwner : TDataModule; myADOConnection: TADOConnection; myDataSetMapperFunc, myOutGoingDataSetMapperFunc : TbpcsmDSMapperfunc; myConfirmOkFunc : TbpcSMConfirmOk );
destructor destroy; override;
function GetActive : boolean;
procedure SetActive( value : boolean );
function DataSetMapperFunc( const sDataSetName : string ) : string ;
function OutGoingDataSetMapperFunc( const sDataSetName : string ) : string ;
// Send the current row or all rows of the sender dataset to the URL (or if URL is omitted, the iniURLDefPublicationSite)
// Defaults the destination to ''.
// bpcBookMarks is an alternative to BookMarks (the latter requiring a grid, while the former does not). If both are set, BookMarks are used.
function SendToPubServer( indexby : string; singleonly:boolean; sendingDS : TDataSet; publishto : string; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil ): string; overload;
// Send the current row or all rows of the sender dataset to the URL (or if URL is omitted, the iniURLDefPublicationSite)
// bpcBookMarks is an alternative to BookMarks (the latter requiring a grid, while the former does not). If both are set, BookMarks are used.
// If Confirmer is assigned, the associated TSurvMaintConfirmOk routine will be called before the XML packet is dispatched.
function SendToPubServer( confirmer: TbpcSMConfirmOk ;destination, indexby : string; singleonly:boolean; sendastable:boolean; sendermode: TbpcSMSTPMode; DelFilter: string; sendingDS : TDataSet; publishto : string; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil ) : string; overload;
// Send the current row or all rows of the sender dataset to the URL (or if URL is omitted, the iniURLDefPublicationSite)
// bpcBookMarks is an alternative to BookMarks (the latter requiring a grid, while the former does not). If both are set, BookMarks are used.
function SendToPubServer( destination, indexby : string; singleonly:boolean; sendastable:boolean; sendermode: TbpcSMSTPMode; DelFilter: string; sendingDS : TDataSet; publishto : string; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil ): string; overload;
function SendToPubServer( destination, indexby : string; singleonly:boolean; sendastable:boolean; sendingDS : TDataSet; publishto : string; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil ) : string; overload;
// Send all the rows of the sending qry (named sourceQryName, or if '' named destination) to the URL (or if URL is omitted, the iniURLDefPublicationSite)
function SendToPubServer( destination, indexby : string; singleonly:boolean; sendastable:boolean; sendermode: TbpcSMSTPMode; DelFilter: string; sourceQryName, sendingQry : string; publishto : string ) : string; overload;
// Send all the rows of the sending qry (named sourceQryName, or if '' named destination) to the URL (or if URL is omitted, the iniURLDefPublicationSite)
// If Confirmer is assigned, the associated TGovMaintConfirmOk routine will be called before the XML packet is dispatched.
function SendToPubServer( confirmer: TbpcSMConfirmOk ;destination, indexby : string; singleonly:boolean; sendastable:boolean; sendermode: TbpcSMSTPMode; DelFilter: string; sourceQryName, sendingQry : string; publishto : string ) : string; overload;
// Send all the rows of the sending qry (named sourceQryName, or if '' named destination) to the URL (or if URL is omitted, the iniURLDefPublicationSite)
// If Confirmer is assigned, the associated TGovMaintConfirmOk routine will be called before the XML packet is dispatched.
// Assumes the sendermode (not present in this version) is the Update (ie. NOT replace mode)
function SendToPubServer( confirmer: TbpcSMConfirmOk ;destination, indexby : string; singleonly:boolean; sendastable:boolean; sourceQryName, sendingQry : string; publishto : string ) : string; overload;
procedure GetFromPubServer(indexby, returnindexby : string; singleonly:boolean; sendingDS : TDataSet; publishto : string ; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil);
procedure GetSimpleFromPubServer(indexby, returnindexby : string; singleonly:boolean; sendingDS : TDataSet; publishto : string; myParams : Variant );
// Return the index for the dataset (only some known datasets covered)
function GetTableMasterIndex( Sender: TDataSet ) : string;
// Send the current row or all rows of the sender dataset to the URL (or if URL is omitted, the iniURLDefPublicationSite)
// The correct index will be found in the DataSetIndexes StringList and matched with the dataset prior to sending.
procedure IndexedSendToPubServer( SingleOnly: boolean; Sender: TDataSet; URL : string=''; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil ); Overload;
// Get the current row or all rows of the sender dataset from the URL (or if URL is omitted, the iniURLDefPublicationSite)
// The correct index will be found in the DataSetIndexes StringList and matched with the dataset prior to sending.
procedure IndexedGetFromPubServer( SingleOnly: boolean; Sender: TDataSet; URL : string=''; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil); Overload;
// Send the current row or all rows of the sender dataset to the URL (or if URL is omitted, the iniURLDefPublicationSite)
// The correct index will be found in the provided myTableIndexes StringList and matched with the dataset prior to sending.
procedure IndexedSendToPubServer(myTableIndexes : TStrings; SingleOnly: boolean; Sender: TDataSet; URL : string=''; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil); Overload;
// Get the current row or all rows of the sender dataset from the URL (or if URL is omitted, the iniURLDefPublicationSite)
// The correct index will be found in the provided myTableIndexes StringList and matched with the dataset prior to sending.
procedure IndexedGetFromPubServer(myTableIndexes : TStrings; SingleOnly: boolean; Sender: TDataSet; URL : string=''; BookMarks : TBookmarklist=nil; bpcBookMarks : TbpcBookmarkList=nil); Overload;
property OnGetOrgID : TSimpleStringFunc read funGetOrgID write funGetOrgID;
property OnGetPID : TSimpleStringFunc read funGetPID write funGetPID;
property Active : boolean read GetActive write SetActive;
property EnableCommsPreview : boolean read PreviewCommsPackets write PreviewCommsPackets;
property PublicationSite : string read iniURLDefPublicationSite write iniURLDefPublicationSite;
property OnGetTargetDLL : TSimpleStringFunc read funGetTargetDLLModuleName write funGetTargetDLLModuleName;
property TargetAction : string read TargetModuleAction write TargetModuleAction;
property TargetExt : string read TargetModuleExt write TargetModuleExt;
property IdHTTP : TIdHTTP read IdHTTP1 write IdHTTP1;
property DataSetIndexes : TStrings read tsDataSetIndexes write tsDataSetIndexes;
property OnValidateAccess : TbpcValidfunc read funValidateAccess write funValidateAccess;
property OnConfirmOkFunc : TbpcSMConfirmOk read ConfirmOkFunc write ConfirmOkFunc;
property OnGetTableMasterIndex : TbpcGetPSTTableMasterIndex read funGetTableMasterIndex write funGetTableMasterIndex;
property OnOutGoingDataSetMapper : TbpcsmDSMapperfunc read funOutGoingDataSetMapperFunc write funOutGoingDataSetMapperFunc;
property OnDataSetMapper : TbpcsmDSMapperfunc read funDataSetMapperFunc write funDataSetMapperFunc;
property OnUpLinkComplete : TbpcHTTPSuccess read funUplinkComplete write funUplinkComplete;
property OnUpLinkFail : TbpcHTTPSuccess read funUplinkFail write funUplinkFail;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
a15dcda59e217ef203c0a3c7830c441c8db9fdf4
BpcSMScriptLibrary 9
0
461
656
2019-09-11T16:46:06Z
Bishopj
1
Created page with "==TStringGrid BPC Library== Language: Delphi 7 - 2007 Library provides routines for manipulating standard Delphi TStringGrids <pre> //////////////////////////////////////..."
wikitext
text/x-wiki
==TStringGrid BPC Library==
Language: Delphi 7 - 2007
Library provides routines for manipulating standard Delphi TStringGrids
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// TStringGrid BPC Library
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Classes, Types, Windows, Grids, DB;
//////////////////////////////////////////////////
// TStringGrid BPC Library
// Copy the selected rows from the grid into the clipboard as an smXMLPacket
// Returns true if the clipboard is non-empty
function bpcsmXMLSGGrid1Copy( Sender: TStringGrid) : boolean;
// Copy the selected rows from the grid into the clipboard as a CSV packet
// Returns true if the clipboard is non-empty
function bpcCSVSGGrid1Copy( Sender: TStringGrid) : boolean;
function bpcTABSGGrid1Copy( Sender: TStringGrid) : boolean;
// Paste a CSV, TAB or XML string from the clipboard into the stringgrid and return the stringgrid
function bpcXMLSGGrid1Paste( Sender: TStringGrid) : TStringGrid;
function bpcCSVSGGrid1Paste( Sender: TStringGrid) : TStringGrid;
function bpcTABSGGrid1Paste( Sender: TStringGrid) : TStringGrid;
// Write the stringgrid to a string as a TAB-delimited file and return the string
function bpcSGWriteTABString(myStringGrid1: TStringGrid ) : string;
// Write the stringgrid to a string as a CSV file and return the string
function bpcSGWriteCSVString(myStringGrid1: TStringGrid ) : string;
// Read in a CSV string - the first row MUST BE the headings. Strings are marked by "" and ',' are separators.
// If bGrowFromLastRow is true, the data will be added to the end of the stringgrid without clearing; if bClearGrid is true all
// columns will be cleared. FixedCols is the number of fixed columns from the left, StartCol is the column at which to start
// inserting data, and FixedColsList is a comma separated list of headings for the fixed columns portion (assuming the
// startcol is after the fixed cols). The CSVString is the CSV string to be imported. Returns the
// stringgrid.
function bpcSGLoadCSVString( myStringGrid1: TStringGrid; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; CSVString: string): TStringGrid; overload;
function bpcSGLoadTABString( myStringGrid1: TStringGrid; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; CSVString: string): TStringGrid; overload;
function bpcSGLoadDLMString( myStringGrid1: TStringGrid; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; CSVString: string; Delim : char ): TStringGrid; overload;
// Read in a CSV file - the first row MUST BE the headings. Strings are marked by "" and ',' are separators.
// If bGrowFromLastRow is true, the file will be added to the end of the stringgrid without clearing; if bClearGrid is true all
// columns will be cleared. FixedCols is the number of fixed columns from the left, StartCol is the column at which to start
// inserting data, and FixedColsList is a comma separated list of headings for the fixed columns portion (assuming the
// startcol is after the fixed cols). The FileName is the full path including filename to the csv file to be imported. Returns the
// stringgrid.
// eg: bpcSGLoadCSVFile( StringGrid1,(MergeRowsDSCBX.Checked), (not (MergeRowsDSCBX.Checked or MergeColsDSCBX.Checked)), 1, iff(MergeColsDSCBX.Checked,StringGrid1.ColCount,1), 'Status', FileName );
function bpcSGLoadCSVFile( myStringGrid1: TStringGrid; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; FileName: string; QuotesAreDoubled : boolean=false ): TStringGrid;
// Read in a dataset - the field names are used as the headings of the grid.
// If bGrowFromLastRow is true, the data will be added to the end of the stringgrid without clearing; if bClearGrid is true all
// columns will be cleared. FixedCols is the number of fixed columns from the left, StartCol is the column at which to start
// inserting data, and FixedColsList is a comma separated list of headings for the fixed columns portion (assuming the
// startcol is after the fixed cols). The FileContents is the TDataSet to be imported, which will be opened if necessary,
// and repositioned at the first record after completion. Returns the stringgrid.
// eg: bpcSGLoadFromTable( StringGrid1, (MergeRowsDSCBX.Checked), (not (MergeRowsDSCBX.Checked or MergeColsDSCBX.Checked)), 1, iff(MergeColsDSCBX.Checked,StringGrid1.ColCount,1), 'Status', FileContents );
function bpcSGLoadFromTable( myStringGrid1: TStringGrid; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; FileContents: TDataSet): TStringGrid;
// Fill a stringgrid column targcol with an (optionally incrementing) Mask of chars, or with GUIDs, from FromRow to ToRow, and return the stringgrid
// OVERLOADED FUNCTION
// bpcSGFillRows( StringGrid1, FromRow, ToRow, MakeGUIDs, Mask, MaskChar, StartString, IncrementBy, OnlyNums )
// eg: bpcSGFillRows( StringGrid1, StringGrid1.Col, FromRow, ToRow, FillColumnDlg.UseGUIDCBX.Checked, FillColumnDlg.FillMask.Text, iff( FillColumnDlg.UseMaskCharCBX.Checked, iff( length(FillColumnDlg.MaskChar.Text)>0, FillColumnDlg.MaskChar.Text[1], '#'), #0 ), trim(FillColumnDlg.StartString.Text) ,iff( FillColumnDlg.IncrementCBX.Checked, FillColumnDlg.Incrementer.AsInteger, 0 ), FillColumnDlg.NumsOnlyCBX.Checked );
function bpcSGFillRows( myStringGrid1: TStringGrid; targcol : integer; FromRow, ToRow : integer; MakeGUIDs : boolean; Mask : string; MaskChar : char; StartString : string; IncrementBy : integer; OnlyNums : boolean ) : TStringGrid; overload;
// Fill a stringgrid column targcol with GUIDs from FromRow to ToRow and return the stringgrid
// OVERLOADED FUNCTION
function bpcSGFillRows( myStringGrid1: TStringGrid; targcol : integer; FromRow, ToRow : integer; MakeGUIDs : boolean ) : TStringGrid; overload;
// Write the stringgrid to a file called sFileName as a CSV file
procedure bpcSGWriteCSVFile(myStringGrid1: TStringGrid; sFileName : string);
// Delete the col at targcol in a stringgrid returning the stringgrid
function bpcSGDeleteCol( myStringGrid1: TStringGrid; TargCol : integer ) : TStringGrid;
// Delete the row at targrow in a stringgrid returning the stringgrid
function bpcSGDeleteRow( myStringGrid1: TStringGrid; TargRow : integer ) : TStringGrid;
// Delete the rows in a stringgrid preserving the header row if includeHeader is false and returning the stringgrid
function bpcSGDeleteAllRows( myStringGrid1: TStringGrid; includeHeader : boolean ) : TStringGrid;
// Insert a new blank col at curcol in a stringgrid col returning the stringgrid
function bpcSGInsertCol( myStringGrid1: TStringGrid; CurCol : integer ) : TStringGrid;
// Insert a new blank row at currow in a stringgrid row returning the stringgrid
function bpcSGInsertRow( myStringGrid1: TStringGrid; CurRow : integer ) : TStringGrid;
// Clear the contents of a stringgrid col at targcol
function bpcSGClearCol( myStringGrid1: TStringGrid; targcol : integer ) : TStringGrid;
// Clear the contents of a stringgrid row at targrow
function bpcSGClearRow( myStringGrid1: TStringGrid; targrow : integer ) : TStringGrid;
// Clear all the contents of a stringgrid preserving the header row if includeHeader is false and returning the stringgrid
function bpcSGClearAllRows( myStringGrid1: TStringGrid; includeHeader : boolean ) : TStringGrid;
// Returns the column index of the stringgrid col corresponding to the colname
function bpcSGIndexOfSGCol( myStringGrid1: TStringGrid; colname : string ) : integer;
// Returns true if the stringgrid row is empty
function bpcSGIsEmptyRow( myStringGrid1: TStringGrid; targrow : integer ) : boolean;
// Copies a StringGrid col fromcol tocol and returns the index of the col after the tocol
function bpcSGCopyCol( myStringGrid1: TStringGrid; tocol, fromcol : integer ) : integer;
// Copies a StringGrid row fromrow torow and returns the index of the row after the torow
function bpcSGCopyRow( myStringGrid1: TStringGrid; torow, fromrow : integer ) : integer;
</pre>
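As an illustrative sketch (not part of the library documentation), the CSV loader and clipboard copy routines might be combined in a form event handler like this; the form, button handler and file name <code>risks.csv</code> are assumptions:
<pre>
// Hedged sketch: load a CSV file into StringGrid1 (clearing it first,
// one fixed column headed 'Status', data starting at column 1), then
// copy the selection to the clipboard as tab-delimited text.
procedure TForm1.LoadButtonClick(Sender: TObject);
begin
  bpcSGLoadCSVFile( StringGrid1, False, True, 1, 1, 'Status', 'risks.csv' );
  if not bpcTABSGGrid1Copy( StringGrid1 ) then
    ShowMessage('Nothing copied - the grid selection was empty');
end;
</pre>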
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
adde87f66212d1f66a8466ee0c9b497ca7fd5f40
BpcSMScriptLibrary 10
0
462
657
2019-09-11T16:47:17Z
Bishopj
1
Created page with "==TTIWDBAdvWebGrid Manipulation Routines== Language: Delphi 7 - 2007 This library requires the TMS TIWDBAdvWebGrid (from TMS - [http://tmssoftware.com/ http://tmssoftware...."
wikitext
text/x-wiki
==TTIWDBAdvWebGrid Manipulation Routines==
Language: Delphi 7 - 2007
This library requires the TMS TIWDBAdvWebGrid (from TMS - [http://tmssoftware.com/ http://tmssoftware.com/] ) and the Intraweb suite (from AtoZed - [http://www.atozedsoftware.com/index.en.aspx http://www.atozedsoftware.com/index.en.aspx] ).
It is distributed in source form only, free of charge and on request. It will not work without the TMS and Intraweb components.
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// TTIWDBAdvWebGrid Routines
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Classes, Types, DB, IWDBAdvWebGrid;
// Returns the index of a TTIWDBAdvGrid column given the name.
function bpcGetTIAdvGridColumnIndex( ColCollection: TTIWDBWebGridColumns; ColName : string) : Integer;
// Returns the TTIWDBAdvGrid column given the name.
function bpcGetTIAdvGridColumn( ColCollection: TTIWDBWebGridColumns; ColName : string) : TTIWDBWebGridColumn;
// HTML Edit control can't handle form and div tags - temporary fix (there is a better one in the libraries) //##JB Replace with bpcsmLib routine
function bpcStripFormTag( myHTMLString : string) : string;
// HTML Edit control can't handle form and div tags - temporary fix (there is a better one in the libraries) //##JB Replace with bpcsmLib routine
function bpcRebuildStripFormTag( myHTMLString : string) : string;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
a5e17682e21fe6d6a89de53bc50957394a005344
BpcSMScriptLibrary 11
0
463
658
2019-09-11T16:48:26Z
Bishopj
1
Created page with "==TClientDataSet BPC Library== Language: Delphi 7 - 2007 This library delivers the stringgrid manipulations and enhancements to directly to Delphi ClientDataSets, as well a..."
wikitext
text/x-wiki
==TClientDataSet BPC Library==
Language: Delphi 7 - 2007
This library delivers the stringgrid manipulations and enhancements directly to Delphi ClientDataSets, as well as a few additional useful facilities.
===Currently Implemented===
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// TClientDataSet BPC Library
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Classes, Types, Windows, Grids, DB, DBClient;
Type
TbpcCDSLoadDLMStrOnSetFieldValue = procedure ( myTargCDS: TClientDataSet; sImportFieldName : string; var sImportFieldValue : string; var bAllowed : boolean ) of object;
//////////////////////////////////////////////////
// TClientDataSet BPC Library
</pre>
===Not currently implemented===
<pre>
// Copy the selected rows from the grid into the clipboard as an smXMLPacket
// Returns true if the clipboard is non-empty
function bpcsmXMLCDSGrid1Copy( Sender: TClientDataSet) : boolean;
// Copy the selected rows from the grid into the clipboard as a CSV packet
// Returns true if the clipboard is non-empty
function bpcCSVCDSGrid1Copy( Sender: TClientDataSet) : boolean;
function bpcTABCDSGrid1Copy( Sender: TClientDataSet) : boolean;
// Paste into the stringgrid as a CSV, TAB, XML string and return the string
function bpcXMLCDSGrid1Paste( Sender: TClientDataSet) : TClientDataSet;
function bpcCSVCDSGrid1Paste( Sender: TClientDataSet) : TClientDataSet;
function bpcTABCDSGrid1Paste( Sender: TClientDataSet) : TClientDataSet;
</pre>
===Currently Implemented===
<pre>
// Write the TClientDataSet to a string as a TAB-delimited file and return the string
function bpcCDSWriteTABString(myStringGrid1: TClientDataSet ) : string;
// Write the TClientDataSet to a string as a CSV file and return the string
function bpcCDSWriteCSVString(myStringGrid1: TClientDataSet ) : string;
// Read in a CSV string - the first row MUST BE the headings. Strings are marked by "" and ',' are separators.
// If bGrowFromLastRow is true, the data will be added to the end of the dataset without clearing; if bClearGrid is true all
// columns will be cleared. FixedCols is the number of fixed columns from the left, StartCol is the column at which to start
// inserting data, and FixedColsList is a comma separated list of headings for the fixed columns portion (assuming the
// startcol is after the fixed cols). The CSVString is the CSV string to be imported. Returns the
// dataset.
function bpcCDSLoadFromTable( myStringGrid1: TClientDataSet; bGrowFromLastRow: boolean; bClearGrid: boolean; StartCol : integer; FileContents: TDataSet): TClientDataSet; overload;
function bpcCDSLoadCSVFile( myStringGrid1: TClientDataSet; bGrowFromLastRow: boolean; bClearGrid: boolean; StartCol : integer; FileName: string; QuotesAreDoubled : boolean ): TClientDataSet; overload;
function bpcCDSLoadDLMString( myStringGrid1: TClientDataSet; csvFieldList : array of string; bNullOnEmptyVal : boolean; bGrowFromLastRow: boolean; bClearGrid: boolean; StartCol : integer; MyOnFieldWrite : TbpcCDSLoadDLMStrOnSetFieldValue; CSVString: string; Delim : char ): TClientDataSet; overload;
function bpcCDSLoadCSVString( myStringGrid1: TClientDataSet; csvFieldList : array of string; bNullOnEmptyVal : boolean; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; MyOnFieldWrite : TbpcCDSLoadDLMStrOnSetFieldValue; CSVString: string): TClientDataSet; overload;
function bpcCDSLoadTABString( myStringGrid1: TClientDataSet; csvFieldList : array of string; bNullOnEmptyVal : boolean; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; MyOnFieldWrite : TbpcCDSLoadDLMStrOnSetFieldValue; CSVString: string): TClientDataSet; overload;
function bpcCDSLoadTABString( myStringGrid1: TClientDataSet; csvFieldList : array of string; bNullOnEmptyVal : boolean; bGrowFromLastRow: boolean; bClearGrid: boolean; StartCol : integer; MyOnFieldWrite : TbpcCDSLoadDLMStrOnSetFieldValue; CSVString: string): TClientDataSet; overload;
function bpcCDSLoadTABString( myStringGrid1: TClientDataSet; bGrowFromLastRow: boolean; bClearGrid: boolean; StartCol : integer; CSVString: string): TClientDataSet; overload;
//function bpcCDSLoadDLMString( myStringGrid1: TClientDataSet; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; CSVString: string; Delim : char ): TClientDataSet; overload;
// Read in a CSV file - the first row MUST BE the headings. Strings are marked by "" and ',' are separators.
// If bGrowFromLastRow is true, the file will be added to the end of the dataset without clearing; if bClearGrid is true all
// columns will be cleared. FixedCols is the number of fixed columns from the left, StartCol is the column at which to start
// inserting data, and FixedColsList is a comma separated list of headings for the fixed columns portion (assuming the
// startcol is after the fixed cols). The FileName is the full path including filename to the csv file to be imported. Returns the
// dataset.
// eg: bpcCDSLoadCSVFile( StringGrid1,(MergeRowsDSCBX.Checked), (not (MergeRowsDSCBX.Checked or MergeColsDSCBX.Checked)), 1, iff(MergeColsDSCBX.Checked,StringGrid1.ColCount,1), 'Status', FileName );
//function bpcCDSLoadCSVFile( myStringGrid1: TClientDataSet; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; FileName: string; QuotesAreDoubled : boolean=false ): TClientDataSet;
// Read in a dataset - the field names are used as the headings of the grid.
// If bGrowFromLastRow is true, the data will be added to the end of the dataset without clearing; if bClearGrid is true all
// columns will be cleared. FixedCols is the number of fixed columns from the left, StartCol is the column at which to start
// inserting data, and FixedColsList is a comma separated list of headings for the fixed columns portion (assuming the
// startcol is after the fixed cols). The FileContents is the TDataSet to be imported, which will be opened if necessary,
// and repositioned at the first record after completion. Returns the dataset.
// eg: bpcCDSLoadFromTable( StringGrid1, (MergeRowsDSCBX.Checked), (not (MergeRowsDSCBX.Checked or MergeColsDSCBX.Checked)), 1, iff(MergeColsDSCBX.Checked,StringGrid1.ColCount,1), 'Status', FileContents );
//function bpcCDSLoadFromTable( myStringGrid1: TClientDataSet; bGrowFromLastRow: boolean; bClearGrid: boolean; FixedCols, StartCol : integer; FixedColsList: string; FileContents: TDataSet): TClientDataSet;
// Returns the column index of the stringgrid col corresponding to the colname
function bpcCDSIndexOfCDSCol( myStringGrid1: TClientDataSet; colname : string ) : integer;
</pre>
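As an illustrative sketch (component, method and field names here are assumptions, not part of the library), a tab-delimited string might be imported through the per-field callback like this:
<pre>
// Hedged sketch: import a tab-delimited string into ClientDataSet1,
// vetting each value through a TbpcCDSLoadDLMStrOnSetFieldValue method.
procedure TForm1.MyOnFieldWrite( myTargCDS: TClientDataSet;
  sImportFieldName : string; var sImportFieldValue : string;
  var bAllowed : boolean );
begin
  // Trim incoming values and skip the illustrative 'Notes' column
  sImportFieldValue := Trim(sImportFieldValue);
  bAllowed := sImportFieldName <> 'Notes';
end;

procedure TForm1.ImportBtnClick(Sender: TObject);
begin
  // bNullOnEmptyVal=True, don't grow from last row, clear first,
  // start at column 1, tab (#9) as delimiter
  bpcCDSLoadDLMString( ClientDataSet1, ['RiskID','Title','Notes'], True,
    False, True, 1, MyOnFieldWrite,
    'RiskID'#9'Title'#9'Notes'#13#10'R1'#9'Server outage'#9'tbc', #9 );
end;
</pre>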
===Deprecated Routines===
These are stringgrid routines that are not relevant to CDS or are considered unsafe in database contexts. These routines have been removed from the V3 libraries onwards.
<pre>
// Fill a stringgrid column targcol with an (optionally incrementing) Mask of chars, or with GUIDs, from FromRow to ToRow, and return the stringgrid
// OVERLOADED FUNCTION
// bpcCDSFillRows( StringGrid1, FromRow, ToRow, MakeGUIDs, Mask, MaskChar, StartString, IncrementBy, OnlyNums )
// eg: bpcCDSFillRows( StringGrid1, StringGrid1.Col, FromRow, ToRow, FillColumnDlg.UseGUIDCBX.Checked, FillColumnDlg.FillMask.Text, iff( FillColumnDlg.UseMaskCharCBX.Checked, iff( length(FillColumnDlg.MaskChar.Text)>0, FillColumnDlg.MaskChar.Text[1], '#'), #0 ), trim(FillColumnDlg.StartString.Text) ,iff( FillColumnDlg.IncrementCBX.Checked, FillColumnDlg.Incrementer.AsInteger, 0 ), FillColumnDlg.NumsOnlyCBX.Checked );
function bpcCDSFillRows( myStringGrid1: TClientDataSet; targcol : integer; FromRow, ToRow : integer; MakeGUIDs : boolean; Mask : string; MaskChar : char; StartString : string; IncrementBy : integer; OnlyNums : boolean ) : TClientDataSet; overload;
// Fill a stringgrid column targcol with GUIDs from FromRow to ToRow and return the stringgrid
// OVERLOADED FUNCTION
function bpcCDSFillRows( myStringGrid1: TClientDataSet; targcol : integer; FromRow, ToRow : integer; MakeGUIDs : boolean ) : TClientDataSet; overload;
// Write the stringgrid to a file called sFileName as a CSV file
procedure bpcCDSWriteCSVFile(myStringGrid1: TClientDataSet; sFileName : string);
// Delete the col at TargCol in a stringgrid returning the stringgrid
function bpcCDSDeleteCol( myStringGrid1: TClientDataSet; TargCol : integer ) : TClientDataSet;
// Delete the row at targrow in a stringgrid returning the stringgrid
function bpcCDSDeleteRow( myStringGrid1: TClientDataSet; TargRow : integer ) : TClientDataSet;
// Delete the rows in a stringgrid preserving the header row if includeHeader is false and returning the stringgrid
function bpcCDSDeleteAllRows( myStringGrid1: TClientDataSet; includeHeader : boolean ) : TClientDataSet;
// Insert a new blank col at curcol in a stringgrid col returning the stringgrid
function bpcCDSInsertCol( myStringGrid1: TClientDataSet; CurCol : integer ) : TClientDataSet;
// Insert a new blank row at currow in a stringgrid row returning the stringgrid
function bpcCDSInsertRow( myStringGrid1: TClientDataSet; CurRow : integer ) : TClientDataSet;
// Clear the contents of a stringgrid col at targcol
function bpcCDSClearCol( myStringGrid1: TClientDataSet; targcol : integer ) : TClientDataSet;
// Clear the contents of a stringgrid row at targrow
function bpcCDSClearRow( myStringGrid1: TClientDataSet; targrow : integer ) : TClientDataSet;
// Clear all the contents of a stringgrid preserving the header row if includeHeader is false and returning the stringgrid
function bpcCDSClearAllRows( myStringGrid1: TClientDataSet; includeHeader : boolean ) : TClientDataSet;
</pre>
<pre>
{
// Returns true if the stringgrid row is empty
function bpcCDSIsEmptyRow( myStringGrid1: TClientDataSet; targrow : integer ) : boolean;
// Copies a StringGrid col fromcol tocol and returns the index of the col after the tocol
function bpcCDSCopyCol( myStringGrid1: TClientDataSet; tocol, fromcol : integer ) : integer;
// Copies a StringGrid row fromrow torow and returns the index of the row after the torow
function bpcCDSCopyRow( myStringGrid1: TClientDataSet; torow, fromrow : integer ) : integer;
}
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
5cdecf9b632ca4a34ea54589858d4e74a7e786b6
BpcSMScriptLibrary 12
0
464
659
2019-09-11T16:49:29Z
Bishopj
1
Created page with "==MHTML MS CDO Interface - BPC Library== Language: Delphi 7 - 2007 ALL ROUTINES IN THIS LIBRARY REQUIRE MS CDO and ADO 2.5+ to be available on the system MS CDO can be foun..."
wikitext
text/x-wiki
==MHTML MS CDO Interface - BPC Library==
Language: Delphi 7 - 2007
ALL ROUTINES IN THIS LIBRARY REQUIRE MS CDO and ADO 2.5+ to be available on the system
MS CDO can be found in Cdosys.dll, which is a standard component of XP, W2000 and W2003
systems. Exchange 2007 does not include CDO / MAPI support by default, so the library
*may* have to be downloaded separately from Microsoft.
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// MHTML MS CDO Interface - BPC Library
//////////////////////////////////////////////////////////////////////////////////////////
// ALL ROUTINES IN THIS LIBRARY REQUIRE MS CDO and ADO 2.5+ to be available on the system
// MS CDO can be found in Cdosys.dll, which is a standard component of XP, W2000 and W2003
// systems. Exchange 2007 does not include CDO / MAPI support by default, so the library
// *may* have to be downloaded separately from Microsoft.
interface
uses Classes;
// Save the AURL to AFileName as an MHTML document
function SaveToMHTMLFile(const AUrl,AFileName: string; var ErrorMessage : string ) : boolean;
// Save the AURL to a Stream as an MHTML document
function SaveToMHTMLStream(const AUrl: string; Var MHTStream : TStream; var ErrorMessage : string ) : boolean;
// Save the AURL to a string as an MHTML document (return the string or '')
function SaveToMHTMLString(const AUrl: string; var ErrorMessage : string ) : string; overload;
// Save the AURL to a string as an MHTML document (return True on success, or False and '')
function SaveToMHTMLString(const AUrl: string; Var MHTMLString : String; var ErrorMessage : string ) : boolean; overload;
</pre>
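As an illustrative sketch (the URL and output path are assumptions), a page might be archived to a single MHTML file like this:
<pre>
// Hedged sketch: archive a web page as a single-file MHTML document,
// reporting the library's error message on failure.
procedure ArchivePage;
var
  ErrMsg : string;
begin
  if not SaveToMHTMLFile( 'http://www.example.com/',
                          'C:\archive\page.mht', ErrMsg ) then
    ShowMessage('MHTML save failed: ' + ErrMsg);
end;
</pre>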
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
7d0efd857abd25cdd1e415d8fed0ff4bb350547d
BpcSMScriptLibrary 13
0
465
660
2019-09-11T16:50:24Z
Bishopj
1
Created page with "==Convert an ADO Recordset to XML and back again - BPC Library ADOXMLUnit== Author: Dmitry Lifatov Language: Delphi 7 - 2007 <pre> ////////////////////////////////////////..."
wikitext
text/x-wiki
==Convert an ADO Recordset to XML and back again - BPC Library ADOXMLUnit==
Author: Dmitry Lifatov
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// Convert an ADO Recordset to XML and back again - BPC Library ADOXMLUnit
//////////////////////////////////////////////////////////////////////////////////////////
// ALL ROUTINES IN THIS LIBRARY REQUIRE MS ADO 2.5+ to be available on the system
// Author: Dmitry Lifatov
interface
uses
Classes, ADOInt;
// Convert an ADO recordset into an XML string
// Use: Memo1.Lines.Text:=RecordsetToXML(ADOQuery1.Recordset);
function RecordsetToXML(const Recordset: _Recordset): string;
// Convert a properly formed XML string into an ADO recordset
// Use: ADOQuery1.Recordset:=RecordsetFromXML(Memo1.Lines.Text);
function RecordsetFromXML(const XML: string): _Recordset;
</pre>
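Following the "Use:" comments above, a round trip might look like this sketch (component names are assumptions):
<pre>
// Hedged sketch: serialise an open ADO query to XML and restore it
// into a second query.
var
  XMLText : string;
begin
  ADOQuery1.Open;
  XMLText := RecordsetToXML( ADOQuery1.Recordset );
  // ... persist or transmit XMLText here ...
  ADOQuery2.Recordset := RecordsetFromXML( XMLText );
end;
</pre>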
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
c988296d23e03fb52984bbaf814913f97a75501a
BpcSMScriptLibrary 14
0
466
661
2019-09-11T16:51:23Z
Bishopj
1
Created page with "==Graphics manipulation and conversion routines JPG and BMP== Language: Delphi 7 - 2007 <pre> //////////////////////////////////////////////////////////////////////////////..."
wikitext
text/x-wiki
==Graphics manipulation and conversion routines JPG and BMP==
Language: Delphi 7 - 2007
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// Graphics manipulation routines
//////////////////////////////////////////////////////////////////////////////////////////
// Author: JG Bishop
interface
uses Classes, Types;
//Convert a JPG file to a BMP file
//Returns the BMP file name
function bpcJPG_To_BPM(const JpgFileName : string) : string; overload;
//Convert a JPG file to a BMP stream (replaces original stream)
//Returns the altered input stream
function bpcJPG_To_BPM( MyStream : TStream ) : TStream; overload;
//Convert a BMP file to a JPG file
//1 = low quality, 100 = high quality
//Returns the jpg file name
function bpcBmp_To_Jpg (BmpFileName : String; Comp : Integer) : string; overload;
//Convert a BMP stream to a JPG stream (replace content of input stream)
//1 = low quality, 100 = high quality
//Returns the jpg stream
function bpcBmp_To_Jpg(MyStream : TStream; Comp : Integer) : TStream; overload;
</pre>
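As an illustrative sketch (file names are assumptions), a bitmap might be compressed to JPEG and converted back:
<pre>
// Hedged sketch: convert a bitmap to JPEG at quality 75, then convert
// the resulting JPEG back to a bitmap file.
var
  JpgName, BmpName : string;
begin
  JpgName := bpcBmp_To_Jpg( 'C:\images\chart.bmp', 75 );
  BmpName := bpcJPG_To_BPM( JpgName );
end;
</pre>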
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
9af6c1780c286ef250b280e8cf6620513985ef1c
BpcSMScriptLibrary 15
0
467
662
2019-09-11T16:52:24Z
Bishopj
1
Created page with "==TDBAdvGrid Routines== Language: Delphi 7 - 2007 These routines provide additional manipulation and support for the TMS TDBAdvGrid and TAdvGrid components from TMS Softwa..."
wikitext
text/x-wiki
==TDBAdvGrid Routines==
Language: Delphi 7 - 2007
These routines provide additional manipulation and support for the TMS TDBAdvGrid and TAdvGrid components from TMS Software. They require these components to function. The TMS components are available from TMS Software [http://tmssoftware.com/ http://tmssoftware.com/]
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// TDBAdvGrid Routines
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Classes, Types, DB, Grids, BaseGrid, AdvGrid, DBAdvGrid, AdvObj;
Type
TbpcTMSHiddenRowList = class (TIntList )
public
function MapRowToRealRow( VisRow: Integer; VisRowCount: integer): integer;
end;
// Returns the index of a TDBAdvGrid column given the name. -1 on failure
function bpcGetDBAdvGridColumnIndex( ColCollection: TDBGridColumnCollection; ColName : string) : Integer;
// Returns the TDBAdvGrid column given the name. nil on failure
function bpcGetDBAdvGridColumn( ColCollection: TDBGridColumnCollection; ColName : string) : TDBGridColumnItem;
// Returns the index of a TDBAdvGrid column given the field name. -1 on failure
function bpcGetDBAdvGridColumnByFieldIndex( ColCollection: TDBGridColumnCollection; FieldName : string) : Integer;
// Returns the TDBAdvGrid column given the Field name. nil on failure
function bpcGetDBAdvGridColumnByField( ColCollection: TDBGridColumnCollection; FieldName : string) : TDBGridColumnItem;
// Group and Filter Safe DBAdvGrid Page Mode Reload (reloads a non page mode grid from the dataset, after an optional dataset refresh).
// Reestablishes groups, filters and position.
function bpcDBAdvGridSafeReload( DBAdvGrid1: TDBAdvGrid; WithDataSetRefresh : boolean=False ) : TDBAdvGrid;
// Similar to TDBAdvGrid.Narrow, except that this one adds (logical and) filters rather than replacing filters, so with
// each call the additional filter applies to the last filtered result. To clear the filters simply assign a condition of ''.
// FLastFilter is used to store the last filter string for the current filter. If the new condition varies by only a character from the previous filter,
// it updates the previous filter rather than adding an additional filter. Clearing FLastFilter with each call prevents this behaviour.
function bpcDBAdvGridNarrowDown( MyGrid : TDBAdvGrid; Var FLastFilter : string; ACondition: string; AColumn: integer = -1) : TDBAdvGrid;
// Safe vis row to real row index conversion (This handles incrementally applied filters where the hidden row list is an accumulated list of multiple
// succeeding ordered hidden row lists. It essentially deconstructs that portion of the rows it needs by algorithmically reversing the
// row visibility changes applied to date. The second routine allows a marginal performance saving where multiple accesses are
// required to the same grid filter state. Note: bpcDBAdvGridSafeReload without dataset refresh rebuilds the hidden list from scratch and makes
// the TMS real coord routines safe to use again - as an alternative to this routine.
function bpcDBAdvGridVisRowToRealRow( DBAdvGrid1: TDBAdvGrid; VisRow : integer ) : integer; overload;
function bpcDBAdvGridVisRowToRealRow( DBAdvGrid1: TDBAdvGrid; VisRow : integer; var ListHidden: TbpcTMSHiddenRowList ) : integer; overload;
// Return the values for a column as a comma separated list of rows
function bpcDBAdvGridVisColToRowList( DBAdvGrid1: TDBAdvGrid; const FieldName : string; Sep : string=''; const QuoteMe : string='' ) : string;
function bpcDBAdvGridVisColToQuotedRowList( DBAdvGrid1: TDBAdvGrid; const FieldName : string; Sep : string='' ) : string;
// True if the grid is currently empty of datarows - equivalent to tdatasource is empty.
function bpcDBAdvGridIsEmpty( DBAdvGrid1: TDBAdvGrid ) : boolean;
function bpcDBAdvGridGetFieldList( DBAdvGrid1: TDBAdvGrid; bKeepEmptyFields : boolean=False; StartAtCol : integer=1 ) : TStringDynArray ;
</pre>
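As an illustrative sketch (the grid component, column indexes and filter values are assumptions), stacked filters might be applied and cleared like this:
<pre>
// Hedged sketch: narrow the grid twice (each filter ANDs with the
// previous result), then clear all filters with an empty condition.
var
  FLastFilter : string;
begin
  FLastFilter := '';
  bpcDBAdvGridNarrowDown( DBAdvGrid1, FLastFilter, 'High', 2 ); // filter column 2
  bpcDBAdvGridNarrowDown( DBAdvGrid1, FLastFilter, 'Open', 3 ); // AND column 3
  bpcDBAdvGridNarrowDown( DBAdvGrid1, FLastFilter, '' );        // clear filters
end;
</pre>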
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
dc9919e878c7db534d5920ed823c7f13d28f426c
BpcSMScriptLibrary 16
0
468
663
2019-09-11T16:53:30Z
Bishopj
1
Created page with "==TwwDBGrid Routines== Language: Delphi 7 - 2007 This library provides additional support for the infopower grid from Wol2Wol Software. [http://www.woll2woll.com/ http://w..."
wikitext
text/x-wiki
==TwwDBGrid Routines==
Language: Delphi 7 - 2007
This library provides additional support for the InfoPower grid from Woll2Woll Software.
[http://www.woll2woll.com/ http://www.woll2woll.com/]
<pre>
//////////////////////////////////////////////////////////////////////////////////////////
//////// TwwDBGrid Routines
//////////////////////////////////////////////////////////////////////////////////////////
interface
uses Classes, Types, DB, Grids, Wwdbigrd, Wwdbgrid ;
function bpcDBwwGridGetFieldList( DBAdvGrid1: TwwDBGrid; bKeepEmptyFields : boolean=False; StartAtCol : integer=0 ) : TStringDynArray ;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
83d0b84d42a066bc7f145637d422ed01739fde8d
SpareTemplatePage
0
469
664
2019-09-11T16:54:33Z
Bishopj
1
Created page with "== == Language: Delphi 7 - 2007 <pre> </pre> =BackLinks= {{#dpl: linksto={{FULLPAGENAME}} }}"
wikitext
text/x-wiki
== ==
Language: Delphi 7 - 2007
<pre>
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
003f0fdeb0f7b15f1a7274046aa2910a76043103
BpcStringList
0
470
665
2019-09-11T16:56:23Z
Bishopj
1
Created page with "==TStringlist Name/Value pair manipulation== Language: Delphi 7 - 2007 <pre> interface uses Classes, Types ; type TbpcStringList = class(TStringList) public function Ge..."
wikitext
text/x-wiki
==TStringlist Name/Value pair manipulation==
Language: Delphi 7 - 2007
<pre>
interface
uses Classes, Types ;
type
TbpcStringList = class(TStringList)
public
function GetValue(index : integer) : string;
function ReadValue(index : string; defval : string) : string;
procedure SetValue(Index: Integer; const Value: String);
procedure AddStrArray( strArray : TStringDynArray );
property Value[Index:Integer] : string read GetValue write SetValue;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
d55387eda240f3d7e7ff4cf47494a083f581bb06
BpcDBBookMarkList
0
471
666
2019-09-11T16:57:18Z
Bishopj
1
Created page with "==TbpcBookmarkList - manages dataset bookmark lists== Language: Delphi 7 - 2007 TbpcBookmarkList loads a bookmark list from a TDataSet and supports management of the list i..."
wikitext
text/x-wiki
==TbpcBookmarkList - manages dataset bookmark lists==
Language: Delphi 7 - 2007
TbpcBookmarkList loads a bookmark list from a TDataSet and supports management of the list independent of the TDataset including location, selection, tracking and deletion of underlying bookmarked dataset rows.
<pre>
uses SysUtils, Db, Classes;
type
TbpcBookmarkList = class
private
FList: TStringList;
FDSet: TDataSet;
FCache: TBookmarkStr;
FCacheIndex: Integer;
FCacheFind: Boolean;
FLinkActive: Boolean;
function GetCount: Integer;
function GetCurrentRowSelected: Boolean;
function GetItem(Index: Integer): TBookmarkStr;
procedure SetCurrentRowSelected(Value: Boolean);
procedure StringsChanged(Sender: TObject);
protected
function CurrentRow: TBookmarkStr;
function Compare(const Item1, Item2: TBookmarkStr): Integer;
procedure LinkActive(Value: Boolean);
public
constructor Create(ADSet: TDataSet);
destructor Destroy; override;
procedure Clear; // free all bookmarks
procedure Delete; // delete all selected rows from dataset
function Find(const Item: TBookmarkStr; var Index: Integer): Boolean;
function IndexOf(const Item: TBookmarkStr): Integer;
function Refresh: Boolean;// drop orphaned bookmarks; True = orphans found
property Count: Integer read GetCount;
property CurrentRowSelected: Boolean read GetCurrentRowSelected
write SetCurrentRowSelected;
property Items[Index: Integer]: TBookmarkStr read GetItem; default;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
22c05c2ff0861310a218a50d52ed752991e793c5
BpcADSI
0
472
667
2019-09-11T16:58:31Z
Bishopj
1
Created page with "==Active Directory support through ADSI interface== Language: Delphi 7 - 2007 ===Overview=== Provides active directory support for login and password authentication using..."
wikitext
text/x-wiki
==Active Directory support through ADSI interface==
Language: Delphi 7 - 2007
===Overview===
Provides Active Directory support for login and password authentication using one of two modes - bpcADSIWinNT (WinNT lookup) or bpcADSILdap (LDAP lookup) - as well as discovery of the current user. This component works in a DLL loaded on a server as well as in a desktop application.
Registers the TbpcADSI component in the Delphi IDE.
Example Canonical Strings:
(Used for LDAP authentication): CN=Fred,DC=bishopphillips,DC=com
===IMPORTANT NOTE FOR AUTHENTICATION MODES===
* IF FAuthMode=bpcADSILdap:
Use where user accounts have bad-login lockout enabled.
Although WinNT could perform the entire access check and return the user object in one step given the username and password, it attempts Kerberos first and falls back to NT authentication if that fails. A bad password therefore registers at least two, and possibly three, failed login attempts, which will trip the account lockout flag under normal lockout settings. So the password must be tested via LDAP instead. However, the user object in AD cannot be accessed over LDAP without its canonical (full) name, which means that merely having the username is insufficient. The component therefore authenticates against the Users container under LDAP using the username and password, and, if that succeeds, uses WinNT (with either the cached LDAP login or the launching user's login - it is unclear whether WinNT can see LDAP caches) to access the user object.
* IF FAuthMode=bpcADSIWinNT:
Use where user accounts DO NOT have bad login lockout enabled. Uses WinNT only (Faster).
<pre>
uses SysUtils, Classes, ActiveX, Windows, Types, ComCtrls, ExtCtrls, ActiveDs_TLB, adshlp, oleserver, Variants;
type
TbpcADPassword = record
Expired: boolean;
NeverExpires: boolean;
CannotChange: boolean;
end;
type
TbpcADSIUserInfo = record
UID: string;
UserName: string;
Description: string;
Password: TbpcADPassword;
Disabled: boolean;
LockedOut: boolean;
Groups: string; //CSV
end;
type
TbpcADSIAuthMode = ( bpcADSIWinNT, bpcADSILdap );
TbpcADSI = class(TComponent)
private
FUserName: string;
FPassword: string;
FCurrentUser: string;
FCurrentDomain: string;
FAuthMode : TbpcADSIAuthMode;
FLDAPCanonical : string;
function GetCurrentUserName: string;
function GetCurrentDomain: string;
protected
{ Protected declarations }
public
AnonWinNTError : boolean;
constructor Create(AOwner: TComponent); override;
destructor Destroy; override;
property CurrentUserName: string read FCurrentUser;
property CurrentDomain: string read FCurrentDomain;
function GetUser(Domain, UserName: string; var ADSIUser: TbpcADSIUserInfo): boolean;
function Authenticate(Domain, UserName, Group: string): boolean;
published
property LoginUserName: string read FUserName write FUserName;
property LoginPassword: string read FPassword write FPassword;
property LDAPCanonical : string read FLDAPCanonical write FLDAPCanonical;
property AuthMode : TbpcADSIAuthMode read FAuthMode write FAuthMode;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
c92b24b52919d73c7f1860d32e7bf74904f3c21f
BpcWin32Service
0
473
668
2019-09-11T16:59:33Z
Bishopj
1
Created page with "==Win32 Service Management== Language: Delphi 7 - 2007 Win32 service control - start, stop, status check, keyname enquiry, displayname enquiry, and listing. <pre> uses Ty..."
wikitext
text/x-wiki
==Win32 Service Management==
Language: Delphi 7 - 2007
Win32 service control - start, stop, status check, keyname enquiry, displayname enquiry, and listing.
<pre>
uses Windows, Types, Classes; // Windows provides DWord and the service API
function bpcWin32ServiceStart( sMachine, sService : string ) : boolean;
function bpcWin32ServiceStop( sMachine, sService : string ) : boolean;
function bpcWin32ServiceGetStatus( sMachine, sService : string ) : DWord;
function bpcWin32ServiceStopped( sMachine, sService : string ) : boolean;
function bpcWin32ServiceRunning( sMachine, sService : string ) : boolean;
function bpcWin32ServiceGetKeyName( sMachine, sServiceDispName : string ) : string;
function bpcWin32ServiceGetDisplayName( sMachine, sServiceKeyName : string ) : string;
function bpcWn32ServiceGetList( sMachine : string; dwServiceType, dwServiceState : DWord; slServicesList : TStrings ) : boolean;
const
//
// Service Types
//
SERVICE_KERNEL_DRIVER = $00000001;
SERVICE_FILE_SYSTEM_DRIVER = $00000002;
SERVICE_ADAPTER = $00000004;
SERVICE_RECOGNIZER_DRIVER = $00000008;
SERVICE_DRIVER =
(SERVICE_KERNEL_DRIVER or
SERVICE_FILE_SYSTEM_DRIVER or
SERVICE_RECOGNIZER_DRIVER);
SERVICE_WIN32_OWN_PROCESS = $00000010;
SERVICE_WIN32_SHARE_PROCESS = $00000020;
SERVICE_WIN32 =
(SERVICE_WIN32_OWN_PROCESS or
SERVICE_WIN32_SHARE_PROCESS);
SERVICE_INTERACTIVE_PROCESS = $00000100;
SERVICE_TYPE_ALL =
(SERVICE_WIN32 or
SERVICE_ADAPTER or
SERVICE_DRIVER or
SERVICE_INTERACTIVE_PROCESS);
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
5c998df1c2e4749d56bcd86f47cf1fc5a1d8be46
ExportADOTable
0
474
669
2019-09-11T17:00:42Z
Bishopj
1
Created page with "==ADO Based Table Exporter== Language: Delphi 7 - 2007 Exports the content of a table connected via ADO using various export formats. Registers TExportADOTable <pre> u..."
wikitext
text/x-wiki
==ADO Based Table Exporter==
Language: Delphi 7 - 2007
Exports the content of a table connected via ADO using various export formats.
Registers TExportADOTable
<pre>
uses
Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs,
Db, ADODB;
type
TExportADOTable = class(TADOTable)
private
{ Private declarations }
//TADOCommand component used to execute the SQL exporting commands
FADOCommand: TADOCommand;
protected
{ Protected declarations }
public
{ Public declarations }
constructor Create(AOwner: TComponent); override;
//Export procedures
//"FieldNames" is a comma separated list of the names of the fields you want to export
//"FileName" is the name of the output file (including the complete path)
//if the dataset is filtered (Filtered = true and Filter <> ''), then I append
//the filter string to the sql command in the "where" directive
//if the dataset is sorted (Sort <> '') then I append the sort string to the sql command in the
//"order by" directive
procedure ExportToExcel(FieldNames: string; FileName: string;
SheetName: string; IsamFormat: string);
procedure ExportToHtml(FieldNames: string; FileName: string);
procedure ExportToParadox(FieldNames: string; FileName: string; IsamFormat: string);
procedure ExportToDbase(FieldNames: string; FileName: string; IsamFormat: string);
procedure ExportToTxt(FieldNames: string; FileName: string);
published
{ Published declarations }
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
82cee3e498bc90999adf7301617a79e13c87387f
BpcMSSpellCheck
0
475
670
2019-09-11T17:02:04Z
Bishopj
1
Created page with "==Late Binding MS Word Based Spell Checker for Normal and Rich Text== Language: Delphi 7 - 2007 <pre> // ==================================================================..."
wikitext
text/x-wiki
==Late Binding MS Word Based Spell Checker for Normal and Rich Text==
Language: Delphi 7 - 2007
<pre>
// =============================================================================
// MS Word COM Interface to Spell Check and Synonyms
// Original Version: Mike Heydon Dec 2000
// mheydon@eoh.co.za
// Updated and Expanded: JGBishop 2005,2007,2008
// =============================================================================
uses
Windows,
SysUtils,
Classes,
ComObj,
Dialogs,
Forms,
StdCtrls,
Controls,
Buttons,
Graphics,
ComCtrls;
type
// Event definitions
TbpcMSSpellCheckBeforeCorrection = procedure(Sender : TObject;
MispeltWord : string;
Suggestions : TStrings) of object;
TbpcMSSpellCheckAboutToChange = procedure(Sender : TObject;
FocusControl : TObject;
MispeltWord : string; CorrectedWord : string;
var AllowChange : boolean; Var CancelChanges : boolean) of object;
TbpcMSSpellCheckAfterCorrection = procedure(Sender : TObject;
MispeltWord : string;
CorrectedWord : string) of object;
TbpcMSSpellCheckOnCorrection = procedure(Sender : TObject;
var WordToCorrect : string) of object;
TbpcMSSpellCheckOnHideSelection = procedure(sender : TObject; editcontrol : TObject; OnOrOff : boolean) of object;
TbpcMSSpellCheckOnReadHideSelection = procedure(sender : TObject; editcontrol : TObject; var OnOrOff : boolean) of object;
TbpcMSSpellCheckOnReadText = procedure(sender : TObject; editcontrol : TObject; var StrBuf : string) of object;
// Property types
TbpcMSSpellCheckReplacement = (repDefault,repUser);
TbpcMSSpellCheckLetters = set of char;
TbpcMSSpellCheckLanguage = (wdLanguageNone,wdNoProofing,wdDanish,wdGerman,
wdSwissGerman,wdEnglishAUS,wdEnglishUK,wdEnglishUS,
wdEnglishCanadian,wdEnglishNewZealand,
wdEnglishSouthAfrica,wdSpanish,wdFrench,
wdFrenchCanadian,wdItalian,wdDutch,wdNorwegianBokmol,
wdNorwegianNynorsk,wdBrazilianPortuguese,
wdPortuguese,wdFinnish,wdSwedish,wdCatalan,wdGreek,
wdTurkish,wdRussian,wdCzech,wdHungarian,wdPolish,
wdSlovenian,wdBasque,wdMalaysian,wdJapanese,wdKorean,
wdSimplifiedChinese,wdTraditionalChinese,
wdSwissFrench,wdSesotho,wdTsonga,wdTswana,wdVenda,
wdXhosa,wdZulu,wdAfrikaans,wdArabic,wdHebrew,
wdSlovak,wdFarsi,wdRomanian,wdCroatian,wdUkrainian,
wdByelorussian,wdEstonian,wdLatvian,wdMacedonian,
wdSerbianLatin,wdSerbianCyrillic,wdIcelandic,
wdBelgianFrench,wdBelgianDutch,wdBulgarian,
wdMexicanSpanish,wdSpanishModernSort,wdSwissItalian);
// Main TbpcMSSpellCheck Class
TbpcMSSpellCheck = class(TComponent)
private
FLetterChars : TbpcMSSpellCheckLetters;
FFont : TFont;
FColor : TColor;
FReplaceDialog : TbpcMSSpellCheckReplacement;
FCompletedMessage,
FActive : boolean;
FLanguage : TbpcMSSpellCheckLanguage;
FForm : TForm;
FEbox : TEdit;
FLbox : TListBox;
FCancelBtn,
FChangeBtn : TBitBtn;
FBeforeCorrection : TbpcMSSpellCheckBeforeCorrection;
FAfterCorrection : TbpcMSSpellCheckAfterCorrection;
FOnCorrection : TbpcMSSpellCheckOnCorrection;
FOnHideSelection : TbpcMSSpellCheckOnHideSelection;
FOnReadHideSelection : TbpcMSSpellCheckOnReadHideSelection;
FOnReadTextBuf : TbpcMSSpellCheckOnReadText;
FAboutToChange : TbpcMSSpellCheckAboutToChange;
FRPCErrorCount : integer;
FUseExistingInstance : boolean;
procedure SetFFont(NewValue : TFont);
protected
procedure MakeForm;
procedure CloseForm;
procedure SuggestedClick(Sender : TObject);
public
MsWordApp,
MsSuggestions : OleVariant;
constructor Create(AOwner : TComponent); override;
destructor Destroy; override;
function GetSynonyms(StrWord : string; Synonyms : TStrings) : boolean;
function CheckWordSpelling(StrWord : string;
Suggestions : TStrings) : boolean;
procedure CheckTextSpelling(var StrText : string);
procedure CheckRichTextSpelling(RichEdit : TCustomRichEdit; bLineAdjust : boolean=False);
procedure CheckMemoTextSpelling(Memo : TCustomMemo);
procedure CheckEditTextSpelling(Memo : TCustomEdit);
Function Connect : Boolean;
Function DisConnect : Boolean;
property Active : boolean read FActive;
property LetterChars : TbpcMSSpellCheckLetters read FLetterChars write FLetterChars;
published
property Language : TbpcMSSpellCheckLanguage read FLanguage
write FLanguage;
property CompletedMessage : boolean read FCompletedMessage
write FCompletedMessage;
property Color : TColor read FColor write FColor;
property Font : TFont read FFont write SetFFont;
property BeforeCorrection : TbpcMSSpellCheckBeforeCorrection
read FBeforeCorrection
write FBeforeCorrection;
property AboutToChange : TbpcMSSpellCheckAboutToChange
read FAboutToChange
write FAboutToChange;
property AfterCorrection : TbpcMSSpellCheckAfterCorrection
read FAfterCorrection
write FAfterCorrection;
property OnCorrection : TbpcMSSpellCheckOnCorrection
read FOnCorrection
write FOnCorrection;
property OnHideSelection : TbpcMSSpellCheckOnHideSelection
read FOnHideSelection
write FOnHideSelection;
property OnReadHideSelection : TbpcMSSpellCheckOnReadHideSelection
read FOnReadHideSelection
write FOnReadHideSelection;
property OnReadTextBuf : TbpcMSSpellCheckOnReadText
read FOnReadTextBuf
write FOnReadTextBuf;
property ReplaceDialog : TbpcMSSpellCheckReplacement
read FReplaceDialog
write FReplaceDialog;
property UseExistingInstance : boolean read FUseExistingInstance
write FUseExistingInstance;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
c1f3cf77b1c4b8dfb640427b8b11574ad8c313eb
BpcwwRichEdSpellChck
0
476
671
2019-09-11T17:03:10Z
Bishopj
1
Created page with "==bpcwwRichEdSpellChck - InfoPower RichEdit mod for spell checking== Language: Delphi 7 - 2007 Wol2Wol's richedit component comes witb an MS spell checker version, but it u..."
wikitext
text/x-wiki
==bpcwwRichEdSpellChck - InfoPower RichEdit mod for spell checking==
Language: Delphi 7 - 2007
Woll2Woll's RichEdit component ships with an MS spell checker version, but it uses early binding, which is problematic if the target computer does not have MS Word installed, runs a version earlier than the one built into the component, or the development machine has a different MS Word server library installed.
This version of the W2W RichEdit control has been modified to use the BPC late-binding MS spell check and thesaurus wrapper.
It requires the W2W / InfoPower RichEdit controls. Due to copyright restrictions, the code for this control cannot be provided to non-BPC developers, and it requires a valid developer licence for RichEdit.
Refer [http://www.woll2woll.com/ http://www.woll2woll.com/]
<pre>
uses Windows, Messages, SysUtils, variants, classes, wwriched, ComCtrls, bpcMSSpellCheck, StdCtrls, ExtCtrls;
type
TbpcwwDBRichEditMSWord = class(TwwDBRichEdit)
private
// OrigWin32MajorVersion: integer; reintroduce;
FbpcMSSpellCheck : TbpcMSSpellCheck ;
function Validserver : boolean;
// procedure PopupMenuPopup(Sender: TObject); reintroduce;
public
WDocWin: OleVariant;
FileName:string;
Function MSWordSpellChecker: boolean; override;
Procedure CopyRichEditTo(val: TCustomRichEdit); override;
published
property BPCMSSpellChecker : TbpcMSSpellCheck read FbpcMSSpellCheck write FbpcMSSpellCheck;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
bb6b5f66cd793512ca8826304f3e8f1de4d90069
BPCPageControl
0
477
672
2019-09-11T17:04:15Z
Bishopj
1
Created page with "==BPCPageControl - Modified MS Windows page/tab control to support customised colouring of tabs and borders== Language: Delphi 7 - 2007 The standard MS Page/Tab controller..."
wikitext
text/x-wiki
==BPCPageControl - Modified MS Windows page/tab control to support customised colouring of tabs and borders==
Language: Delphi 7 - 2007
The standard MS page/tab control does not allow tab and border colouring outside OS control; the colouring defaults to grey or to theme-controlled settings. This version uses the page colours to paint the control's tabs and borders. Otherwise it looks and behaves exactly like the standard MS tab/page control.
<pre>
uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
Dialogs, CommCtrl, ComCtrls;
type
TBPCPageControl = class(TPageControl)
private
FThickFrame: Boolean;
FBPCFrame : Boolean;
FFontInactive : TColor;
procedure SetThickFrame(const Value: Boolean);
procedure SetBPCFrame(const Value: Boolean);
protected
procedure WndProc(var Msg: TMessage); override;
procedure CreateParams(var Params: TCreateParams); override;
procedure WMPaint(var Message: TWMPaint); message WM_PAINT;
procedure Change; override;
public
constructor Create(AOwner: TComponent); override;
procedure CustomDrawPageControl;
published
property FontInactive: TColor read FFontInactive write FFontInactive default clGray;
property ThickFrame: Boolean read FThickFrame write SetThickFrame default true;
property BPCFrame: Boolean read FBPCFrame write SetBPCFrame default True;
// property BPCTabStyle: Boolean read FBPCFrame write SetBPCFrame default False;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
89da6047255999880a9e30167de8096920c7e250
WSXDCompressUtilities1
0
478
673
2019-09-11T17:05:17Z
Bishopj
1
Created page with "==WSXDCompressUtilities1 - Streaming Compression using LHA== Language: Delphi 7 - 2007 <pre> uses SysUtils, Types, Registry, Classes, complhs ; // Compress a binary strin..."
wikitext
text/x-wiki
==WSXDCompressUtilities1 - Streaming Compression using LHA==
Language: Delphi 7 - 2007
<pre>
uses SysUtils, Types, Registry, Classes, complhs ;
// Compress a binary string stream into a binary array of data
function Compress(var myOutputStream : TStringStream; const DataPacket : string ) : TByteDynArray;
// Compress a binary string stream into a binary array of data
function CompressGenStream(var myOutputStream : TStringStream; const DataPacket : TStream ) : TByteDynArray;
// Decompress a binary array of data into a string
function DeCompressS(const DataPacket : TByteDynArray ) : string;
// Decompress a string into a string
function DeCompressSS(const DataPacket : string ) : string;
// Decompress a binary array of data into a binary array
function DeCompressB(const DataPacket : TByteDynArray ) : TByteDynArray;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
b7d331a23d081747762d972fa895c1053958e02e
BpcDBGrid
0
479
674
2019-09-11T17:06:25Z
Bishopj
1
Created page with "==bpcDBGrid - Modified InfoPower WW grid to support column sorting == Language: Delphi 7 - 2007 Another modified W2W InfoPower component. This time to support "click on tit..."
wikitext
text/x-wiki
==bpcDBGrid - Modified InfoPower WW grid to support column sorting ==
Language: Delphi 7 - 2007
Another modified W2W InfoPower component, this time supporting "click on title" column sorting in DB-aware grids.
<pre>
uses
Windows, Messages, SysUtils, Classes, Controls, Grids, Wwdbigrd, Wwdbgrid, Forms,
ADODB, DB, Dialogs;
type
TbpcDBGrid = class(TwwDBGrid)
private
{ Private declarations }
FTitleSort: boolean;
FslColumns : TStringList;
FOriginalCommandText : string;
procedure MyOnTitleButtonClick(Sender: TObject; AFieldName: string);
function ValidSortField(AFieldName: string) : boolean;
function ValidFieldDataType(AFieldDataType : TDataType): Boolean;
protected
{ Protected declarations }
procedure SortGrid(AFieldName : string);
procedure StoreOriginalCommandText;
public
{ Public declarations }
procedure CancelSort;
published
{ Published declarations }
constructor Create(AOwner: TComponent); override;
destructor Destroy; override;
property TitleSort : boolean read FTitleSort write FTitleSort default True;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
093e8c04cfebe8e3ffaabcc628956d19672fe482
RMSQLAdminLib
0
480
675
2019-09-11T17:07:29Z
Bishopj
1
Created page with "==RMSQLAdminLib - BPC DataBase Desktop Support Library== Language: Delphi 7 - 2007 This library does most of the heavy lifting for the shipped RM DB Manager shipped with BPC..."
wikitext
text/x-wiki
==RMSQLAdminLib - BPC DataBase Desktop Support Library==
Language: Delphi 7 - 2007
This library does most of the heavy lifting for the RM DB Manager shipped with BPC RM/SM systems as a replacement for Database Desktop, Enterprise Manager, or SQL Management Studio.
<pre>
uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
Dialogs, DB, ADODB, StdCtrls;
type
TbpcRMDBReportProgress = procedure(bDisplay: boolean; DisplayString: string; Step, OfSteps: Integer; Msg: string; var bCancelMe: boolean) of object;
TbpcRMDBReportError = function(sMSG : string) : boolean of object; // True if stop required
function bpcRMSQLGetSystemRegistryValue(const AKeyPath, AKey : String) : String;
function bpcRMSQLGetRegistryValue(const AKey : String) : String;
procedure bpcRMSQLSetRegistryValue(const AKey, AValue : String);
function bpcRMSQLGetRegistryValueWithDefault(const AKey, ADefaultValue : String) : String;
procedure GetFileListing(const AFolderName : String; var AStringList : TStringList);
// Replace special characters with spaces
// Remove sql comment lines
function ReadSQLFileContents(const AFileName: String): String;
// Modify the ado connection, and return the OLD catalog name;
function bpcSQLModifyConnectionCatalogName(AADOConnection: TADOCOnnection; const ANewCatalogName: String) : string;
procedure bpcMSSSQLRunSQLScript(AADOCommand: TADOCommand; const ASQLScriptText: String; ReportProgress : TbpcRMDBReportProgress = nil; bIgnoreErrors: boolean = FALSE; ReportError : TbpcRMDBReportError=nil);
function bpcSQLServerIs2005Up( AADOConnection: TADOConnection ) : boolean;
function bpcMSSQLBackupDatabase( AADOConnection: TADOConnection; bIs2005 : boolean; sDatabaseName, sBackupFileName, sBackupDeviceName : String; var ErrMsg : string ) : boolean;
function bpcMSSQLRestoreDatabase( AADOConnection: TADOConnection; bIs2005 : boolean; sDatabaseName, sBackupDeviceName : String; var ErrMsg : string ) : boolean;
function bpcMSSQLCopyDatabase( AADOConnection: TADOConnection; bIs2005 : boolean; sOptionalMSSQLBackupDirectory : string; sNewDBName, sDatabaseToCopy : String; var ErrMsg : string ) : boolean;
// Kill all db processes assigned to a database so database has no locks on it (in order to restore over it)
// zzz Incomplete
procedure bpcMSSQLKillDatabaseProcesses(const ADatabaseName: String);
function bpcMSSQLConnectToDB( AADOConnection: TADOConnection; const FDatabaseName : string ) : TADOConnection;
function bpcMSSQLConnectToMasterDB( AADOConnection: TADOConnection ) : TADOConnection;
function bpcSQLMakeADOCommand( AADOConnection: TADOConnection ) : TADOCommand;
procedure bpcMSSQLGetDBProperties(AADOConnection: TADOConnection; const ADBToCopyName : String; out ASourceDBDataFileName, ASourceDBLogFileName : String);
// Drops the named backup device
function bpcMSSQLDropDevice( AADOConnection: TADOConnection; sBackupDeviceName : String; var ErrMsg : string ) : boolean;
// Create builtin SQL Accounts on server
function bpcMSSQLCreateRMSQLAccounts( AADOConnection: TADOConnection; sriskmanuserpwd : string; var ErrMsg : string ) : boolean;
// Attempt a repair of builtin RM SQL Accounts on server given a connection string and the sa password:
// Extracts the current riskmanuser password from the connection string,
// Connects as sa and attempts to create SQL accounts with that password on the server
// Adds standard users and standard roles to the database if missing and assigns roles to standard accounts
// Ties the standard users to the standard sql accounts if simply orphaned.
// Works on both 2000 and 2005
function bpcMSSQLRMRepairAccountOnRMConnection( AdoConnectionString, sapwd : string; var ErrMsg : string ) : boolean;
resourcestring
rsRegKey = 'Software\Bishop Phillips Consulting\RiskDBManager';
rsMSSQL2000SetupKey='SOFTWARE\Microsoft\MSSQLServer\Setup';
rsMSSQL2005SetupKeyOptA='SOFTWARE\Microsoft\Microsoft SQL Server\SQLEXPRESS\Setup'; //This is not really a key that holds much - here just in case we missed a set up option
rsMSSQL2005MSSQL1SetupKey='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup';
rsMSSQL2005MSSQL1Key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer';
rsMSSQL2000MSSQLKey='SOFTWARE\Microsoft\MSSQLServer\MSSQLServer' ;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
026f4c9da16ddeae8ecf6817fffb2c1a7b688969
DataTranADO
0
481
676
2019-09-11T17:08:57Z
Bishopj
1
Created page with "==DataTranADO - Data transfer library for moving data between a central DB and many remote DB's == Language: Delphi 7 - 2007 DataTranADO - Data transfer library for moving d..."
wikitext
text/x-wiki
==DataTranADO - Data transfer library for moving data between a central DB and many remote DB's ==
Language: Delphi 7 - 2007
DataTranADO is a data transfer library for moving data between a central DB and many remote DBs, with support for synchronising across databases when they cannot share a direct ADO connection. This library is the heart of the CRIS remote connection module. It provides a way for many remote desktop systems to synchronise across remote networks - even across dial-up lines - where an ADO connector cannot be established between the databases.
Because of this approach, the library can be used to synchronise data between differing versions, or even types, of databases, provided the database at each end can use an ADO connector. It essentially converts all actions to SQL statements.
Registers TDataTranADODataSet on the Palette.
<pre>
uses
Windows, Messages, SysUtils, Classes, DB, ADODB, Dialogs, Variants, JclStrings;
type
{ Data transfer scripting --> Parsing options.
Options to only evaluate user variables only (ignore executing stored procedures and resolving table field values) }
erExpressionRestrictions = (eetAll, eetUserVarsOnly);
{ Which fields from a table get included in a SQL update or insert clause:
1. All fields (such as an insert)
2. Writeable only (such as a remote user only uploading changes to fields which they have publication access to)
3. Writeable and exclude keys (such as a set clause which doesn't require the key fields)
4. ** No longer in use **
Description:
All except exclude fields which are exclusive to the interactive remote user and the key fields (such as an update sent
from central to a remote user which must exclude fields which the remote user has exclusive access to - don't want to
overwrite pending updates which have not yet been transferred - and the key fields)
Problem:
Central was only sending updates for fields to inspectors which central and licensees have publication access to OR the field is for multi-use.
Changes made by one inspector were being transferred to central BUT central was then not transferring the change to other inspectors.
This change only applies to central where it has special rules; it must act as an information broker.
5. All fields and exclude keys (applicable to central acting as an information broker) }
isfIncludeSQLFields = (isfAll, isfWriteableOnly, isfWriteableAndExcludeKeys, isfExcludeExclusiveToRemoteUserAndKeys, isfAllAndExcludeKeys);
// Data Transfer scripting keywords -> VAR[], FIELD[], EXEC[]
dtKeywordType = (dtVar, dtField, dtSPExec);
TEvalExpression = class(TObject)
private
FADOQuery : TADOQuery;
protected
public
constructor Create(AOwner: TComponent; AADOConnection: TADOConnection); reintroduce;
destructor Destroy; override;
function Eval(const AExpression : String; out AErrorFlag : Boolean;
out AErrorMessage : String) : Boolean;
end;
TDataSharingRules = class(TObject)
private
FADOConnection : TADOConnection;
FDT_CentralBrokerFlag: Boolean;
FDT_UserNo: String;
FDT_UserType: String;
FDT_SharedTableName: String;
FScript: String;
FSourceDataSet: TDataSet;
procedure SetDT_CentralBrokerFlag(const Value: Boolean);
procedure SetDT_SharedTableName(const Value: String);
procedure SetDT_UserNo(const Value: String);
procedure SetDT_UserType(const Value: String);
procedure SetScript(const Value: String);
function Peek(const AScript : String; APosition : Integer; const AKeyword : String) : Boolean;
procedure SetSourceDataSet(const Value: TDataSet);
procedure SetADOConnection(const Value: TADOConnection);
protected
function GetKeyValue(const AScript : String; APosition : Integer; out ASubstitutedLength : Integer; AKeyWordType : dtKeywordType) : String;
function SubstituteKeyVariable(const AVariableName : String) : String;
function SubstituteKeyField(const AFieldName : String) : String;
function ExecDTStoredProc(const AScript : String) : Boolean;
function ExtractStoredProcNameFromScript(const AScript : String) : String;
public
property ADOConnection : TADOConnection read FADOConnection write SetADOConnection;
property DT_SharedTableName : String read FDT_SharedTableName write SetDT_SharedTableName;
property DT_UserNo : String read FDT_UserNo write SetDT_UserNo;
property DT_UserType : String read FDT_UserType write SetDT_UserType;
property DT_CentralBrokerFlag : Boolean read FDT_CentralBrokerFlag write SetDT_CentralBrokerFlag;
property Script : String read FScript write SetScript;
property SourceDataSet : TDataSet read FSourceDataSet write SetSourceDataSet;
procedure Clear;
function GetDTExpression(out AUserVarsOnly_InUse : Boolean; AExpressionRestrictions : erExpressionRestrictions = eetAll) : String;
function EvaluateUser(const AEvalExpression : String; out AErrorEncountered : Boolean) : Boolean;
function GetPrimaryKeyFieldsList : String;
procedure ReadDataSharingProperties(out ASharedTable, AMastertListTable, APublisher : Boolean;
out AAfterScrollPublicationScript : String; out AOriginator, APublishAllFields : Boolean);
function ApplyRecordLocking : Boolean;
procedure GetSharedFieldRules(APublishFieldsList, AReadOnlyFieldsList : TStringList;
AEvalUserVarsOnly : Boolean; out AFieldScriptUseUserVarsOnly : Boolean);
procedure AssignTableSubscribers(AUserNoList, AUserTypeList : TStringList);
procedure LogTransferRecord(AUpdateKind : TUpdateKind; const AKeyFieldNames, AKeyFieldValues : String; const ARestrictTransferFieldNames : String = '');
function GetDTStoredProc_ExecExpresssion(const AScript : String;
ASourceDataSet: TDataSet; ATransferRecordsDataSet : TDataSet) : String;
end;
TTransferData = class(TObject)
private
FADOConnection: TADOConnection;
FDT_CentralBrokerFlag: Boolean;
FDT_UserType: String;
FDT_UserNo: String;
FSavedShortDateFormat : String;
FSavedLongDateFormat : String;
FDT_TransferToRemoteUserType: String;
FDT_TransferToRemoteUserNo: String;
FDataSharingRules: TDataSharingRules;
FSQLWhereKeyClause : String;
procedure SetDT_CentralBrokerFlag(const Value: Boolean);
procedure SetDT_UserNo(const Value: String);
procedure SetDT_UserType(const Value: String);
procedure SetADOConnection(const Value: TADOConnection);
procedure SetDT_TransferToRemoteUserNo(const Value: String);
procedure SetDT_TransferToRemoteUserType(const Value: String);
procedure SetDataSharingRules(const Value: TDataSharingRules);
protected
procedure AssignFieldNames(const ASharedTableName : String; ADataSet : TDataSet;
AFieldNameList : TStringList; AIncludeSQLFields : isfIncludeSQLFields; ATransferRestrictedFieldList : String = '');
procedure AssignFieldValues(ADataSet : TDataSet; AFieldNameList, AFieldValueList : TStringList);
function GetSQLValidStringValueFromField(AField : TField) : String;
function GetSQLValidStringValueFromString(const AFieldAsString : String) : String;
function GetDeleteAllSQL(const ASharedTableName : String) : String;
function GetInsertAllSQL(const ASharedTableName : String) : String;
function GetInsertSQL(const ASharedTableName : String; ADataSet : TDataSet; AIncludeSQLFields : isfIncludeSQLFields) : String;
function GetUpdateSQL(const ASharedTableName : String; ADataSet : TDataSet; AIncludeSQLFields : isfIncludeSQLFields;
ATransferRestrictedFieldList : String) : String;
function GetUploadSQL(const ASharedTableName : String; ASourceDataSet: TDataSet; ATransferRecordsDataSet : TDataSet) : String;
function GetSQLWhereClauseFromKeyFields(const ASharedTableName : String; ADataSet : TDataSet) : String;
function GetSQLWhereClauseFromTransferRecordFields(const AKeyFieldNames, AKeyFieldValues : String) : String;
function GetSetSQLClause(AFieldNameList : TStringList; ADataSet : TDataSet) : String;
function GetUploadTimeStamp(const ASharedTableName : String; ADataSet : TDataSet) : String;
function GetRecordLocking(const ASharedTableName : String; ADataSet : TDataSet) : String;
function GetTransferRecordAsUploadedState(ARecordNumber, APostedState : Integer) : String;
procedure ResetPartUploadedTransferRecords;
function GetExecStoredProcDistributeChanges(const ASharedTableName: String;
ASourceDataSet: TDataSet; ATransferRecordsDataSet : TDataSet) : String;
function GetRunSQLFieldName(const ASharedTableName : String) : String;
procedure ExecuteLocalSQL(const ASQLText : String);
// System Date-Time variable management
procedure SetStandardTransferDateTimeFormat;
procedure ResetDateTimeFormat;
// TDataSharingRules object to read data sharing properties from the database <data transfer> system tables.
property DataSharingRules : TDataSharingRules read FDataSharingRules write SetDataSharingRules;
public
constructor Create;
destructor Destroy; override;
property ADOConnection : TADOConnection read FADOConnection write SetADOConnection;
property DT_UserNo : String read FDT_UserNo write SetDT_UserNo;
property DT_UserType : String read FDT_UserType write SetDT_UserType;
property DT_CentralBrokerFlag : Boolean read FDT_CentralBrokerFlag write SetDT_CentralBrokerFlag;
property DT_TransferToRemoteUserNo : String read FDT_TransferToRemoteUserNo write SetDT_TransferToRemoteUserNo;
property DT_TransferToRemoteUserType : String read FDT_TransferToRemoteUserType write SetDT_TransferToRemoteUserType;
// Transaction Management
function BeginDBTransaction : Boolean;
function CommitDBTransaction : Boolean;
function RollbackDBTransaction : Boolean;
// Date-Time System Checking
function GetLocalSystemDateTime(out LocalDateTime : TDateTime) : Boolean;
function ValidateLocalDateTime(CentralDateTime : TDateTime) : Boolean; // Run on remote systems
// Data Transfer
procedure Transfer_Import(out AErrorMessage : String; out AErrorFlag : Boolean; Const AImportString : AnsiString);
function Transfer_ProcessUpload(out AErrorMessage : String; out AErrorFlag : Boolean) : AnsiString;
function Transfer_ProcessDBUpgradeUpload(out AErrorMessage : String; out AErrorFlag : Boolean) : AnsiString; // Run on central system only
procedure Transfer_PostUploaded(out AErrorMessage : String; out AErrorFlag : Boolean);
procedure Transfer_UnPostUploaded(out AErrorMessage : String; out AErrorFlag : Boolean);
function RecordCentralTransferHistory(AIgnoreDateSynchFlag, ATransferIncompleteFlag, AErrorFlag : Boolean;
const AErrorDescript, ASecurityStatus, ATransferDetails : String) : Boolean; // Run on central system only
end;
TDataTranADODataSet = class(TADODataSet)
private
FDataTran_SharedTable : Boolean;
FDataTran_PublishAllFields: Boolean;
FDataTran_Publisher: Boolean;
FAfterScrollPublicationScript : String;
FEvaluateFieldRulesAfterScroll : Boolean;
FDataTran_Originator: Boolean;
FDataTran_PublishFieldsList: TStringList;
FDataSharingRules: TDataSharingRules;
FDataTran_ReadOnlyFieldsList: TStringList;
FDT_CentralBrokerFlag: Boolean;
FDT_UserNo: String;
FDT_UserType: String;
FDT_SharedTableName: String;
FDataTran_MastertListTable: Boolean;
FSavedShortDateFormat : String;
FSavedLongDateFormat : String;
FDataTran_RestrictTransferFields: Boolean;
FDataTran_RestrictTransferFieldNames: String;
procedure SetDataTran_SharedTable(const Value: Boolean);
procedure SetDataTran_Originator(const Value: Boolean);
procedure SetDataTran_PublishAllFields(const Value: Boolean);
procedure SetDataTran_Publisher(const Value: Boolean);
procedure SetDataTran_PublishFieldsList(const Value: TStringList);
procedure SetDataSharingRules(const Value: TDataSharingRules);
procedure SetDataTran_ReadOnlyFieldsList(const Value: TStringList);
procedure SetDT_CentralBrokerFlag(const Value: Boolean);
procedure SetDT_SharedTableName(const Value: String);
procedure SetDT_UserNo(const Value: String);
procedure SetDT_UserType(const Value: String);
procedure SetDataTran_MastertListTable(const Value: Boolean);
procedure SetAfterScrollPublicationScript(const Value: String);
procedure SetDataTran_RestrictTransferFieldNames(
const Value: String);
procedure SetDataTran_RestrictTransferFields(const Value: Boolean);
{ Private declarations }
protected
{ Protected declarations }
procedure DoBeforeOpen; override;
procedure DoOnNewRecord; override;
procedure DoAfterPost; override;
procedure DoBeforeDelete; override;
procedure DoBeforeEdit; override;
procedure DoAfterOpen; override;
procedure DoAfterScroll; override;
procedure AssignRecordKeyProperties(out AKeyFieldList, AKeyFieldValues : String);
procedure SetStandardTransferDateTimeFormat;
procedure ResetDateTimeFormat;
public
{ Public declarations }
constructor Create(AOwner: TComponent); override;
destructor Destroy; override;
procedure ReReadSharedFieldRules;
// Is the table shared?
property DataTran_SharedTable : Boolean read FDataTran_SharedTable write SetDataTran_SharedTable;
// Is the table a master list?
property DataTran_MastertListTable : Boolean read FDataTran_MastertListTable write SetDataTran_MastertListTable;
// Can I write to the table?
property DataTran_Publisher : Boolean read FDataTran_Publisher write SetDataTran_Publisher;
// If the condition under which I can write to a table depends upon field values when I scroll to different records, then what is the condition script?
// If this variable is blank then do not evaluate if I can publish after scrolling records.
property AfterScrollPublicationScript : String read FAfterScrollPublicationScript write SetAfterScrollPublicationScript;
// Can I insert new records?
property DataTran_Originator : Boolean read FDataTran_Originator write SetDataTran_Originator;
// Does publish all fields apply to this table?
property DataTran_PublishAllFields : Boolean read FDataTran_PublishAllFields write SetDataTran_PublishAllFields;
// If publish all fields does not apply then which fields can I edit?
property DataTran_PublishFieldsList : TStringList read FDataTran_PublishFieldsList write SetDataTran_PublishFieldsList;
// If publish all fields does not apply then which fields are read only?
property DataTran_ReadOnlyFieldsList : TStringList read FDataTran_ReadOnlyFieldsList write SetDataTran_ReadOnlyFieldsList;
// TDataSharingRules object to read data sharing properties from the database <data transfer> system tables.
property DataSharingRules : TDataSharingRules read FDataSharingRules write SetDataSharingRules;
property DataTran_RestrictTransferFields : Boolean read FDataTran_RestrictTransferFields write SetDataTran_RestrictTransferFields;
property DataTran_RestrictTransferFieldNames : String read FDataTran_RestrictTransferFieldNames write SetDataTran_RestrictTransferFieldNames;
published
{ Published declarations }
// My user number, EG: Licensee number, inspector region ID
property DT_UserNo : String read FDT_UserNo write SetDT_UserNo;
// My user type, EG: CEN, INS, LIC
property DT_UserType : String read FDT_UserType write SetDT_UserType;
// Am I the central broker?
property DT_CentralBrokerFlag : Boolean read FDT_CentralBrokerFlag write SetDT_CentralBrokerFlag;
// The table name used to read shared table name properties
property DT_SharedTableName : String read FDT_SharedTableName write SetDT_SharedTableName;
end;
const
CKeyword_VAR = 'VAR';
CKeyword_FIELD = 'FIELD';
CKeywordRunSP_EXEC = 'EXEC';
CRemoteTimeStamp = 'remote_time_stamp';
CCentralTimeStamp = 'central_time_stamp';
CLicenseeTimeStamp = 'licensee_time_stamp';
CInspectorTimeStamp = 'inspector_time_stamp';
CNotUploadedState = 0;
CPartUploadedState = 1;
CPostUploadedState = 2;
CInspectorUserType = 'INS';
CLicenseeUserType = 'LIC';
CDateTimeToleranceValue = 1/12; // 2 hours tolerance above and below (a 4 hour window)
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
4849e368b6f1aaa85a2fbc65e6d2f0e86dcc139a
BpcMailLib1
0
482
677
2019-09-11T22:51:06Z
Bishopj
1
Created page with "==DNS and Mail Authority resolvers and Email address syntax check== Language: Delphi 7 - 2007 Locates the DNS server associated with the curent computer and looks up the DN..."
wikitext
text/x-wiki
==DNS and Mail Authority resolvers and Email address syntax check==
Language: Delphi 7 - 2007
Locates the DNS server associated with the current computer and queries it to resolve a domain address and/or find the address of the server with mail authority, as well as providing basic email address syntax checking.
<pre>
// Simple DNS resolver - returns a string containing the resolution of the domain/address provided in resolveme (or '' if it cannot
// be resolved) according to the dnsrec request string containing a DNS RR record request,
// EG: 'MX' or 'A' or 'NS' etc. dnshost is the address of a DNS host to use.
function bpcDNSResolve( dnshost, resolveme, dnsrec : string ) : string;
// Return the real Email exchange for this email address
function bpcResolveEmailHost( dnsHost, EmailAddress : string ) : string;
// Get the list of DNS servers from the local machine as a comma delimited string
// Returns '' if failed/none listed OR Raises exceptions if out of memory or no sizing data received
// Uses GetNetworkParams winapi call.
function bpcGetLocalDNS : string;
// Returns True if valid email address syntax
function bpcIsEmailAddressSyntax( emailaddress : string ) : boolean;
</pre>
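A minimal usage sketch of the routines above. The variable names and the example domain are illustrative, not part of the unit:
<pre>
var
  DNSList, DNSHost, MailHost: string;
begin
  if not bpcIsEmailAddressSyntax('user@example.com') then
    Exit; // reject malformed addresses before hitting the network
  DNSList := bpcGetLocalDNS; // comma-delimited, eg '192.168.0.1,192.168.0.2'
  if DNSList = '' then
    Exit; // no DNS server configured on this machine
  DNSHost := Copy(DNSList, 1, Pos(',', DNSList + ',') - 1); // first listed server
  MailHost := bpcResolveEmailHost(DNSHost, 'user@example.com');
  // Or resolve an arbitrary RR type directly:
  // MailHost := bpcDNSResolve(DNSHost, 'example.com', 'MX');
end;
</pre>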
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
1fd95a047755dd5737935f48c323e0e5c6963ba4
SMTPIndySendMail
0
483
678
2019-09-11T22:52:29Z
Bishopj
1
Created page with "==Smart email sender== Language: Delphi 7 - 2007 Provides a wrapper for Indy10 email library and supports html & text, attachments and embedded images in email. The routi..."
wikitext
text/x-wiki
==Smart email sender==
Language: Delphi 7 - 2007
Provides a wrapper for the Indy 10 email library and supports HTML and text, attachments and embedded images in email.
The routine automatically generates a text version of HTML emails and sets the MIME type to multipart/alternative; for anchor (A) links it includes the HREF portion in the text version when the descriptive portion differs from the HREF.
The routine is therefore SpamAssassin safe.
HTML emails are Base64 MIME encoded, while the text part is UTF-8 and Base64 MIME encoded.
Requires: TMailMessage.
<pre>
uses EMailUnit, IdEMailAddress, Messages, SysUtils, Classes;
type
//This class will be used to store information about one picture for embedded images in emails.
TbpcHTMLImageItem = class (TCollectionItem)
public
Stream: TMemoryStream;
Name, ContentType: AnsiString; //TRVAnsiString
constructor Create(Collection: TCollection); override;
destructor Destroy; override;
end;
//Collection of TbpcHTMLImageItem for embedded images in emails
TbpcHTMLImagesCollection = class (TCollection)
private
function GetItem(Index: Integer): TbpcHTMLImageItem;
procedure SetItem(Index: Integer; const Value: TbpcHTMLImageItem);
public
constructor Create;
property Items[Index: Integer]: TbpcHTMLImageItem read GetItem write SetItem; default;
end;
// Main Mail Sender
function MailToSMTP(AMailMessage : TMailMessage; out AErrorMessage : String;
ASMTPPort : Integer; const ASMTPHostAddress, ASMTPUserID, ASMTPUserPWD, ASMTPFromAddress, ASMTPFromName : String; ASMTPReplyToEmail : string=''; ASMTPReplyToName : string='';
Attachments : TStringList=nil; HTMLImages : TbpcHTMLImagesCollection=nil ) : Boolean;
</pre>
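A usage sketch, assuming only the MailToSMTP signature above. The SMTP host, port and credentials are placeholder values:
<pre>
var
  Msg: TMailMessage;
  ErrMsg: string;
begin
  Msg := TMailMessage.Create;
  try
    Msg.SendTo := 'user@example.com';
    Msg.Subject := 'Test message';
    Msg.ContentType := 'html'; // short form, expanded by TMailMessage (see EMailUnit)
    Msg.Body := '<p>Hello</p>';
    // Attachments and HTMLImages default to nil when not needed
    if not MailToSMTP(Msg, ErrMsg, 25, 'smtp.example.com',
                      'smtpuser', 'smtppwd', 'sender@example.com', 'Sender Name') then
      ; // handle/log ErrMsg here
  finally
    Msg.Free;
  end;
end;
</pre>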
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
9d896b52eba2c4b05c3d607fdd6e20bfeb2f252a
OutlookSendMail
0
484
679
2019-09-11T22:53:45Z
Bishopj
1
Created page with "==OutlookSendMail - Sends email via Outlook == Language: Delphi 7 - 2007 Use as an alternative for desktop apps when SMTP is not available. <pre> uses Outlook8, OleServer..."
wikitext
text/x-wiki
==OutlookSendMail - Sends email via Outlook ==
Language: Delphi 7 - 2007
Use as an alternative for desktop apps when SMTP is not available.
<pre>
uses Outlook8, OleServer, sysutils, EMailUnit;
function MailToOutlook(AMailMessage : TMailMessage; out AErrorMessage : String) : Boolean;
function SendTestMessageOutlook(out AErrorMessage : String) : Boolean;
</pre>
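The Outlook path takes the same TMailMessage, so switching transports is a one-line change. A sketch, with illustrative values:
<pre>
var
  Msg: TMailMessage;
  ErrMsg: string;
begin
  Msg := TMailMessage.Create;
  try
    Msg.SendTo := 'user@example.com';
    Msg.Subject := 'Status report';
    Msg.Body := 'Sent via the local Outlook profile.';
    if not MailToOutlook(Msg, ErrMsg) then
      ; // ErrMsg describes the failure (eg Outlook unavailable)
  finally
    Msg.Free;
  end;
end;
</pre>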
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
0a6b0c7629fb1fb13ad530cfc4930d5ff198b0a6
EMailUnit
0
485
680
2019-09-11T22:55:25Z
Bishopj
1
Created page with "==Email Message Wrapper Class== Language: Delphi 7 - 2007 Simple message wrapper with builtin styles bug fix. <pre> type TMailConnectType = (mctNotAssigned, mctSMTP, mct..."
wikitext
text/x-wiki
==Email Message Wrapper Class==
Language: Delphi 7 - 2007
Simple message wrapper with a built-in styles bug fix.
<pre>
type
TMailConnectType = (mctNotAssigned, mctSMTP, mctOutlook);
TMailMessage = class
private
FBCC: String;
FCC: String;
FSendTo: String;
function GetBCC : String;
function GetCC : String;
function GetSendTo : String;
function GetMIMEType : String; //JGB 21/3/07 added mime handling
procedure SetMIMEType( sValue : String); //JGB 21/3/07 added mime handling
protected
// Parse the input E-mail recipient string and replace ';' with ',' character.
function GetRecipientValue(const AInputValue : String) : String;
public
From,
Subject,
Body,
MIMEType : String; // JGB 21/3/07 - access this field directly to find out the MIMEType
procedure FixHTMLStyleLineStart; // JGB 22/3/07 - Hack to repair a transmission error that results in the leading '.' of a line being dropped in styles during transmission of HTML
procedure ClearVariableFields;
property SendTo : String read GetSendTo write FSendTo;
property CC : String read GetCC write FCC;
property BCC : String read GetBCC write FBCC;
property ContentType : String read GetMIMEType write SetMIMEType; //JGB 21/3/07 added mime handling. Automatically translates
//'plain','html','xml','rtf', 'pdf' and allows anything else through
///as-is
end;
const
CMailConnectNotAssigned = 'Mail connection is not assigned';
</pre>
The wrapper uses a simplified MIME type specifier. To assist, the conversion code is reproduced below:
<pre>
//JGB 21/3/07 added mime handling
// Simple routine to map short strings to full mime types - just simplifies enduser handling a bit
// Automatically translates 'plain','html','xml','rtf' and allows anything else through as-is
function TMailMessage.GetMIMEType: String;
begin
result := '';
case bpcWSIndexOfList( MIMEType, ['','text/plain','text/html','text/xml','text/richtext','application/pdf'] ) of
-1 : result := MIMEType; // Unknown Type - return it anyway
0 : result := 'plain';
1 : result := 'plain';
2 : result := 'html';
3 : result := 'xml';
4 : result := 'rtf';
5 : result := 'pdf';
end;
end;
//JGB 21/3/07 added mime handling
// Simple routine to map short strings to full mime types - just simplifies enduser handling a bit
// Automatically translates 'plain','html','xml','rtf' and allows anything else through as-is
procedure TMailMessage.SetMIMEType(sValue: String);
begin
case bpcWSIndexOfList( sValue, ['','plain','html','xml','rtf','pdf'] ) of
-1 : MIMEType := sValue; // Unknown Type - allow direct assignment anyway
0 : MIMEType := 'text/plain';
1 : MIMEType := 'text/plain';
2 : MIMEType := 'text/html';
3 : MIMEType := 'text/xml';
4 : MIMEType := 'text/richtext';
5 : MIMEType := 'application/pdf';
end;
end;
</pre>
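The effect of the two-way mapping in practice (Msg is an assumed TMailMessage instance):
<pre>
// ContentType accepts either the short or the full form; reading it back
// always yields the short form for the known types.
Msg.ContentType := 'html';       // stored internally as 'text/html'
Msg.ContentType := 'text/xml';   // also accepted; reads back as 'xml'
Msg.ContentType := 'image/png';  // unknown type: passed through as-is
</pre>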
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
d9fe9b99b198e71a522c43373148f10be38d76ed
SendMailThreadUnit
0
486
681
2019-09-11T22:56:47Z
Bishopj
1
Created page with "==SendMailThreadUnit - Threaded emailer for sending emails in the background for SMTPIndySendMail== Language: Delphi 7 - 2007 <pre> uses SysUtils, Classes, EMailUnit, DB,..."
wikitext
text/x-wiki
==SendMailThreadUnit - Threaded emailer for sending emails in the background for SMTPIndySendMail==
Language: Delphi 7 - 2007
<pre>
uses SysUtils, Classes, EMailUnit, DB, Provider, DBClient, ADODB;
type
TSendMailThread = class(TThread)
private
FMailConnectType : TMailConnectType;
FWorkMailMessage: TMailMessage;
public
FConfigSMTP_PortNo : Integer;
FConfigSMTP_Host : String;
FConfigSMTP_UserID : String;
FConfigSMTP_UserPWD : string;
FConfigSMTP_FromAddress : String;
FConfigSMTP_FromName : String;
ErrorMessage : String;
SendResult : Boolean;
procedure Execute; override;
constructor Create(MailConnectType : TMailConnectType; WorkMailMessage: TMailMessage; myConfigSMTP_PortNo : Integer; myConfigSMTP_Host : String; myConfigSMTP_UserID : String; myConfigSMTP_UserPWD : string; myConfigSMTP_FromAddress : String; myConfigSMTP_FromName : String );
end;
</pre>
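A usage sketch based only on the constructor signature above. WorkMailMessage is a previously populated TMailMessage; the SMTP settings are placeholders, and the suspend/resume behaviour depends on the constructor's implementation (not shown):
<pre>
var
  Thread: TSendMailThread;
begin
  Thread := TSendMailThread.Create(mctSMTP, WorkMailMessage, 25,
    'smtp.example.com', 'smtpuser', 'smtppwd',
    'sender@example.com', 'Sender Name');
  // Fire and forget; inspect SendResult/ErrorMessage in an OnTerminate
  // handler if the outcome matters to the caller.
  Thread.FreeOnTerminate := True;
end;
</pre>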
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
38e58a88d9df595deddd8ee8da1b0b4238e94dea
BpcHTMLEditDesigner
0
487
682
2019-09-11T22:58:19Z
Bishopj
1
Created page with "==HTML Editor== Language: Delphi 7 - 2007 This is a straight implementation of Lindso Larsen's wrapper for MSHTML. Refer [http://www.euromind.com/iedelphi http://www.eurom..."
wikitext
text/x-wiki
==HTML Editor==
Language: Delphi 7 - 2007
This is a straight implementation of Per Lindsø Larsen's wrapper for MSHTML. Refer [http://www.euromind.com/iedelphi http://www.euromind.com/iedelphi]
This implementation has substantially been superseded by the Bsalsa EmbeddedWB.
<pre>
//***********************************************************
// TbpcHTMLEditDesigner ver 1.00 (Jan. 14, 2000) *
// *
// Freeware Component *
// by *
// Per Lindsø Larsen *
// per.lindsoe@larsen.dk *
// *
// Documentation and updated versions: *
// *
// http://www.euromind.com/iedelphi *
//***********************************************************
unit bpcHTMLEditDesigner;
interface
uses
Mshtml_Ewb {mshtml_tlb}, Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs;
const
SID_SHTMLEditServices: TGUID = (D1: $3050f7f9; D2: $98b5; D3: $11cf; D4: ($bb, $82, $00, $AA, $00, $bd, $ce, $0b));
type
TbpcHTMLPreHandleEvent = function(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult of object;
TbpcHTMLPostHandleEvent = function(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult of object;
TbpcHTMLTranslateAccelerator = function(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult of object;
TbpcHTMLPostEditorEventNotify = function(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult of object;
TbpcHTMLEditDesigner = class(TComponent, IUnknown, IHtmlEditDesigner)
private
FPreHandleEvent: TbpcHTMLPreHandleEvent;
FPostHandleEvent: TbpcHTMLPostHandleEvent;
FTranslateAccelerator: TbpcHTMLTranslateAccelerator;
FPostEditorEventNotify: TbpcHTMLPostEditorEventNotify;
{ Private declarations }
protected
{ Protected declarations }
function PreHandleEvent(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult; stdcall;
function PostHandleEvent(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult; stdcall;
function TranslateAccelerator(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult; stdcall;
function PostEditorEventNotify(inEvtDispId: Integer; const pIEventObj: IHTMLEventObj): HResult; stdcall;
public
{ Public declarations }
published
property OnPreHandleEvent: TbpcHTMLPreHandleEvent read FPreHandleEvent write FPreHandleEvent;
property OnPostHandleEvent: TbpcHTMLPostHandleEvent read FPostHandleEvent write FPostHandleEvent;
property OnPostEditorEventNotify: TbpcHTMLPostEditorEventNotify read FPostEditorEventNotify write FPostEditorEventNotify;
property OnTranslateAccelerator: TbpcHTMLTranslateAccelerator read FTranslateAccelerator write FTranslateAccelerator;
{ Published declarations }
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
127f4d26dcf9d836fca1dace53775432818c47ed
BpcHTMLEditHost
0
488
683
2019-09-11T23:00:12Z
Bishopj
1
Created page with "==HTML Edit Host== Language: Delphi 7 - 2007 <pre> //*********************************************************** // TEdithost ver 1.00 (Jan. 14, 2000)..."
wikitext
text/x-wiki
==HTML Edit Host==
Language: Delphi 7 - 2007
<pre>
//***********************************************************
// TEdithost ver 1.00 (Jan. 14, 2000) *
// *
// by *
// Per Lindsø Larsen *
// per.lindsoe@larsen.dk *
// *
// Documentation and updated versions: *
// *
// http://www.euromind.com/iedelphi *
//***********************************************************
unit bpcHTMLEditHost;
interface
uses
{mshtml_tlb} Mshtml_Ewb, Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs;
const
SID_SHTMLEditHost: TGUID = (D1: $3050F6A0; D2: $98B5; D3: $11CF; D4: ($BB, $82, $00, $AA, $00, $BD, $CE, $0B));
type
TbpcHTMLSnapRect = function (const pIElement: IHTMLElement; var prcNew: tagRECT; eHandle: _ELEMENT_CORNER): HResult of object;
TbpcHTMLEditHost = class(TComponent, IUnknown, IHTMLEditHost)
private
{ Private declarations }
FSnapRect : TbpcHTMLSnapRect;
FEnabled : Boolean;
protected
{ Protected declarations }
function SnapRect(const pIElement: IHTMLElement; var prcNew: tagRECT; eHandle: _ELEMENT_CORNER): HResult; stdcall;
public
{ Public declarations }
published
{ Published declarations }
property OnSnapRect : TbpcHTMLSnapRect read FSnapRect write FSnapRect;
property Enabled : Boolean read FEnabled write FEnabled;
end;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
0bf30641d2e5f2ee2e2179ac1ed27d951161fcde
BpcDBEmbeddedWB
0
489
684
2019-09-11T23:01:09Z
Bishopj
1
Created page with "==DB Aware Embedded Browser== Language: Delphi 7 - 2007 This is a BPC modification to the TEmbeddedWB from Balsa [http://www.bsalsa.com/ http://www.bsalsa.com/]. The stand..."
wikitext
text/x-wiki
==DB Aware Embedded Browser==
Language: Delphi 7 - 2007
This is a BPC modification to the TEmbeddedWB from Bsalsa [http://www.bsalsa.com/ http://www.bsalsa.com/]. The standard component implements an advanced wrapper for the IE browser API. This version from BPC adds DB-aware functionality to the component, including drag/drop and MHTML display, and enables reading and writing of the HTML content to display from a database blob field. Think of it as a DB-aware RichView memo, only for HTML content instead of RTF content.
This version is exclusive to BPC, and is available on request with source from BPC. You will need the Bsalsa freeware components as well.
It registers two components - a URL Registry and the DB-aware embedded browser.
Works with IE 6, 7 and 8.
<pre>
uses
{$IFDEF DELPHI_6_UP}Variants, {$ENDIF}
SysUtils, Classes, Windows, Types, Controls, OleCtrls, SHDocVw_EWB, EwbCore, EmbeddedWB, DBCtrls,
DB, Messages, ComObj, ActiveX, UrlMon;
const
Class_bpcEWBDBNSHandler: TGUID = '{7D0847DC-7367-4EA0-AC30-2F39BFD9C862}';
EWNameSpace = 'ewbdbfield';
type
TbpcEWBGetMimeExtProc= procedure ( sender : TObject; Var MimeExt : string; var bShowBlank : boolean ) of object;
TbpcEWBGetDocName= procedure ( sender : TObject; Var DocName : string ) of object;
TbpcEWBGetDocURL= procedure ( sender : TObject; Var URL : string ) of object;
EDBEWBEditError = class(Exception);
TbpcDBEmbeddedWB = class(TEmbeddedWB)
private
{ Private declarations }
FDataLink: TFieldDataLink;
FReadOnly: boolean;
FFocused: Boolean;
FDefMimeExt : string;
FOnGetMimeExt : TbpcEWBGetMimeExtProc;
FOnGetDocName : TbpcEWBGetDocName;
FOnGetDocURL : TbpcEWBGetDocURL;
FDBConnected : boolean;
procedure ActiveChange(Sender: TObject);
procedure DataChange(Sender: TObject);
procedure EditingChange(Sender: TObject);
function GetDataField: string;
function GetDataSource: TDataSource;
function GetField: TField;
function GetFieldText: string;
procedure SetDataField(const Value: string);
procedure SetDataSource(const Value: TDataSource);
procedure CMGetDataLink(var Message: TMessage); message CM_GETDATALINK;
function GetReadOnly: Boolean;
procedure SetReadOnly(const Value: Boolean);
function GetDBConnected: Boolean;
procedure SetDBConnected(const Value: Boolean);
procedure UpdateData(Sender: TObject);
protected
{ Protected declarations }
function GetLabelText: string; //##JB Delete?
procedure Notification(AComponent: TComponent;
Operation: TOperation); override;
procedure ValidateEdit;
procedure ValidateError;
public
{ Public declarations }
LastURL : string;
constructor Create(AOwner: TComponent); override;
destructor Destroy; override;
procedure SetFocused(Value: Boolean);
procedure Loaded; override;
function ExecuteAction(Action: TBasicAction): Boolean; override;
function UpdateAction(Action: TBasicAction): Boolean; override;
Procedure DataRefresh;
property Field: TField read GetField;
// From EmbeddedWB
property Modified;
{$IFDEF USE_EwbTools}
property SelLength; // By M.Grusha
property SelText; // By M.Grusha
property SelTextHTML; // By M.Grusha
{$ENDIF}
// End From EmbeddedWB
published
{ Published declarations }
property DataField: string read GetDataField write SetDataField;
property DataSource: TDataSource read GetDataSource write SetDataSource;
property ReadOnly: Boolean read GetReadOnly write SetReadOnly default True;
property DBConnected: boolean read GetDBConnected write SetDBConnected default True;
// From EmbeddedWB
property About;
property HostCSS;
property HostNS;
property EnableMessageHandler;
property DisabledPopupMenuItems;
property DisableErrors;
property DialogBoxes;
property OnCloseQuery;
property OnShowDialog;
property PrintOptions;
property ProxySettings;
property ShortCuts;
property UserAgent;
property VisualEffects;
// End From EmbeddedWB
property DefMimeExt : string read FDefMimeExt write FDefMimeExt ; // the default mime extension '.html'
property OnGetMimeExt : TbpcEWBGetMimeExtProc read FOnGetMimeExt write FOnGetMimeExt; // Allow override of the default DefMimeExt
property OnGetDocName : TbpcEWBGetDocName read FOnGetDocName write FOnGetDocName; // Allow override of the default generated document name
property OnGetDocURL : TbpcEWBGetDocURL read FOnGetDocURL write FOnGetDocURL; // Allow override of db field retrieval with an URL
end;
// Name space handler - required so that MHT files can be read and converted
TbpcEWBDBNSHandler = class(TComObject, IInternetProtocol)
private
Url: string;
Written, TotalSize: Integer;
ProtSink: IInternetProtocolSink;
DataStream: IStream;
DataField : TField;
RegisterURLIndex : Integer;
protected
// IInternetProtocol Methods
function Start(szUrl: PWideChar;
OIProtSink: IInternetProtocolSink;
OIBindInfo: IInternetBindInfo;
grfPI, dwReserved: DWORD): HResult; stdcall;
function Continue(const ProtocolData:
TProtocolData): HResult; stdcall;
function Abort(hrReason: HResult;
dwOptions: DWORD): HResult; stdcall;
function Terminate(dwOptions: DWORD): HResult; stdcall;
function Suspend: HResult; stdcall;
function Resume: HResult; stdcall;
function Read(pv: Pointer; cb: ULONG;
out cbRead: ULONG): HResult; stdcall;
function Seek(dlibMove: LARGE_INTEGER;
dwOrigin: DWORD; out libNewPosition: ULARGE_INTEGER): HResult; stdcall;
function LockRequest(dwOptions: DWORD): HResult; stdcall;
function UnlockRequest: HResult; stdcall;
// Helper functions
procedure GetDataFromDataField(Url: string);
public
end;
// The EWB URL registry is needed to allow the namespace handler to find which field
// is supplying the data for a given URL. As the namespace handler is a run-time,
// factory-made object there is no direct way to pass the identity of the initiating EWB
// to the factory object, so the namespace handler does not know which field to use to
// get the data requested.
// This registry object holds the global registry of URLs and Fields. It will be most reliable
// when the EWBDB is allowed to manufacture the URL without interference by the user,
// other than by setting the mime extension through the call-back function.
// This should be placed on the main form. Only one of these should exist in the application.
TbpcEWBDBURLRegistry = class(TComponent)
public
Registry : TStringlist;
constructor Create(AOwner: TComponent); override;
destructor Destroy; override;
procedure RegisterURL( URL : string; Field : TField);
procedure UnRegisterURL( URL : string ); overload;
procedure UnRegisterURL( Index : integer ); overload;
procedure ClearRegistry;
function FindField( URL : string; var Index: Integer ) : TField;
end;
// Return a pointer to the global URL registry
function bpcEWBURLRegistry : TbpcEWBDBURLRegistry;
// Make a global URL registry and return a pointer to it
function bpcEWBURLRegistryInit : TbpcEWBDBURLRegistry;
// Destroy the global URL Registry
procedure bpcEWBURLRegistryFree;
// Assign a URL Register to the global registry variable for future use
// WARNING: All Accesses from this point will be directed to the new registry
// WARNING: DOES NOT destroy the previous registry - this must be done manually
procedure bpcEWBURLRegistryAssign( value : TbpcEWBDBURLRegistry);
</pre>
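The registry's lifecycle, sketched from the declarations above. The URL (which uses the EWNameSpace scheme) and the field name are illustrative:
<pre>
// At application start-up: create the single global registry.
bpcEWBURLRegistryInit;
// When a document is generated: tie its URL to the blob field supplying the data,
// so the namespace handler can locate it.
bpcEWBURLRegistry.RegisterURL('ewbdbfield://doc1.html',
  DataSet.FieldByName('HTMLBody'));
// When the document is no longer displayed:
bpcEWBURLRegistry.UnRegisterURL('ewbdbfield://doc1.html');
// At shutdown:
bpcEWBURLRegistryFree;
</pre>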
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
c9bc5a3bf0ed37b168707fb58dcc6cf494cfd677
BpcDBEmbeddedWB INIT
0
490
685
2019-09-11T23:02:38Z
Bishopj
1
Created page with "==Factory initialiser for bpcDBEmbeddedWB== Language: Delphi 7 - 2007 <pre> procedure bpcInitialiseDBEmbeddedWBHandlerFactory; procedure bpcFinaliseDB..."
wikitext
text/x-wiki
==Factory initialiser for bpcDBEmbeddedWB==
Language: Delphi 7 - 2007
<pre>
procedure bpcInitialiseDBEmbeddedWBHandlerFactory;
procedure bpcFinaliseDBEmbeddedWBHandlerFactory;
</pre>
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
935961f9ee57265d3a8f0a242e0f1a96efd9fe3a
GUIDEx
0
491
686
2019-09-11T23:04:06Z
Bishopj
1
Created page with "==GUIDEx - Control for manipulating GUID's== Language: Delphi 7 - 2007 TGuidEx - Delphi class for manipulating Guid values [http://delphi.about.com/library/weekly/aa022205..."
wikitext
text/x-wiki
==GUIDEx - Control for manipulating GUIDs==
Language: Delphi 7 - 2007
TGuidEx - Delphi class for manipulating Guid values
[http://delphi.about.com/library/weekly/aa022205a.htm http://delphi.about.com/library/weekly/aa022205a.htm]
A Guid type represents a 128-bit integer value. The TGuidEx class exposes class (static) methods that help operate on GUID values and TGuidField database field types.
Author: Zarko Gajic
<pre>
interface
uses SysUtils;
type
TGuidEx = class
class function NewGuid : TGuid;
class function EmptyGuid : TGuid;
class function IsEmptyGuid(Guid : TGuid) : boolean;
class function ToString(Guid : TGuid) : string;
class function ToQuotedString(Guid : TGuid) : string;
class function FromString(Value : string) : TGuid;
class function EqualGuids(Guid1, Guid2 : TGuid) : boolean;
end;
</pre>
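Since every method is a class method, no instance is needed. A short sketch using only the declarations above:
<pre>
var
  G: TGuid;
  S: string;
begin
  G := TGuidEx.NewGuid;                 // fresh 128-bit value
  if not TGuidEx.IsEmptyGuid(G) then
    S := TGuidEx.ToString(G);           // text form, eg '{....-....}'
  // Round-trips through its string representation:
  if TGuidEx.EqualGuids(G, TGuidEx.FromString(S)) then
    ; // always true for a valid GUID
end;
</pre>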
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
dec8eac313911a538393a411e8dc8f44f7d943dd
ParseRequest
0
492
687
2019-09-11T23:05:03Z
Bishopj
1
Created page with "==TrwRequestContent - Improved Multipart/Form-data handling in Delphi == Language: Delphi 7 Refer: [http://www.winwright.ca/index.html http://www.winwright.ca/index.html]..."
wikitext
text/x-wiki
==TrwRequestContent - Improved Multipart/Form-data handling in Delphi ==
Language: Delphi 7
Refer:
[http://www.winwright.ca/index.html http://www.winwright.ca/index.html]
This unit can be used to parse out data returned from
html forms with ENCTYPE="multipart/form-data" and Method=POST
Do NOT use on any other encoding.
This unit exists because the TWebRequest class in Delphi does not
provide correct handling for multipart data. These helper
classes provide this handling.
==Description:==
Create an instance of TrwRequestContent passing to the
constructor either the complete content from a TWebRequest
object or the TWebRequest instance itself. In the latter case
TrwRequestContent will take care of making sure all content
has been retrieved from the client.
TrwRequestContent will first parse the boundary string used to
delimit each form item, then use that to parse the content of
all the individual fields. For each field parsed, that specific
content is passed to the constructor of a TrwRequestItem which
then parses it further to pull out the individual elements. For
most fields this simply consists of Name and Content. For
multi-select listboxes, the content will contain the choices
separated by semi-colons (;). For images, the FileName and
ContentType properties will also be provided. The Content will
be the actual image data and can be directly saved to file as
the appropriate type (e.g. .jpg or .gif).
Once created, you can iterate the list of names using the
FieldName property, individual TrwRequestItems using the
Field property, or get the entire TStringList containing the names and
TrwRequestItem objects via the Fields property. FieldCount and
ContentLength are also available.
Freeing the TrwRequestContent object will free all the
TrwRequestItem objects.
<pre>
{
Copyright © 2000 Winwright Inc. (Canada) All rights reserved.
This unit may be freely used and distributed by anyone for use in
any application commercial or otherwise as long as these comments
including copyright are kept intact.
This unit can be used to parse out data returned from
html forms with ENCTYPE="multipart/form-data" and Method=POST
Do NOT use on any other encoding.
Reason for unit is the TWebRequest class in Delphi does not
provide correct handling for multipart data. These helper
classes provide this handling.
Description:
Create an instance of TrwRequestContent passing to the
constructor either the complete content from a TWebRequest
object or the TWebRequest instance itself. In the latter case
TrwRequestContent will take care of making sure all content
has been retrieved from the client.
TrwRequestContent will first parse the boundary string used to
delimit each form item, then use that to parse the content of
all the individual fields. For each field parsed, that specific
content is passed to the constructor of a TrwRequestItem which
then parses it further to pull out the individual elements. For
most fields this simply consists of Name and Content. For
multi-select listbozes, the content will contain the choices
separated by semi-colons (;). For images, the FileName and
ContentType properties will also be provided. The Content will
be the actual image data and can be directly saved to file as
the appropriate type (e.g. .jpg or .gif).
Once created, you can iterate the list of names using the
FieldName property, individual TrwRequestItems using the
Field property, or get the entire TStringList containing the names and
TrwRequestItem objects via the Fields property. FieldCount and
Contentlength are also available.
Freeing the TrwRequestContent object will free all the
TrwRequestItem objects.
}
interface
uses Classes, SysUtils, HTTPApp;
type
{
TrwRequestItem:
- Contains data about a single item returned from an html form.
- There's no need to create these manually, they are created for
you by calling the constructor of the TrwRequestContent class.
}
TrwRequestItem = class
private
FName: string;
FContentType: string;
FFileName: string;
FContent: string;
FContentLength: integer;
public
constructor Create(const AContent: string);
procedure AddValue(const AContent: string);
property Name: string read FName;
property ContentType: string read FContentType;
property FileName: string read FFileName;
property Content: string read FContent;
property ContentLength: integer read FContentLength;
end;
{
TrwRequestContent:
- Passed either the Content property of a TWebRequest class,
or an instance of a TWebRequest class, will parse out the
individual fields.
}
TrwRequestContent = class
private
FList: TStrings;
FBoundary: string;
FContentLength: cardinal;
FContent: string;
procedure ClearList;
procedure ParseFields(const data: string);
function GetFieldCount: integer;
function GetField(const index: string): TrwRequestItem;
function GetFieldValue(const index: string): string;
function GetName(index: integer): string;
function GetNames: TStrings;
public
constructor Create(req: TWebRequest); overload;
constructor Create(const AData: string); overload;
destructor Destroy; override;
property ContentLength: cardinal read FContentLength;
property Content: string read FContent;
property FieldCount: integer read GetFieldCount;
property Field[const index: string]: TrwRequestItem read GetField;
property FieldValue[const index: string]: string read GetFieldValue;
property FieldName[index: integer]: string read GetName;
property FieldNames: TStrings read GetNames;
end;
</pre>
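A usage sketch based on the interface above (the web module, the action-handler name and the file naming are illustrative assumptions, not part of the original unit):
<pre>
uses Classes, SysUtils, HTTPApp; // plus the unit containing TrwRequestContent

procedure TWebModule1.UploadActionAction(Sender: TObject;
  Request: TWebRequest; Response: TWebResponse; var Handled: Boolean);
var
  Content: TrwRequestContent;
  Item: TrwRequestItem;
  i: integer;
  fs: TFileStream;
begin
  // Only for multipart/form-data POSTs - do NOT use on any other encoding
  Content := TrwRequestContent.Create(Request);
  try
    for i := 0 to Content.FieldCount - 1 do
    begin
      Item := Content.Field[Content.FieldName[i]];
      if Item.FileName <> '' then
      begin
        // File upload: Item.Content holds the raw data
        fs := TFileStream.Create(ExtractFileName(Item.FileName), fmCreate);
        try
          fs.WriteBuffer(PChar(Item.Content)^, Item.ContentLength);
        finally
          fs.Free;
        end;
      end
      else
        // Ordinary field: name and value
        Response.Content := Response.Content +
          Item.Name + ' = ' + Item.Content + '<br/>';
    end;
  finally
    Content.Free; // frees all TrwRequestItem objects too
  end;
  Handled := True;
end;
</pre>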
=BackLinks=
{{#dpl: linksto={{FULLPAGENAME}} }}
969c627f1656890d8e70f1353577f97ac0097add
Category:Management Science
14
493
688
2019-09-11T23:23:21Z
Bishopj
1
Created page with "Articles in this category of the BPC RiskWiki cover general topics in management theory and practice. The category aims to consolidate all articles covering items of manageme..."
wikitext
text/x-wiki
Articles in this category of the BPC RiskWiki cover general topics in management theory and practice. The category aims to consolidate all articles covering items of management theory, so such articles are drawn from the entire topic range of the BPC RiskWiki - including Governance subjects.
9894cfda0f67a3a05383038c221f84f88d010bab
Category:Business Process Reengineering
14
494
689
2019-09-11T23:24:31Z
Bishopj
1
Created page with "Articles about the techniques of business process engineering and reengineering. This covers methods of charting organisations, of modeling processes and of process tuning an..."
wikitext
text/x-wiki
Articles about the techniques of business process engineering and reengineering. This covers methods of charting organisations, of modeling processes and of process tuning and design.
c18a9f084908fa7cacff580824e4cd140cf6c4d6
Category:Internal Audit
14
495
690
2019-09-11T23:30:10Z
Bishopj
1
Created page with "Articles relating to Internal Audit and its associated support functions and skills. These articles include opinion pieces, position papers, policy and procedure manuals, tech..."
wikitext
text/x-wiki
Articles relating to Internal Audit and its associated support functions and skills. These articles include opinion pieces, position papers, policy and procedure manuals, technical training manuals, and various other resources.
4bdee38533cb40926e47fb59886714d73fb25e4b
Category:Internal Audit - RIAM
14
496
691
2019-09-11T23:31:13Z
Bishopj
1
Created page with "Articles relating to the BPC Rational Internal Audit Method and its associated support functions and skills. These articles include opinion pieces, position papers, policy and..."
wikitext
text/x-wiki
Articles relating to the BPC Rational Internal Audit Method and its associated support functions and skills. These articles include opinion pieces, position papers, policy and procedure manuals, technical training manuals, and various other resources.
1d3320c77bb6119df2efacb4e18b0453ba904b09
Category:BPC SurveyManager Web Client Manual
14
497
692
2019-09-11T23:34:49Z
Bishopj
1
Created page with "The BPC SurveyManager Web Client Manual provides step by step instructions on the use of the web client for the management of survey enterprise, regions and organisations, the..."
wikitext
text/x-wiki
The BPC SurveyManager Web Client Manual provides step-by-step instructions on the use of the web client for the management of survey enterprises, regions and organisations, the management of users and responders, and the management of surveys from creation, through deployment, publication, distribution, tracking and reporting.
The BPC SurveyManager web client is one of a number of software clients designed to work with the BPC SurveyManager engine for management of all things to do with surveys.
<noinclude>
</noinclude>
d15098a02822a897b1ec28dda442e8f9851ab6ec
Category:Mergers and Acquisitions
14
498
693
2019-09-11T23:38:56Z
Bishopj
1
Created page with "Articles on mergers and acquisitions"
wikitext
text/x-wiki
Articles on mergers and acquisitions
ff68e2d435e73109d6c6c52edcca896520a9f092
Category:Risk Management - Applied Cases
14
499
694
2019-09-11T23:41:28Z
Bishopj
1
Created page with "Articles on specific risk management real world topics."
wikitext
text/x-wiki
Articles on specific risk management real world topics.
866bd044cfe17f34bf0c03cee79aabc998184158
Category:Stakeholder Community Network Model
14
500
695
2019-09-12T02:28:20Z
Bishopj
1
Created page with "This category contains articles that comprise the theory of stakeholder community networks and its analytical models."
wikitext
text/x-wiki
This category contains articles that comprise the theory of stakeholder community networks and its analytical models.
9ae79f75ae2d57be6b5f5be3ddf8e5ad109d779b
The Stakeholder Community Network Model
0
288
696
504
2019-09-12T02:32:21Z
Bishopj
1
wikitext
text/x-wiki
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2019 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Reengineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid to late 1990's and reflected my own search for a paradigm for online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the fore-runners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole of government reform agenda, the tail end of the TQM experiment, the shift from paper to online work flow both intra- and inter-business, the rise of risk management, the progressive adoption of balanced score cards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990's), the rise of online portals, peer-managed corporate forums, application service providers, enterprise scale ERP and CRM systems, and web based B2B systems, and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory premises that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, there are some communities that, because of their survival through multiple generations or over multiple business cycles, are effectively permanent. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary non competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
The more mature the community, the more clearly these roles are differentiated and actively operating. The longer a community is to remain stable, the more important it is that the duties implied in these roles be fulfilled.
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is that it will interact as community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and may have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community will transition to an internal, contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly the community identity, roles and rules are defined. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably, at around the same time and in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming, with self-nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied it); others may be agreed through shared experience. Sharing of common interests and the need to improve the predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows handshake agreements may need to be formally agreed and recorded, and individuals will be formally allocated roles and leadership agreed. Along the way as disagreements arise (like who should bat first) dispute resolution mechanisms will be required.
Thus a community has been formed and has gradually self-organised. If the initial casual group fails to ever define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree its meeting place and meeting times it will probably not achieve the second game. If it fails to identify its membership, establish an identity (and therefore a brand) and fulfil all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of these will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note, when we look at the permanent communities within many organisations, that several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown much beyond the oversight of their founders, splitting into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities
# a custodian of information and provider of infrastructure for communities
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities some intersecting the organisation, some not - union(s), professional bodies, schools (if staff have school age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed back and feed forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance on the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, the identification and labelling of existing implicit and explicit communities - and of the physical and virtual flows between them - against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally facilitates many different and potentially divergent simultaneous strategies while painting a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities over which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Second, the model looks to the mechanisms for facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but an organisation will also usually need some form of communities formed around skills and interests (like, at the very least, consuming or providing something) in order to help it achieve its purpose.
Within each community formed around shared interests or skills is a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship, and by excluding inter-member interaction it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community, supply roles span such things as marketing assistance (reviews, discussions and forum participation), support assistance in peer help spaces, and even product improvement and testing, such as in software Beta programmes. On the supplier and partner side, community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community-standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modelled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation (IE: the groups cannot operate with complete autonomy as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise-wide collection of staff injured in forklift truck accidents, or a worldwide extranet of ECL policy advisers, or suppliers and corporate buyers for raw materials... or any one of a long list of possible organisation specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So in a sense resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may concern the provision or supply of information, goods or services, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
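This digraph view can be made concrete. The following is an illustrative sketch only - the class, method and field names are our assumptions, not part of the model - with communities as nodes and functional dependencies as labelled directed edges:

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderNetwork:
    """Minimal digraph of a stakeholder community network (names assumed)."""
    communities: dict = field(default_factory=dict)  # name -> set of member resources
    links: list = field(default_factory=list)        # (provider, consumer, function)

    def add_community(self, name, members=()):
        self.communities[name] = set(members)

    def add_dependency(self, provider, consumer, function):
        # Directed edge: `provider` supplies `function` to `consumer`.
        # Each edge is a candidate service level agreement.
        self.links.append((provider, consumer, function))

# Example: the supplier community provides raw materials to the workforce community.
net = StakeholderNetwork()
net.add_community("suppliers", {"acme_pty"})
net.add_community("workforce", {"a.smith", "b.jones"})
net.add_dependency("suppliers", "workforce", "raw materials supply")
```

Because each edge carries its functional attribute, enumerating the edge list enumerates the first layer of candidate service level agreements in the organisation.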
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is neither practical nor possible to achieve in practice. We cannot know, and thus enumerate, every possible stakeholder community, as each resource, and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain, is potentially a community.
Another way of viewing completeness is to test that all members of the stakeholder domain are also members of one or more of the communities in the network. This network is then complete in terms of an organisation's resource coverage.
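This coverage test is easy to state mechanically. The sketch below (the function and variable names are our own illustration) flags any resource in the stakeholder domain that belongs to no community:

```python
def uncovered_resources(stakeholder_domain, communities):
    """Return resources in the stakeholder domain that belong to no community.

    `communities` maps community name -> iterable of member resources.
    An empty result means the network is complete in terms of the
    organisation's resource coverage.
    """
    covered = set()
    for members in communities.values():
        covered |= set(members)
    return set(stakeholder_domain) - covered

# Example: resource "d" is a stakeholder but sits in no community,
# so this network fails the resource coverage test.
communities = {"clients": {"a", "b"}, "workforce": {"b", "c"}}
gaps = uncovered_resources({"a", "b", "c", "d"}, communities)
```

In practice such gaps would either be assigned to an existing community or, per the model, fall into the public community by default.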
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint-ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing from the net providers to the net consumers within the community; this is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modelled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure, you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter-productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome such as "quality improvement" or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
==Step 2. Identifying and Defining the Community Enablement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can - in other words, "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to organisations objectives, mandate and purpose as they change
*mapping inter-community workflows, testing for and identifying duplicated communities, duplicated flows, under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self-organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools and specialised application or service needs.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
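To make the equipping step concrete, a registrar might hold one record per community covering the items listed above. The following dataclass is purely a hypothetical sketch - every field name is our assumption, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class CommunityCharter:
    """One registrar record per community (field names are assumptions)."""
    name: str
    charter: str                                      # stated purpose
    budget: float = 0.0
    performance_measures: list = field(default_factory=list)
    operating_rules: str = ""                         # the community "constitution"
    core_members: list = field(default_factory=list)
    decision_model: str = ""
    shared_tools: list = field(default_factory=list)  # common infrastructure

# The registrar keeps the central register; facilitators create entries.
register = {}
entry = CommunityCharter(name="injured-staff",
                         charter="Peer support for staff injured at work")
register[entry.name] = entry
```

A standard record like this is what makes it "fast and efficient to implement and equip new communities and retire existing ones": retiring a community is deleting (or archiving) its entry.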
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top level communities there are, the better the efficiency gains in the entire model will be. Unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects are likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members we aim to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's theory of competitive advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and specifically the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable community and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
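The rule that every participant falls into at least one of the eight communities, with the public as the catch-all, can be sketched as follows (the membership mapping and function name are our own illustration, not part of SCNM03 itself):

```python
SCNM03 = ("clients", "customers", "suppliers", "partners",
          "custodians", "workforce", "governance", "public")

def classify(memberships, resource):
    """Return the SCNM03 communities a resource belongs to.

    `memberships` maps community name -> set of resources. A resource
    with no explicit membership falls into "public" - the community of
    everyone else - so classification never fails and the model stays
    well-formed.
    """
    found = {c for c, members in memberships.items()
             if c in SCNM03 and resource in members}
    return found or {"public"}

# Membership is not exclusive: a resource can sit in several communities.
memberships = {"clients": {"r1"}, "workforce": {"r1", "r2"}}
```

Note that `classify` returns a set, reflecting the model's rule that community membership is not exclusive, while the "public" default captures the strategic aim of moving its members into better-managed communities.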
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with time and the degree of influence the organisation (the meta-community) has in the specific community being examined. This influence will vary over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where the timeframes being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees - and each of these in turn as comprised of even smaller communities - say divisions, units and ultimately individuals - we see that the more we subdivide the group the closer we get to a community of one member: the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is him or her self. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfils all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate timeframe the individual is still entirely responsible for each action with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, and 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors a member of one or more of the other communities. We are therefore encouraged to both define our service offering away from competition and structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lies in the public community.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are enough specific differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets, might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things they were manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community where they are firmly focused to the customer (note - not client) desires, and materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
A couple of outcomes of the model are immediately apparent from this example. First, the model blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internal or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service focused organisational architectures with defined client groups and documented service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
A classic result of the client stakeholder focus is the client portal. In a local government these might take the form of a resident portal, where a city ratepayer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, an online store, peer forums, product reviews, a newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Roads constructions authorities are partners who provide accident minimisation services, and traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and the lowest-level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, executive team, performance review committee, inter-government reporting, risk, and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder networks model is recursive. It applies organisation-wide and through each sub-grouping down to the individual business unit level (in fact it can also work at the individual level, but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation rather than external to it.
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in-sourcing and out-sourcing, shared service models and joint ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied it results in improved and more cost-efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
=Introduction - What is the Stakeholder Community Network Model?=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2019 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
'''''Special Note:''''' We have received a number of enquiries as to whether there is more detail available on this and other topics. The answer is yes - a lot more. We are endeavouring to convert the balance of the Process Re-engineering Method into wiki format as quickly as possible and we promise to have the rest of the manual loaded to the wiki soon. The conversion effort requires changes to the text format, style and the detail provided, as the original text was written as a manual for staff and clients, and makes some assumptions about pre-existing knowledge. We are also updating the content at the same time.
'''''Author's Note:''''' The stakeholder community network concept was originally mapped out in the mid to late 1990s and reflected my own search for a paradigm for both online and virtual corporations. It effectively pre-dates the rise of cloud computing and social network sites as a component of business (for which it almost seems to have been designed) by some five to eight years. It did, however, benefit from the existence of the forerunners of these concepts. It was developed in the context of the observed behaviours of successful online ventures such as DELL and CISCO, the Victorian whole-of-government reform agenda, the tail end of the TQM experiment, the shift from paper to online work flow both intra- and inter-business, the rise of risk management, the progressive adoption of balanced score cards, the appearance of network trading organisations (groups of independent complementary businesses that traded together as a unit, cross-feeding work and niching away from each other through specialisation - they flourished briefly locally in the mid-1990s), and the rise of online portals, peer-managed corporate forums, application service providers, enterprise-scale ERP and CRM systems, web-based B2B systems, and the emergence of cataloguing standards. I have used it heavily over the years. Modified over time to accommodate learnings from organisations that survived economic, technological, social and political reversals, and fertilised throughout by proven tactical and management philosophies, the stakeholder community network model would now seem to have come of age.
</noinclude>
==What and Why==
===What is the Community Network Theory of Organisations?===
====Organisational Community Network Theory====
'''''Organisational Community Network Theory posits that an organisation is a network of one or more communities existing in a network of other communities. The network links communities along lines of exchange such as communication, dependence, and obligation. Communities are collections of autonomous agents and/or other communities that interact and share a sense of group identity, or share at least one purpose in common.'''''
Agents are essentially people, but the category could easily accommodate AI devices as these develop appropriate capabilities.
====Characteristics of a Community in Organisational Design====
Communities provide a natural, spontaneously-forming, self-organising, and evolving human organisational structure that forms because something is shared by the participants. Through the things the participants share in common, the community unit provides a framework for standardisation, streamlining, automating, and specialising in delivery of services and products to meet the shared purposes and operational needs of the individual community, and groups of communities.
Communities form initially because there are one or more needs in common among the participants (possibly only the need to identify and classify each other!). They are not inherently permanent structures; however, some communities, because of their survival through multiple generations or over multiple business cycles, are effectively permanent. Such a list might include cities, countries, religions, professional associations, sporting clubs, and some government agencies, for example. At the other end of the continuum are communities that form spontaneously and last little longer than the span of the first and only meeting. Examples might include emergency assemblies, concerts, demonstrations, staff inductions and rallies, etc.
Members of a community may be individuals or other communities. Communities contain eight non-exclusive classes of participant:
# Members - All participants are members, regardless of whether they are also members of the other classes.
# Beneficiaries - Information, goods and services consumers
# Suppliers - Information, goods and services providers
# Patrons - Funding providers who therefore also tend to direct
# Governors - Providers who administer, moderate, direct, control access, monitor, and tune.
# Custodians - Provide the infrastructure, durable assets, information warehouse, community tools.
# Partners - Provide compatible, complementary, non-competitive services or goods consumed by members in association with those of the community, but not as part of the community.
# Public - Comprised of potential participants, and participants who may also spontaneously form communities that compete with or otherwise influence the context of the community.
The more mature the community, the more clearly these roles are differentiated and actively operating. The longer a community is to remain stable, the more important it is that the duties implied in these roles be fulfilled.
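As a rough illustration only (the class and function names below are our own, not part of the model), the eight non-exclusive participant classes might be sketched in Python, with every participant automatically counted as a member:

```python
from enum import Enum, auto

class ParticipantClass(Enum):
    """The eight non-exclusive participant classes described above."""
    MEMBER = auto()
    BENEFICIARY = auto()
    SUPPLIER = auto()
    PATRON = auto()
    GOVERNOR = auto()
    CUSTODIAN = auto()
    PARTNER = auto()
    PUBLIC = auto()

def classify(roles):
    """All participants are members, regardless of their other classes."""
    return {ParticipantClass.MEMBER} | set(roles)

# A patron who also supplies goods is still, first of all, a member.
roles = classify({ParticipantClass.PATRON, ParticipantClass.SUPPLIER})
```

Because the classes are non-exclusive, a participant is always modelled as a set of roles rather than a single label.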
Members of a community:
*share in a communal identity,
*have a shared purpose with other members,
*need similar access to information, and
*draw from a common set of tools.
The community will interact with other communities both individually and as a group. The more cohesive and mature the community is, however, the more likely it is that it will interact as a community with other communities through nominated representatives.
The community is the fundamental building block of an organisation, but communities are structurally recursive and fluid. Communities themselves naturally subdivide into teams that service particular interests or needs of the community. These teams form their own communities, and together these internal communities form a network of interacting communities. The larger and more heterogeneous the parent community, the more noticeable, numerous, segregated, larger and autonomous these internal communities become.
These internal communities may also interact directly with external communities, and have external participants in otherwise internal communities. The more predominant the external participation is, the more likely the internal community is to transition through the parent community boundary to become an external community (with respect to the originating parent community). Similarly, the higher the proportion of participation from a single community in an external community, the more likely that external community will transition to an internal, contextually constrained community.
Each community is, therefore, comprised of a fluid network of communities contextually constrained by, and in some way supporting the activities of the parent community.
Community based organisational structures extend horizontally through unconstrained networks of interactions and vertically through community subdivision and absorption into constrained networks of specialised communities.
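The recursive structure described above, in which a member of a community may be either an individual agent or another community, can be sketched as a simple data model. The names here are hypothetical illustrations, not part of the model itself:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Person:
    """An individual agent."""
    name: str

@dataclass
class Community:
    """A community whose members are people or other communities."""
    name: str
    members: List[Union["Person", "Community"]] = field(default_factory=list)

    def all_people(self):
        """Flatten the recursive structure down to individual agents."""
        people = []
        for m in self.members:
            if isinstance(m, Community):
                people.extend(m.all_people())
            else:
                people.append(m)
        return people

# A parent community containing nested sub-communities.
org = Community("Organisation", [
    Community("IT", [Person("Ada")]),
    Community("Governance", [Person("Bob"), Community("Risk", [Person("Cai")])]),
])
```

The recursion terminates at individual agents, matching the observation later in the article that resources ultimately comprise people.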
====Making and Strengthening a Community====
The longer a community survives - the more mature it becomes - the more clearly defined the community identity, roles and rules become. For example, a group of people with a common interest in playing cricket meet by chance through visits to a local field - perhaps looking for a game being played. Over time they tend to arrive more regularly and predictably at around the same time in greater numbers. Some start bringing equipment and start a game, while others join in fielding or watching. As the predictability of the presence of other interested parties grows, participants start arriving in the expectation that others will also be present, while other participants bring supporting material - like refreshments, etc. Gradually, a community is forming with self-nominated and perhaps suggested or allocated roles.
Eventually the group might suggest a common name - the Sometimes Cricket Club - and others might attempt to organise more sophisticated or permanent resources, and eventually the funding needs of the group might dictate an expansion in its membership and the need to more formally manage finances on behalf of the group, etc. Rules might initially be common-sense and unspoken (like not stealing the bat and ball from the guy that supplied them), while others may be agreed through shared experience. Shared common interests and the need to improve the predictability of participants in games will encourage the group members to share contact details and channels of communication. The more individuals invest their time, energy and resources on behalf of the group, the more they will expect later-joining members to make a catch-up contribution for the existing investment - and the community may start placing barriers to entry in the form of membership criteria and fees.
As the group grows, handshake agreements may need to be formally agreed and recorded, individuals will be formally allocated roles, and leadership agreed. Along the way, as disagreements arise (like who should bat first), dispute resolution mechanisms will be required.
Thus a community has been formed and has gradually self-organised. If the initial casual group fails ever to define roles or find equipment supplier(s), it will be most unlikely ever to get to the stage of even the first game. If it fails to agree its meeting place and meeting times, it will probably not achieve the second game. If it fails to identify its membership, establish an identity (and therefore a brand) and provide all the other functions of a cricket club, it will be unlikely to last out a season.
To make an effective long term community we need to pay attention to the characteristics that form a community and ensure that these characteristics are serviced. From the simple example above we see that a community has:
*Members
*Shared resources
*Identity / Brand
*Communication
*Defined and shared purpose
*Location - a meeting place (which may be virtual)
*Roles
*Rules
*Governance structure
*Barriers to entry (note this might be as small as deciding to participate)
*Patron (implied or formal)
We grow and strengthen a community by addressing these characteristics directly. Ignoring any one of them will result in the failure of the community over time. For a community that assembles for a single purpose for only a short period of time - such as a demonstration or an entertainment event - this may not be a concern. If we wish the community to have any kind of longevity, we will need to consider how we enable the defining characteristics of the community.
It is with some surprise that we note that, when we look at the permanent communities within many organisations, several of these characteristics are only weakly addressed - if at all - rarely understood, and even more rarely considered. Herein lies the key to the internal structural failure of many organisations that have grown well beyond the oversight of their founders, splitting into many semi-autonomous communities.
====The Organisation as a Community====
Here we distinguish a physical organisation from the organisation of its operations and resources.
A physical organisation - such as a company, government agency, not-for-profit, or even a political party - is:
# a community containing a network of communities,
# a patron of both internal and external communities,
# a custodian of information and provider of infrastructure for communities, and
# a governor of community mandate, direction, performance, and culture, etc.
The physical organisation is, by definition, a community, but its boundaries may be so fuzzily defined that as a community it is little more than a container for a network of communities, whose primary allegiances are directed outside of the physical organisational boundary. Some communities in the organisation's network are planned and facilitated communities, while others are not planned but facilitated (such as professional associations, unions, standards bodies) and others are neither planned nor facilitated (but, perhaps, accommodated) (such as schools, sporting clubs, arts groups, social movements, etc.).
As a patron the physical organisation plays its primary role. Patronage is provided through a funded pool of resources that can be applied to communities as participants and enablers of community infrastructure, through direct funding of community operations, or through funding infrastructure provision, etc. Patronage is about funding, and every gift "in kind" of resources or equipment, etc. is an implied gift of funding as well. Patronage is accompanied by some ability to influence direction - if only from the implied threat of future funding cessation.
As a custodian, the physical organisation will also provide services to communities of storing knowledge, providing and maintaining technical and physical infrastructure used by communities, and management of liquid assets, etc. These are called custodian functions because they are about the preservation of assets, wealth, capability and capacity.
In its governance function the physical organisation imposes accountability for patronage, standards, policy compliance, legal compliance, strategic direction, performance measurement, financial control and resource utilisation, etc.
All organisations are simultaneously intersected by many special interest communities:
*The average workforce is riddled with communities, some intersecting the organisation, some not: union(s), professional bodies, schools (if staff have school-age children), political, sporting, social, OHS cases, divisional, project, etc.
*Industrial associations, standards committees, regulators, etc.
*The company is surrounded by public interest groups, political and semi political groups, consumer advocacy groups, and the public relations industries.
*Internally the organisation might have communities of buyers, marketing and sales, logistics, process & quality improvement, governance, safety, research and development, financial control, etc.
Communities do not respect the conventional boundaries of corporate or governmental agencies. Communities that interact with external stakeholders, for example, draw in members of the public and convert them into organisational stakeholders in the process, but not employees (at least in the conventional sense).
====The Advantages from using Communities to Model Organisations====
In some organisational theories, communities are represented as external and internal forces or drivers, but are not directly modelled into the organisational structure. The organisation is seen as a collection of consumer-provider relationships - whether those relationships are about transmitting instructions, funding, goods, services, resources, etc. The relationships are essentially hierarchical - even in matrix organisations - and feed back and feed forward control systems have to be imposed on the structures to make them work. Structural entropy gradually causes the structure to disassemble without constant maintenance on the organisation structure itself.
The community is an advance on the classic consumer-provider interactive model, because it:
*assumes most business relationships are multi-directional exchanges between the provider and the consumer and other providers and consumers extending over a period of time;
*recognises that all transactions between parties involve a series of micro exchanges going in both directions, not a single uni-directional exchange. For example, a purchase involves the consumer providing information (identity, location, preferences, competitor data, demand level, buying cycle, etc.) and possibly funding, a sales team matching the need to available offerings and defining and providing the promise, a legal team defining the obligations, a delivery team to deliver the good or service, a quality and support team providing quality management, logistics team providing transport, etc. All of these are participants of the same community involved in meeting client needs.
*delivers the benefits of the one-stop-shop process models, without the training cost, and inherent quality variability, by forming a community of specialists to collectively provide the single point solution.
*provides a model for structuring the online presence of an organisation.
*provides an organisational architecture that distributes the costs of providing and consuming goods and services across the community rather than exclusively concentrated in the larger party. For example, a buying community might assume some of the costs of sales by providing their details online directly into the client database, select from available product (by watching videos, reading information and product comparisons provided from central location), or submit special orders online, respond to questions from other clients in hosted forums, and advertise the organisation's products and quality in organised reviewer sites, or social networking sites.
*places the provider and consumer into the same "team" and positions them as jointly trying to meet a need. The community model facilitates all participants contributing jointly and sharing ownership of the outcome - rather than one party meeting the needs of the other.
Each community is a collection of participants (members) who share common operational characteristics, goals, interests and/or functional needs. The greater the extent to which the participants share characteristics, interests, needs and goals in common the greater the cohesion in and resilience of the community - in simple terms the community is active, "tight", involved, and the members share a sense of identity, belonging and, most importantly, ownership.
Communities are semi-autonomous, self-selecting, self directed, and inclusive. This does not mean communities are necessarily "open-access". In fact communities with higher barriers to entry often have the highest sense of cohesion because membership is something hard to attain and therefore something of value. Cohesion does not necessarily mean active, however, and lack of activity generally makes a community less interesting organisationally. Communities survive by exchanging things. The greater the volume of services, tangible goods or intangible goods (such as information), that flows through and around the community the stronger the community becomes. In the community model an organisation therefore benefits by fostering participation and particularly communication among all its members.
===What is the Stakeholder Community Network Model?===
'''''The stakeholder community network model is an organisational design and analysis paradigm that sees the organisation as a network of co-dependent stakeholder communities positioned in a larger network of interacting (but not necessarily co-dependent) communities. Within this paradigm, all of an organisation's services, functions and facilities exist to service the needs of the various stakeholder communities in the network.'''''
It should be noted from the outset, that co-dependent does not mean cooperative. As with domestic co-dependent relationships, the community network may include some positively destructive co-dependent community relationships.
The model defines an organisation as consisting of a network of operations that may extend beyond the boundaries of the organisation's body corporate. One such situation might arise in franchised operations or trading networks where an external entity provides critical services on which the corporate organisation depends.
The model works as an organisational design paradigm, a process design framework, an IT strategic design paradigm and a risk and performance analysis framework. It is directly suited to modern network, online, virtual, service operational models as well as bricks and mortar industries including utilities, government, general and project manufacturing, and education. It has not been tested in the resources sector or transport sector.
As an analysis tool, the identification and labelling of existing implicit and explicit communities, and of the physical and virtual flows between them, checked against current planning, score cards, policies, performance measurement systems, service agreements, compliance frameworks, risk models, and quality, control and feedback systems, highlights areas of dysfunction, duplication, redundant effort, counter-productive strategies, missed opportunities, and structural inefficiency and ineffectiveness.
As a design tool it results in the alignment of organisation-wide activities to identifiable purposes with targeted participants and measurable performance. It structurally accommodates many different and potentially divergent simultaneous strategies while painting a boundary and direction for such divergence. Such support in organisational design is essential for dealing in global, highly cyclic, or political markets where cultures, rules and geographic features may require the ability to operate as "her to him and him to her", and to retire and replace entire limbs rapidly.
As a customer, partner and supplier service process model it results in bound customers and suppliers and well-integrated partners, while distributing a significant portion of the organisation's costs to the participants.
As an IT systems framework it provides an efficient protocol for defining shared services, community portal service architectures, intra-cloud and cloud services, virtualisation clusters, etc.
==Definitions==
===The Organisation===
Organisations are networks of communities. These communities are comprised of members drawn from inside and outside the organisation's corporate legal identity, and may include communities of which the organisation has no effective control (in traditional terms).
Under the stakeholder community network model we view an organisation as a community comprised exclusively of interconnected sub-communities of people providing and consuming goods and services. Each sub-community forms multiple sub-sub-communities within it, and the community subdivision continues recursively until the costs of organising communities outweigh the benefits gained from the additional community.
Contrast this view of an organisation with that of other models that classify organisations in terms of bureaucratic, divisional, matrix, and similar structures. Under the stakeholder network view all of these structures can coexist in an organisation simultaneously as they are simply overlapping communities defined around structural paradigms. The stakeholder community network model does not replace such paradigms - it absorbs them.
In the stakeholder community view an organisation is a free-flowing evolving network of teams forming and disbanding as required, with some acquiring near-permanent status, while others enjoy but a single day in the sunshine. Community membership is not exclusive and it is normal for members of one community to also be members of other communities.
===The Community===
The model first defines a structural unit (the community) that possesses identifiable and comparable characteristics, such as focus, information need, functional need, etc. Secondly, the model looks to the mechanisms of facilitating stakeholder communities in a cost effective and consistently reliable and predictable way, utilising common services designed to enable and utilise the shared or distinguishing characteristics. So initially, at least, the model is community structure agnostic.
Communities form for multiple reasons, including:
*shared geographic proximity
*shared heritage
*shared communications technology
*shared language
*shared interests
*shared skills
The things we share are like gravitational attractors around which people cluster in self organising social units we are calling communities.
As communities grow beyond a few members they form sub-communities whose members service the parent community or concentrate in some specialised capacity in addition to their other roles as members of the community.
The communities in which we are most commonly interested (in the general organisational performance improvement context) are those forming around shared interests and skills. Within an organisation the geographic and language communities may be crucially important, and in some contexts would be directly accommodated, but the organisation will usually also need some form of communities formed around skills and interests (like, at the very least, consuming or providing something) in order to achieve its purpose.
Within each community formed around shared interests or skills is a further set of shared interests such as membership, meeting space, information, branding, commercial services, engagement, arbitration, and support. As these needs are common (with minor variations) across all communities, they are an attractive first target for shared service provision across all communities. In designing these shared services one should remember that a properly harnessed community can be self-managing, peer-supporting and self-selecting. Shared services provided to communities should be designed to encourage this ownership by the community membership.
A community model assumes a multi-way conversation within the community among the community members - not a massively parallel bilateral conversation between the community members and the organisation. The latter is a client-supplier relationship and, by excluding inter-member interaction, it embeds the costly push model of marketing, sales and service delivery. By encouraging intra-community conversation we harness the consumers in the community into one or more of the many supply roles in the community. In a customer/client oriented community, supply roles span from marketing assistance (reviews, discussions and forum participation) to support assistance in peer help spaces, and even product improvement and testing, such as in software beta programmes. On the supplier and partner side, community roles include online supply of certifications, supplier self-registration of details, self-selection of available contracts, online invoice entry directly by suppliers, and suppliers providing new product information feeds matching community-standardised classifications and measures, etc.
===The Stakeholder Community===
A stakeholder community is a collection of people, agencies, or units of an agency, that share three traits in common:
# They have an interest in the organisation being modeled or analysed (IE: they are stakeholders).
# As a group, they are co-dependent with other groups of the same organisation (IE: the groups cannot operate with complete autonomy as they depend on each other for their functioning and survival).
# They possess additional distinguishing dimensions of their interest in the organisation that allow them to be functionally separated from some members of the collection and similarly grouped with others (IE: they form an identifiable and functionally similar subgroup of stakeholders).
A stakeholder community of an organisation might be defined as geographically based, and representing all customers within a geographic area, or it might be an enterprise-wide collection of staff injured in forklift truck accidents, or a worldwide extranet of ECL policy advisers, or suppliers and corporate buyers for raw materials,... or any one of a long list of possible organisation specific or related groupings.
We call the members of a community "Resources". A resource may be a person or another collection of resources, such as an organisation, a unit of an organisation, or another community. In all cases where a collection of resources is a member of a community, that collection will participate through one or more "community representatives". So, in a sense, resources can be seen as ultimately comprising people (even though they may be members fulfilling constrained roles).
===The Stakeholder Community Network===
A stakeholder community network is a collection of stakeholder communities that form a network of loosely co-dependent communities.
The communities comprising the network preserve the rules of membership of a stakeholder community domain (as defined above). The links between member communities represent the co-dependencies. The dependencies are functional in nature and may be about information, goods or services - provision or supply, etc. They therefore represent the first layer of potential service level agreements in an organisation.
Technically speaking, the graph connecting all members of the stakeholder network is a digraph (directed graph) when the functional attribute of the network relationship is included in the inter-community link definition.
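The digraph view lends itself directly to implementation. The sketch below is illustrative only - the community names and functional labels ("supplies", "service delivery", etc.) are invented examples, not prescribed by the model - and shows inter-community dependencies stored as labelled directed edges:

```python
from collections import defaultdict

class StakeholderNetwork:
    """A stakeholder community network as a labelled digraph."""

    def __init__(self):
        # edges[source] -> list of (target, function) pairs; the functional
        # attribute on each link is what makes the graph directed.
        self.edges = defaultdict(list)

    def add_dependency(self, source, target, function):
        """Record that `source` depends on `target` for `function`."""
        self.edges[source].append((target, function))

    def dependencies_of(self, community):
        """The first layer of potential service level agreements for a community."""
        return self.edges[community]

net = StakeholderNetwork()
net.add_dependency("Clients", "Workforce", "service delivery")
net.add_dependency("Workforce", "Suppliers", "materials supply")
net.add_dependency("Governance", "Workforce", "performance reporting")

print(net.dependencies_of("Clients"))
# [('Workforce', 'service delivery')]
```

Each labelled edge corresponds to one candidate service level agreement, which is why enumerating the edges is a useful analytical by-product of building the network.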
===The Well-formed Stakeholder Network===
In the universe consisting of all possible stakeholder communities of an organisation, a complete network would include all communities in the network topology. Such a network is said to be "theoretically complete".
Theoretical completeness is not achievable in practice. We cannot know, and thus enumerate, every possible stakeholder community, as each resource, and every possible combination of two or more resources up to and including the entire membership of the organisation's stakeholder domain, is potentially a community.
Another way of viewing completeness is to test that all members of the stakeholder domain are also members of one or more of the communities in the network. Such a network is then complete in terms of an organisation's resource coverage.
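This resource-coverage test is mechanical, as the following sketch suggests (the domain and community names here are invented for illustration):

```python
def uncovered_resources(stakeholder_domain, communities):
    """Return resources belonging to no community.

    An empty result means the network is complete in terms of the
    organisation's resource coverage.
    """
    covered = set().union(*communities.values())
    return set(stakeholder_domain) - covered

# Hypothetical data: three resources, only two of which are covered.
domain = {"Alice", "Bob", "Carol"}
communities = {"Clients": {"Alice"}, "Workforce": {"Bob"}}
print(uncovered_resources(domain, communities))
# {'Carol'}
```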
It is worth noting that an organisation's stakeholder resource list may include both members of the public and entities that have no direct dealing with the organisation as well as staff, clients and suppliers (etc.) of an organisation.
===The Stakeholder Community Network Model===
The stakeholder community network model views an organisation in terms of stakeholder communities with shared needs, interests and/or purposes.
The model is a government and business meta-organisational model for organisational design, performance analysis and competitive strategy. It is founded on a theory of operational design that embraces networked co-dependent business structures (such as outsourcing, joint-ventures and social networking), while not mandating them. The step into communities, however, fundamentally changes the organisational focus from internal structure management to external service delivery. By rejecting all activity not designed to service an identifiable community it forces the entire enterprise to embrace a service culture at every level - everybody is a client of somebody else and in a stakeholder relationship (and usually responsible to someone, or responsible for something) with many other people.
The community structure inherently distributes some of the costs of marketing, sales and servicing, from the net providers to the net consumers within the community, but is effectively a premium willingly paid by community net consumers for greater influence over service form, more relevant and timely information, improved service speed, and risk perception confirmation (the role of public forums), etc.
Communities are essentially self determining and semi-autonomous so a community network modeled organisation naturally accommodates multiple value streams simultaneously. The ability for a community to recursively sub-divide into smaller overlapping specialised communities means the enclosing community structure can accommodate not only multiple value streams internally, but also multiple agendas. Thus financial performance can be enhanced, while quality improvement, social policy or research (and other long term strategies) are driven with equal priority. Further, new value streams can be added to the structure without compromising the integrity or culture of the existing structure.
The semi-autonomous nature of communities means that both competitive and non-competitive business architectures are compatible with the community network model.
We say it is a "meta-organisational model" because, while you might design your physical organisational structure around the model (particularly at the business unit level, or in the online context), it is more common to use it to redesign the roles, service agreements and strategies of existing organisational structures in an organisation. The meta-organisational model is one that floats through a physical organisation providing a new virtualisation of the organisation by re-engineering the service agreements, social networks and logistical networks in an organisation.
One way to think of this is that the impact of applying the community stakeholder thought process is to rearrange the plumbing, the lifts, the corridors and the internal doorways inside a heritage listed building. It is still the same building on the outside, but now you don't get lost inside it, and clients and customers start sharing your destination, not just what you do.
Sure you could tear down the building and replace it with a campus that modelled your stakeholder community structure exactly, but you do not need to do so to get the benefits, and in fact doing so might be counter productive to your market.
The model does tend to have certain organisational impacts - even as a thought exercise:
*The model encourages networked structures and specialisation of semi-autonomous co-dependent internal units.
*The communities share common servicing needs and efficiency dictates some form of shared service provision for these common needs. These structures imply additional cost, which in a zero-sum change process implies that resources will have to be transferred from somewhere else.
*The network model will tend to reach across multiple divisions of an organisation in defining communities.
In the normal entity (government or business) an individual or even business unit might participate in multiple stakeholder communities at once. So the communities are not necessarily defining an organisational structure as much as a set of interlocking co-dependence structures around which services can be consolidated and streamlined, duplication identified and removed, and context specific organisational purposes can be clearly articulated.
=Applying the Stakeholder Community Network in Practice=
==Step 1. Identifying and Defining Stakeholder Communities==
We must first decide whether we are looking for a directed outcome, such as "quality improvement", or an undirected (normal) outcome. This impacts the design of every community.
In a directed outcome model the directed outcome becomes a community in its own right that is automatically a participant in every other community. This allows the requirements of the directed outcome community to be captured and implemented in every other community structure.
In the undirected model no such imposed membership is mandated and the community architecture is left to optimise the framework with which it has been equipped.
In most situations we use the undirected model for analysis and the directed model in conceptual design (refactoring into an undirected model once the directed redesign has been finished).
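The directed variant can be sketched mechanically: the directed outcome is joined to every other community's membership. The outcome and community names below are hypothetical:

```python
def apply_directed_outcome(communities, outcome):
    """Return a copy of the community map with `outcome` joined to each one,
    mirroring the rule that the directed outcome community automatically
    participates in every other community."""
    return {name: members | {outcome} for name, members in communities.items()}

# Illustrative undirected starting point.
base = {"Clients": {"Alice"}, "Workforce": {"Bob"}}
directed = apply_directed_outcome(base, "Quality Improvement")
print(sorted(directed["Workforce"]))
# ['Bob', 'Quality Improvement']
```

Refactoring back to the undirected model, as described above, is then simply a matter of removing the imposed member once its requirements have been captured in each community's design.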
==Step 2. Identifying and Defining the Community Enablement Functions==
In the model, the central object of the organisation is to ensure communities are facilitated, serviced, and harnessed for the purposes of the organisation as best it can, or otherwise "actively managed". The model sees only communities - so every participant within and without the organisation must be able to be defined as falling into one or more stakeholder communities if the model is to be considered "well-formed" (read "complete").
Within the model, the aim of the enterprise is to facilitate communities (generally) and a defined set of communities specifically - which translates into:
*identifying stakeholder communities
*mapping new and existing stakeholder communities to the organisation's objectives, mandate and purpose as they change
*mapping inter-community workflows, testing for and identifying duplicated communities, duplicated flows, and under-resourcing, etc
*seeding communities as required
*funding stakeholder communities (eg seed capital, cross charging, external billing, etc)
*organising stakeholder communities
*branding stakeholder communities
*fostering community participation and outcome ownership
*providing, and possibly managing, the infrastructure for community self-organisation
*liaising/interfacing between stakeholder communities (eg. client community versus customer community)
*delivering the community's requested service or goods
*harnessing community ownership of the service/product improvement process
*trapping and archiving expert knowledge from both internal (to the organisation) and external community participants over time
Within an organisation adopting the stakeholder community network paradigm operationally, the stakeholder community network must be actively managed. This means it must be facilitated, moderated and funded. Resourcing is required to make it fast and efficient to implement and equip new communities and retire existing ones. Part of equipping a community is establishing its charter, budget, performance measures, governance, operating rules (constitution), core membership, decision model, meeting space, common (shared) tools, and the specialised applications or services needed.
This necessitates the creation of a new centralised or distributed role of community facilitator(s) and a central role of community registrar (manager). The former is about equipping and assisting new communities, identifying and seeding communities as required, and advising and improving existing communities. The latter is about containing, policing, funding, planning, judging and budgeting communities.
==Step 3. Considerations in Designing the Stakeholder Community Analytical Structure==
Once we have a standard definition of the community concept as it applies in our analysis and organisation, the next step is to define a framework of communities through which to analyse the organisation.
As each community shares facilities between its members, the fewer top level communities there are, the better the efficiency gains in the entire model will be. Unless, of course, there are too few and the resulting groupings are not homogeneous over sufficient characteristics, or the communities are badly chosen, with many shared characteristics between the groups rather than within the groups.
Secondly, the choice of communities can slant the servicing view internally or externally, or indeed could simply mirror existing organisation structures. None of these effects is likely to produce efficiency gains sufficient to justify the operational overhead of the stakeholder community support systems. The gain comes from achieving 100% coverage of participants, with communities comprised of both external and internal participants, with the minimum need for intra-community process or system customisation. By demanding the mixing of internal and external members, the model aims to eliminate duplication between external and internal systems and processes servicing the same need.
So, ultimately, the choice of top level stakeholder communities proves to be crucial to the outcome of the model - on all fronts.
In our experience, if the model is well designed the chosen top level community groups will tend to be highly co-dependent which automatically provides a structure and focus for service level agreements, and intra-community risk profiles will be highly consistent.
The choice of stakeholder communities used is prima-facie up to the organisation and the purpose of the analysis. While generalisation is possible at the highest level, as the view descends through the communities into their member sub-communities the groupings become quite specific to an organisation.
After many years of using and refining the concept we have settled on a standard top level stakeholder community model we call SCNM03. It has proven to work predictably in both government and commercial agencies, and in both physical (eg manufacturing) and virtual (eg software) organisations. Alternative models include the groupings under Porter's Theory of Competitive Advantage.
=Standard Stakeholder Community Network Model: SCNM03 in Practice=
==SCNM03: Bishop's Model Stakeholder Network==
<div class="mainfloatright" style="width:54%; max-width:70%; float:Right; overflow: none; padding-left:10px; padding-right:10px;" >
[[Image:BishopsStakeholderCommunityModel.png]]
</div>
In the standard BPC model (SCNM03) - also known as Bishop's Model Stakeholder Network - all organisational functions and services are seen as belonging to one or more of 8 top level stakeholder communities:
*clients,
*customers,
*suppliers,
*partners,
*custodians,
*workforce,
*governance, and
*the public.
Each community is comprised of a mixture of the community service providers, enablers and consumers of community services sharing a common focus. Community members share common functional interests and therefore may be serviced with shared services (whether human, networked, or electronic).
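Because a resource commonly belongs to several of the eight communities at once, a membership lookup is many-to-many. A minimal sketch (the membership data below is invented for illustration; the community list is from the model):

```python
# The eight top-level SCNM03 communities.
SCNM03_COMMUNITIES = [
    "Clients", "Customers", "Suppliers", "Partners",
    "Custodians", "Workforce", "Governance", "Public",
]

def communities_of(resource, membership):
    """All top-level communities a resource belongs to (often more than one)."""
    return [c for c in SCNM03_COMMUNITIES if resource in membership.get(c, set())]

# Hypothetical membership map: "Jo" sits in both Workforce and Governance.
membership = {
    "Customers": {"State Government"},
    "Clients": {"Citizen"},
    "Workforce": {"Jo"},
    "Governance": {"Board", "Jo"},
}
print(communities_of("Jo", membership))
# ['Workforce', 'Governance']
```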
The meanings of the communities are explained in detail below. Here we will draw out some specific features:
# The model conceptually splits "the customer" (she who pays) from "the client" (she who receives) - often they are the same, but when they are not (such as in many government agencies) it is a crucial difference.
# The model splits partners from suppliers, and moves contractors to the workforce.
# It simultaneously recognises several communities of governors in the governance community including finance, executive, internal audit, board, cabinet, regulators, parliament, external audit, shareholders, etc.
# The custodians community includes those communities entrusted with a multi-year wealth preservation/accumulation and service mandate, such as IT, Treasury, Asset Management, etc.
# The model embraces the public as a specific community to be managed, as they are the ultimate source of all the other communities' members and, specifically, the group that can influence the governance community to legislate your organisation out of existence. This community is the source of greatest opportunity while also being the least organisable and the most potentially threatening. A stakeholder network organisation notionally endeavours to move all members of the public community to one or more of the other communities with more manageable risk profiles.
==Risk and the Stakeholder Community Network Model==
Risk in the model tends to vary with the degree of influence the organisation (the meta-community) has in the specific community being examined, and this influence itself varies over time.
Consequently, in the longer time frames (ie. the strategic time frame) the Public and Governance communities are usually the highest inherent strategic risk communities in the model. The organisation tends to have the least influence over the sub-communities contained therein and may participate only as a guest (information receiver, price-taking customer, subject of legislation, etc.), or not at all. Public attitudes can swing against the activities of the organisation, and influence the legislators, who, in turn, can legislate the marketplace or the organisation out of existence. Consumer preferences can change as technology progresses, making the organisation's business model irrelevant. The stakeholder network model therefore naturally tends to encourage both lobbying and active public relations management (or the exact opposite: invisibility!), and participation in external communities for information gathering.
Where the time-frames being considered are shorter, ie. from an operational or tactical risk perspective, Workforce will rank as one of the highest risk spaces. If we think of Workforce as being comprised of smaller communities - say contractors and employees, and then each of these in turn being comprised of even smaller communities - say divisions, units and ultimately individuals - we see that the more we subdivide the group the closer we get to a community of one member - the individual. In the very short term humans thus represent a highly variable factor.
In the micro-community of one person, the only member of the community that exists inside the employee's head is him or herself. All the risk minimisation and behaviour modification controls naturally present in a larger community are dependent on that one member. In that community one person fulfills all the roles of the multi-member community. Strategies such as training and standard processes work over an extended time frame to reduce the probability of incidents and create predictability across the workforce as a group, but in the very short or immediate time-frame the individual is still entirely responsible for each action with little chance for other community members to intercede (because there aren't any!). In the instant, this micro-community can make an unsafe decision that impacts the well-being of the larger organisation (as well as themselves). Planning, thorough and extended training, careful member selection, 'idiot-proof' machine and user interface design will improve the predictability of the individual - but all these strategies take time to design, implement and achieve their effects. So, over the shortest unit of time - say, a second into the future - the individual can make a very bad decision with disastrous outcomes. This is a technical way of saying that people do dumb things that can be prevented with enough preparation and training - but only if enough time is available.
==Competition and the Stakeholder Community Network Model==
The SCNM03 model captures a deliberately divergent view of competitive strategy from that presented by many earlier authors. In this model, competitors are seen as potential suppliers, partners, clients, customers or workforce and strategies to bring them into one or more of those communities would be pursued.
Crucial to understanding the SCNM03 stakeholder model is that, purely applied, the model sees the entire universe in terms of these communities. It starts with the ideal vision built-in and therefore models a best fit to that scenario.
One obvious issue, then, is that there is clearly no community of "competitors". Under the pure SCNM03 stakeholder network model our aim is to make competitors members of one or more of the other communities. We are therefore encouraged both to define our service offering away from competition and to structure ourselves as complementary to another's offering or needs. The extent to which we are not able to achieve this influences the inherent risk that lies in the public community.
We do not lose the unresolved participants; instead they appear as sub-communities of the public community and are subject to a range of risk mitigation strategies.
==Stakeholder Communities and Sub-Communities in SCNM03==
Each of these 8 communities is comprised of smaller communities with more specialised shared needs. For example, the workforce is comprised of two specialised communities: contractors and staff (or other appropriate terminology). While many requirements of these groups are the same, there are sufficient differences in engagement, management, ancillary services, social interaction and disclosure levels between these groups to warrant separate community identities.
Conceptually the stakeholder network organisation is (almost) a franchiser of community management systems within a defined product/service space and in a given organisational cultural context. An organisation adopting this model will naturally look to standardise the managerial and technological profile of the communities it manages.
Applying the stakeholder network model in process design, performance analysis, compliance management or risk assessment often results in process structures and views that differ dramatically from the Divisional, Matrix, Hierarchical and Service models under which the organisation may operate. The community network model is agnostic when it comes to organisational structure (with the one exception being an organisation exactly mirroring the network model itself).
By way of example, an organisation that produces widgets, might traditionally see itself in terms of functions and processes concerning widgets. It has widget raw materials planning and acquisition, inventory management, widget production, widget distribution, widget order management and sales, etc. The same organisation in the stakeholder network model would see the world in terms of satisfying the needs of defined stakeholder groups first - not the things they were manufacturing.
In the SCNM03 stakeholder network model the natural home of the manufacturing functions is in the customer community, where they are firmly focused on the customer (note - not client) desires, and the materials acquisition function might be seen to contract the services of both the partner and supplier communities to satisfy material demand.
One outcome of the model is immediately apparent from this example: it blurs the distinction between internal sourcing and external sourcing.
From a computing perspective, the model automatically leads to service portal based architectures, systems consolidation, cloud structuring (whether internal or externally hosted), and highlights the places where inter-system integration and system standardisation are needed. From an operations perspective it leads to service focused organisational architectures with defined client groups and document service standard agreements.
==The SCNM03 Communities Explained==
An individual is often a member of multiple communities (eg Customers and Clients). Our standard stakeholder communities (which in 12 years have yet to be wrong) are:
{|
|-
|Clients
|style="padding-bottom: 10px; padding-top: 10px; border-bottom: 1px solid black;bottommargin:10px;"|Stakeholders who receive or deliver services. Clients are interested in rapidly finding information, requesting service, and reporting hazards / incidents / events / ideas.
Classic results of the client stakeholder focus are client portals. In a local government these might take the form of a resident portal, where a city ratepayer can find in one spot all the online systems for garbage collection, events, bylaws, parking permits, voting, pet registration, planning applications and objection lodgment, etc. In a direct-to-customer manufacturer the client might have access to a portal with product information, product enhancements, support, manuals, training, online store, peer forums, product reviews, newsletter/blog, and peer/expert hints and suggestions all in one spot.
|-
|Customers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Stakeholders who pay for services that clients receive. This separation is very common.
Customers want to pay for things in as convenient and consolidated a way as possible, and have mechanisms available for enquiring, revoking or monitoring services for which they pay. Companies that send multiple bills for the different services they provide are examples of firms that seriously need to look at their customers as a stakeholder group.
Governments provide the classic examples of customer and client separation: A State Government might pay for (or part-pay for) some services that are received by citizens of a city government. The state government is the customer, while the citizen is the client.
|-
|Suppliers
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Suppliers of services and materials to the organisation. Suppliers have common service interests such as finding tenders, quotes, interfacing supply catalogues to purchase order systems, checking on payment status, locating standard contracts, etc.
|-
|Partners
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Partners are providers of complementary services. A “meals on wheels” charity provider may function as a partner to a local government, delivering services complementary to those of the city government, but funded by non-City sources.
Partners are mainly interested in ensuring their services stay complementary and not competitive with the organisation. So information on strategies, management of joint projects, identification of opportunities, etc are of interest.
Road construction authorities are partners who provide accident minimisation services, traffic impact control services, etc. that complement those of the local or city government roads teams.
The relationship between insurance companies and the fire service is another example of a partnering structure. Insurance companies have an interest in facilitating the fire control services as they reduce their insured risks.
Franchised sales teams for a retailer, independent software manufacturers for a computer or games console manufacturer, and joint-ventures are all examples of partner community networks.
|-
|Workforce
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The workforce includes employees, contractors and consultants. HR systems, payroll, contract management, OHS, incident management, etc. are examples of services needed by this community.
|-
|Treasury/Custodians
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|Treasury & other custodians are always an internal community. Their members are charged with maintaining assets and lowest level enabling systems for the other communities.
IT/IS, Building Management, Maintenance and Treasury are always members of the custodians group. They protect assets and provide the infrastructure on which the community specific applications reside.
Email, communications, data storage, server management clearly fit under this group.
|-
|Governance
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The governance community, like the workforce community, includes multiple sub-communities, such as the executive, regulators, government bodies, risk management, compliance management, etc. These communities use services related to the provision of control and performance monitoring. Finance, council management, boards, executive team, performance review committee, inter-government reporting, risk, and compliance systems, and planning/budgeting systems are typically included here. Governance community members are both internal and external bodies with which the organisation has an accounting and reporting relationship.
|-
|The Public
|style="padding-bottom: 10px; padding-top: 10px;border-bottom: 1px solid black;"|The public includes everyone else. This is a very important community as it has the ultimate power to remove the entire organisation from existence, or cause government to legislate it out of existence.
It is also the group from which all the other stakeholders originally come. From a strategic perspective, the aim of every organisation should be to get every member of the public community to transition to one of the other stakeholder groups.
The public need to know about the services an organisation provides, its ethics, and social performance.
While most membership of this community is reasonably obvious, the presence of public relations teams, lobbying and marketing in this community may be less so.
An organisation is always a member of the public stakeholder communities of all other organisations.
|}
=Applying the Stakeholder Network Model=
The stakeholder networks model is recursive. It applies organisation-wide and through each sub-grouping down to the individual business unit level (in fact it can also work at the individual level – but not generally in an IS context). Just as the organisation has these broad stakeholder groups, each business unit has the same stakeholder breakdown, albeit with most stakeholders in the various communities being internal to the organisation – rather than external to it.
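The recursion can be sketched as a community type that contains both direct members and sub-communities with the same structure, down to communities of one person (names below are illustrative):

```python
class Community:
    """A community of direct members and recursively nested sub-communities."""

    def __init__(self, name, members=None, sub_communities=None):
        self.name = name
        self.members = members or []                  # direct resources (e.g. people)
        self.sub_communities = sub_communities or []  # same structure, one level down

    def all_members(self):
        """Flatten the recursive structure into the full membership."""
        found = list(self.members)
        for sub in self.sub_communities:
            found.extend(sub.all_members())
        return found

# Hypothetical example: Workforce subdivides into Staff and Contractors.
workforce = Community("Workforce", sub_communities=[
    Community("Staff", members=["Ann", "Ben"]),
    Community("Contractors", members=["Cas"]),
])
print(workforce.all_members())
# ['Ann', 'Ben', 'Cas']
```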
The stakeholder community network has clear relationships between the elements - particularly as realised in SCNM03 - and provides a model under which social networking and portal systems naturally fit. The model leads naturally to network organisations (those using mixed in- and out-sourcing, shared service models and joint-ventures as their standard business model).
The stakeholder community model has a number of applications:
#As an IT system design paradigm and idea promoter.
#As a full organisational modelling paradigm. In this form it results in dramatically different organisation models from those in general usage and is thus often too radical for executive comfort.
#As an analytic “best practice” benchmark it is outstanding, and even when only partly applied results in improved and more cost efficient process design.
#In designing an online and web service business presence. With a little thought it should be apparent how effective the stakeholder model is in designing an online presence and structuring mutual obligation social networks.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
Business Process Reengineering - Process Charting
2019-09-12T02:42:18Z
Bishopj
=Introduction - Business Process Charting=
<noinclude>
==About The Author & The Article==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2019 - Moral Rights Retained.
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
</noinclude>
==Charting the Business Process - A Unified and Holistic Approach==
===Why Chart?===
There are many reasons we may wish to chart a business and its businesses processes including mapping of data flows, documenting process steps, designing automated and hybrid systems, defining intra and inter-organisational relationships, defining or analysing service agreements, etc.
===What is a (Business) Process Chart?===
A process chart is a diagrammatic representation of a set of processes that models the enveloping organisation as if it were a machine with a functional domain encompassing the diagrammed processes.
From a computational perspective, a business process chart is a diagrammatic program describing human, machine, natural, organisational, functional and non-functional systems using digraphs.
===What are the Characteristics of a Good Process Charting Method?===
====Objectives====
This author proposes that the objectives of a good process charting system should be to:
* improve the understanding and clarity of the data represented in the chart,
* enable domain specific analysis (such as efficiency, economy, effectiveness, reliability, etc),
* enable viewing of the processes at multiple levels of detail simultaneously,
* chart the target analysis domain completely,
* seamlessly represent both automated and non-automated processes in the same chart,
* enable the automated modelling of the system directly from the chart (which implies the charting "meta-language" should have a consistent "syntax" and semantics - similar to an "ideal" computer language),
* represent processes across diverse operations, industries, products and services without context specific modification of the syntax or semantics,
* produce charts from unfamiliar industries (etc) that are understandable to a moderately experienced chart reader, with no prior background in the subject charted, and
* enable the construction of "proofs" of the processes.
In this author's view these objectives are assisted when the charting system assumes the properties and conventions of a well-designed computer programming language - albeit a visual one. These properties include grammatical (semantic and syntactic) consistency, structured functional encapsulation, object reuse and polymorphism, conceptual inheritance, simplicity and functional expansion.
====Consistent Identifiable Grammar====
The grammar of a process charting method defines the symbols, their meaning, and the rules for "legal" combinations of these symbols and meaning of such combinations.
In computational languages the atomic element in a programming language's grammar is called a token. In a text based computational language these tokens are strings of one or more characters, some of which are defined in the language with a special meaning. The tokens comprise the syntactic elements of the grammar. The grammar itself defines a consistent semantic interpretation of the syntactic elements when combined in pre-defined combinations.
In a process chart the atomic element is a symbol that maps to a real world object such as an organisation, a person, a data element, a process (or function), a data store, etc. These symbols comprise the syntactic elements of the charting method's grammar, and the charting rules document a grammar which delivers a consistent semantic interpretation of the syntactic elements when combined in the pre-defined combinations.
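The token analogy above can be made concrete. The following Python sketch is purely illustrative: the symbol kinds and the table of legal connections are invented for this example and are not the BPC charting method's actual symbol set or grammar. Symbol instances are the syntactic atoms, and the grammar is simply the set of pre-defined combinations the method permits.

```python
from dataclasses import dataclass

# Toy symbol vocabulary and connection rules, invented for illustration.
# They are NOT the BPC charting method's actual symbol set or grammar.
LEGAL_CONNECTIONS = {
    ("organisation", "process"),
    ("process", "process"),
    ("process", "data_store"),
    ("data_store", "process"),
}

@dataclass(frozen=True)
class Node:
    """A chart symbol instance: the syntactic atom of the toy grammar."""
    name: str
    kind: str

def connection_is_legal(src: Node, dst: Node) -> bool:
    """A directed combination of symbols is 'legal' iff the grammar lists it."""
    return (src.kind, dst.kind) in LEGAL_CONNECTIONS

approve = Node("Approve loan", "process")
ledger = Node("Loan ledger", "data_store")
print(connection_is_legal(approve, ledger))  # True: process -> data store is in the grammar
print(connection_is_legal(ledger, ledger))   # False: store -> store is not
```

Because the rules live in one table, a "consistent semantic interpretation" falls out automatically: any chart built only from legal connections can be checked (and in principle modelled) mechanically, which is the property the objectives above ask for.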
====Completeness====
A well designed charting system is internally consistent in atomic structure and behaviours, while mapping completely (in a mathematical sense) to the real world scenario being modelled.
To be conceptually useful, "completeness" should be provable - at least theoretically. This implies that an algebraic representation (e.g. predicate calculus) of the charted process should be derivable from the charting language. Having said that, it should be noted that few computing languages have such a mathematical validity test available (SQL being one notable exception).
====Minimal Syntactic Complexity====
Completeness in process modelling is a complex topic, and one fraught with some potentially counter-productive implied solutions.
For example, a charting system with a unique symbol for every process might achieve completeness, but it would do so at the expense of very high grammatical complexity.
The strength of a process charting approach lies specifically in its ability to categorise, simplify and standardise our view of a social system. If one measure of language complexity lies in the number of rules in a grammar, then the greater the range of predefined (or reserved) symbols in the language, the greater the number of rules that will be required to define their use.
Complexity, under such a measure, is minimised when the number of unique predefined "terms" is minimised. The more restricted the symbol set, however, the more symbols must be used to represent simple, frequently recurring processes.
===The BPC Business Process Charting Method===
The core symbols of the process charting language are defined in the BPR overview. This author postulates that all human-machine processes can be documented with this minimum set of symbols. The simplicity of its symbol set (and therefore its grammar) can, however, lead to diagrammatic complexity.
Certain objects and their processes recur so frequently that diagrammatic complexity is reduced significantly by expanding the core set of symbols as shown in [[Business Process Reengineering - Chart Key]].
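The trade-off between grammar size and chart size can be put in rough numeric terms. All figures below are invented purely for illustration: adding a few shorthand symbols for a recurring pattern shrinks the chart dramatically, at the cost of a larger reserved symbol set (and hence more grammar rules to learn).

```python
# Hypothetical figures illustrating the grammar-size vs chart-size trade-off.
core_symbol_count = 6          # size of a minimal symbol set (toy figure)
shorthand_symbols = 4          # extra reserved symbols added as shorthand

symbols_per_pattern_core = 5   # drawing one recurring pattern from core symbols
symbols_per_pattern_short = 1  # the same pattern drawn as one shorthand symbol
pattern_occurrences = 20       # times the pattern appears in a single chart

chart_size_core = pattern_occurrences * symbols_per_pattern_core
chart_size_expanded = pattern_occurrences * symbols_per_pattern_short

# Smaller grammar, bigger chart:  6 symbols in the language, 100 on the page.
print(core_symbol_count, chart_size_core)                          # 6 100
# Bigger grammar, smaller chart: 10 symbols in the language, 20 on the page.
print(core_symbol_count + shorthand_symbols, chart_size_expanded)  # 10 20
```

The design choice the text describes is exactly this balance: expand the symbol set only where a pattern recurs often enough that the saving on the page outweighs the extra grammar.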
==Charting Example - Electronic Grants Management System==
The process charting method demonstrated on the following pages was designed by this author and improved with input from clients and staff of BPC over 24 years. The example charts show the BPC Process Reengineering Model and the BPC Stakeholder Community Model in action in a real-world situation: a fully functional process for whole-of-government administration of grants to the public.
*[[Business Process Reengineering - Chart Key]]
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Business Process Reengineering]]
[[Category:Stakeholder Community Network Model]]
{{BackLinks}}
</noinclude>
e592d4147f89c8b1f9e5c564ddfea5c8e4784d2c
RiskWiki:Community Portal
0
501
699
2019-09-13T14:13:12Z
Bishopj
1
Created page with "<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left"> <tr> <td> <div class="center"> Image:BPCF8aBlackLR.j..."
wikitext
text/x-wiki
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center">
[[Image:BPCF8aBlackLR.jpg]]
</div>
</td>
</tr>
</table>
=Project Pages=
Pages in this section are restricted to registered users of the RiskWiki. In most cases this will be because they are conceptual or in draft format.
* [[Draft RiskManager Documentation]]
* [[Draft ERM Pages]]
=BackLinks=
*[[Main Page]]
371ddde772078281363f3b17dd14bb5f12dfe7cb
Report Writing
0
294
700
392
2019-09-13T14:27:25Z
Bishopj
1
wikitext
text/x-wiki
==About The Author==
[[Jonathan Bishop]], Group Chairman, Bishop Phillips Consulting. [http://www.bishopphillips.com/]
Copyright 1995-2019 - Moral Rights Retained
This article may be copied and reprinted in whole or in part, provided that the original author and Bishop Phillips Consulting is credited and this copyright notice is included and visible, and that a reference to this web site (http://RiskWiki.bishopphillips.com/) is included.
This article is provided to the community as a service by Bishop Phillips Consulting [http://www.bishopphillips.com/ www.bishopphillips.com].
==About This Document==
This paper complements the Internal Audit and Management Consulting guides and discussions throughout the RiskWiki. It presents a brief guide to issues of style and presentation in writing up findings generally, and with very few exceptions applies universally to consultant and management reports (as well as to Internal Audit reports).
Texts used as the basis for some of the views presented in this document and worthy of further exploration include:
* The Penguin Working Words (Penguin 1993)
* Fowler's Modern English Usage 2nd Edition (Oxford University Press 1965)
* Oxford Dictionary (Oxford University Press)
* Style Manual 4th Edition (Australian Government Press Service 1988)
* Practical English Usage - Michael Swan (Oxford University Press 1980)
* The Cambridge Encyclopedia of Language - David Crystal (Cambridge University Press 1987)
* Deloitte Internal Audit Method, Volume 6 - Report Writing - J Bishop & J Crawford (DTT 1992-3)
* Stanton Consulting Partners Style Manual (J Bishop 1995)
* NAB IA Reporting Style Guide ( J Bishop -1999- & an Unknown NAB Staff Member)
* Bishop Phillips Consulting Style Manual (J Bishop 2000)
==Writing Style==
===Introduction===
<table border="1" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="left" style="background-color:#FFFF99" >
====Bishop's Writing Rules:====
# Rule: The Passive puts people to sleep.
# Rule: Ending a sentence with a preposition is a situation up with which I will not put.
# Rule: Objects like subjects
# Rule: One idea to a paragraph
# Rule: Get to the bottom line first
# Rule: Just do it - say what you mean.
# Rule: Readers don’t read
# Rule: Three sentences are company, four is a crowd
# Rule: Conjunctions can't commence (a sentence)
# Rule: Conjunction collections confuse
# Rule: Personalise people not things
# Rule: Negativity negates.
# Rule: DON'T SHOUT
# Rule: Don't plan to make a plan.
# Rule: Consistency is king
# Rule: Death is in the details.
# Rule: Pronouns need a noun
# Rule: Don't split the infinitive
# Rule: Unintroduced acronyms are antisocial
# Rule: Generalities are generally imprecise
# Rule: Let the facts carry the case.
</div>
</td>
</tr>
</table>
In written expression, a few simple rules can make the difference between clarity and confusion. Applying the rules in this section will help us both record our ideas efficiently and convey our meaning clearly.
The rules are a mix of style and traditional grammar identified over many years of reviewing and writing audit reports. We will need a rudimentary understanding of grammar to apply a number of these rules effectively.
Syntax assists semantics. Grammar defines the syntax of the language. Good syntax describes the structures a sentence can follow and still be considered well formed.
Semantics is the meaning of a sentence. Syntax assists semantics by managing the flow of ideas, and distinguishing ambiguities.
Consider for a moment the classic poets' joke:
"What is this thing called love?" - The plaintive cry of a tortured heart.
"What is this thing called, love?" - The question of a curious friend on sighting a never before seen object.
One stray comma makes all the difference to the meaning of the question. In speech we use tone, rhythm, intonation and body language to convey meaning. In written expression we rely on syntax - the rules of grammar.
We cannot solve all problems of ambiguity in language with punctuation, but with a better understanding of grammar we can avoid the ambiguity in the first place. Take, for example, the sentence: "Flying saucers can be thrilling". This sentence can seemingly have a number of meanings:
# The act of flying a saucer can thrill the pilot.
# Seeing a saucer in flight can thrill the observer.
# The idea of a saucer that flies thrills.
We will see, however, that even in this situation, the judicious application of some simple rules when forming the sentence can result in clarity:
"Flying a saucer can thrill the pilot."
What has changed? We have moved from the general ("flying saucers") to the specific ("flying a saucer") (rule 20). We have also introduced a subject (the pilot) to the sentence where only the object and verb existed (rule 3), and applied plurals consistently (rule 15). Lastly, applying rule 1 eliminates the problem entirely:
"A pilot can be thrilled when flying a saucer."
To understand how to do this, we need a little grammar.
Since we cannot avoid grammar if we wish to understand how best to convey our meaning, our discussion will be facilitated by first establishing the definitions of a few grammatical terms. This we do in the next sub-section. Armed with a few parts of speech, we will then explore the 21 rules over the subsections thereafter.
==A Grammar Crash Course==
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="right">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteCavemen.png]]
</div>
</td>
</tr>
</table>
===Subject, Verb and Object===
"When nine hundred years you reach, look as good, you will not. Strong with the Force you are…."
Remember Yoda? Among the little, wrinkly, green "Star Wars" character's more distinctive features was "Yoda Speak". To a linguist, Yoda represents an imaginary member of a very rare and select group: races with languages that use an "Object - Subject - Verb" structure.
The understanding of the difference between each of these components is the first step in mastering sentence structure.
The order of subject (S) - verb (V) - object (O) (SVO) is the classic "natural" English sentence:
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td> to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td></td><td align="center" > Object </td></tr>
</table>
Things work quite well if we think of a sentence as revolving around a verb. The subject of a verb is the noun (or noun substitute) that directs the action of the verb. The object of a verb is a noun (or noun substitute) that receives the action, is affected by the action, or about which the action is concerned. In the majority of instances a noun substitute is a pronoun.
In the example "management" directs the action and is therefore the subject, while "credit policies" are the things being "adhered to" and therefore the object. As a rough rule of thumb, if the noun phrase starts with a preposition it is a fair bet that the noun concerned is the object. In the example sentence, "to" is the preposition.
===Prepositions===
A preposition relates a word or phrase to another part of the sentence.
<table border=1 align="center" >
<tr><td >"Management </td><td> is adhering </td><td align="center" > to </td><td> credit policies."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Words that are prepositions include: to, in, into, on, upon, over, before, after, of, with.
In the example the word "to" joins (or more accurately relates) the noun phrase "credit policies" to the rest of the sentence - "Management is adhering".
A note of caution - a word that is a preposition in one case can be a conjunction in another:
* The auditor arrived before [preposition] the meeting.
* The auditor arrived before [conjunction] the meeting began.
===Conjunctions===
Conjunctions are words that join two sentences or nouns - not in a relating role, as with a preposition, but either as equals or in a superior-subordinate relationship. Examples of the former include: and, but, or, nor, whereas, however. Examples of the latter include: because, when, where, if, although.
==Active and Passive Voices==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: The Passive puts people to sleep.'''
</td></tr>
</table>
Recall the earlier discussion about subjects and objects of a sentence. We observed that the "natural" order in English is Subject - Verb - Object (SVO). This is the active voice:
<table border=1 align="center" >
<tr><td >"This firm</td><td> will no longer pay </td><td align="center" > for </td><td> Overtime."</td></tr>
<tr><td align="center" >Subject </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Object </td></tr>
</table>
Now we will switch the subject and the object and contrast this with the same sentence expressed in the passive voice:
<table border=1 align="center" >
<tr><td >"Overtime payments</td><td> will no longer be made </td><td align="center" > by </td><td> this firm."</td></tr>
<tr><td align="center" >Object </td><td align="center" > Verb phrase </td><td align="center" >Preposition</td><td align="center" > Subject </td></tr>
</table>
The passive voice essentially reverses the natural order from SVO to OVS.
There is nothing grammatically wrong with either construct, but even a few lines expressed in the passive voice will bore our readers to tears. This effect arises because the passive voice places the reader at a distance from the action by making the object of the sentence the primary focus rather than the subject. Consequently, things appear to come before people.
Consider the following passage (passive voice).
"Significantly more overtime than the firm average has been incurred by roboteller maintenance staff of the Antarctic Division. A number of anomalies in the time sheets including bank branches that have been closed for many years having work recorded for them by individual staff have been revealed by a detailed analysis of the time sheets. Overtime payments will no longer be made by the Antarctic Division as a consequence."
Versus the following version (active voice)
"Roboteller maintenance staff in the Antarctic Division have incurred significantly more overtime than the firm average. An analysis of the time sheets for individual staff shows a number of anomalies, including work conducted for bank branches that have been closed for a number of years. Consequently, the Antarctic Division will no longer pay for overtime."
Which one did you have to read twice? The passive voice is difficult for the reader taken even one paragraph at a time. Try reading it for an entire report and you will be angry, frustrated and tense (assuming you are still awake by the end of it).
The active voice involves the reader; it flows better than the passive; it encourages the writer to go straight to the point rather than inserting "filler words" whose sole purpose is to make the sentence hang together; and it reduces the chance of repetition (as apparent in the passage above). The passive voice is not only difficult to read, but also far more difficult (and therefore slower) to write.
In the passive voice we express the idea of the sentence before we provide the context (subject). The direct result is that our thought pattern is reversed and our ideas do not seem to flow properly. We end up adding extra words, leaving sentences hanging in mid-air (such as when we finish with a preposition) and, most importantly, failing to convince our audience of our point because they have to try too hard to understand it.
A sentence is a "word painting" of an idea. Well formed it is a thing of beauty and, like a great painting, a joy to behold.
==Positioning of Prepositions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Ending a sentence with a preposition is a situation up with which I will not put.'''<br>
* '''Rule: Objects like Subjects.'''
</td></tr>
</table>
One of the most common errors in everyday speech is to place the preposition at the end of a sentence. Prepositions, by definition, connect and introduce a noun phrase in a sentence. After the use of the active voice, I consider this almost the single most important trick to forming logical, easily understood sentences quickly.
Given that it has become almost standard usage to let prepositions drift to the end of a sentence, why is it such a gross error?
You will recall that we defined a preposition as a word that joins and relates a noun phrase to the rest of the sentence. It literally "leads" a phrase. Without the preposition connecting the two ideas, the sentence appears stilted (or, as in the following example, actually seems to mean something completely different):
"Management is adhering credit policies."
Consider a few examples:
<table border=1 align="center" >
<tr><th >Bad Form</th ><th >Good Form</th ></tr>
<tr><td>Where have the auditors come from?</td><td>From where have the auditors come?</td></tr>
<tr><td>Peace is worth striving for.</td><td>It is worth striving for peace.</td></tr>
<tr><td>Firm credit policies must be complied with.</td><td>Management must comply with firm credit policies.</td></tr>
</table>
The first two on the left-hand side are merely untidy, but the third highlights the problem with prepositions shifting to the end of a sentence. The version on the left-hand side leaves the sentence "hanging" and, most importantly, leaves out the subject. The lack of a subject means that it is unclear who should perform the action (i.e. objects like subjects).
If we use the active voice, and lead the sentence with the subject, we will be far less likely to end up with the versions on the left hand side. Since a preposition generally connects the object to the subject, it is the habit of placing the object at the start of the sentence (i.e. the passive voice) that leads to sentences with the preposition at the end.
The second example on the right-hand side is still unsatisfactory, because it does not identify who is responsible for the action, and consequently is a generalisation - which is too easy to fault. For whom is it better to strive for peace? An arms manufacturer may see things a little differently! A better rewrite would have been: "We will benefit both materially and socially if we strive for peace."
It is easy to put prepositions in the right place if we remember to use the words "which" and "whom":
This is the day for which we have been waiting. (Not: This is the day we have been waiting for.)
These are the results of which we heard. (Not: These are the results we heard of.)
The rule (attributed to Winston Churchill) "Ending a sentence with a preposition is a situation up with which I will not put" (instead of "Ending a sentence with a preposition is a situation I will not put up with.") illustrates how to arrange the words to achieve the desired outcome. It also tends to stick in one's mind and so is easily remembered.
==The Formula For A Paragraph==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: One idea to a paragraph'''
* '''Rule: Get to the bottom line first'''
* '''Rule: Three sentences are company, four is a crowd'''
* '''Rule: Just Do It - saying what we mean.'''
* '''Rule: Readers Don't Read'''
</td></tr>
</table>
The purpose of dividing a body of writing into paragraphs is to help the reader absorb the points being made, and the writer to formulate them. These five rules are each about how to put together a paragraph that works.
A couple of simple formulae describing the sequence of sentences in a paragraph can show us what to do:
# Main Point + Counter Point + Conclusion.
# Main Point + Expansion + [Expansion].
In each case we are saying a paragraph should consist of two to three sentences. Using more or fewer sentences in a paragraph is permissible, but to be discouraged unless it is absolutely essential for the purpose of the point. This is particularly true when we are planning to use more than three sentences (i.e. three sentences are company, four is a crowd).
A paragraph end forms a natural break in the flow of thought. By implication, we are asking the reader to absorb the entire paragraph as a single concept before they evaluate it in their minds. The longer the paragraph, the longer the reader must store the ideas before evaluation.
We risk losing the reader's attention and comprehension if we ask him or her to store the ideas temporarily for too long, or to store too many ideas at once. Short, punchy paragraphs built around a single central idea help minimise waffle and assist the reader to rapidly absorb our message (i.e. one idea to a paragraph).
<table border=0 align="left" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…short, punchy paragraphs built around a single idea…'''''</font >
</p >
</td></tr>
</table>
It is a courtesy to the reader to minimise the work they need to do in reading our work. Opening the paragraph with the main point allows the reader to skip the rest of the sentences in the paragraph if they agree with the point. In each of the two formulae we open with the main point (i.e. we get to the bottom line first).
The difference between the forms is that in the first formula we offer a counter point in the second sentence, which is then offset by the conclusion. In this case the conclusion should be consistent with the main point (rather than the second or counter point).
In the second formula we are presenting the main point supported by one or two additional arguments. Should we need six or seven sentences to support the point, these should be presented as a dot-point list, or subdivided into two or three logical groups and split across two or three paragraphs.
<table border=0 align="right" width="200px" style="background-color:#FFFF99" >
<tr><td align="center">
<p >
<font color="#990000" size="+1" >'''''…the most convincing expression of an idea is usually the simplest…'''''</font >
</p >
</td></tr>
</table>
The essence of these ideas is that the most convincing expression of an idea is usually the simplest. Winning a point through confusion is, at best, a Pyrrhic victory. If the issue is important, the reader will dwell on it, and form their own opinion. If they didn't understand your arguments, you will have no effective input into the formation of their position on the matter, other than to raise it in the first place.
<table border="0" style="background-color:transparent;margin-right:0.9em" cellpadding="2" cellspacing="10" align="left">
<tr>
<td>
<div class="center" >
[[Image:IARepWriteSectionStructure.png]]
</div>
</td>
</tr>
</table>
The essence of newspaper journalism is that most readers will not read most of the articles in a paper or magazine completely. Consequently, from the headline down to the end of the article the item is arranged as a series of progressively more detailed "summaries" of the information. There are usually three to four layers.
The first layer is the headline, which attempts to summarise the entire issue in a few words. The second layer is the first paragraph which presents a twenty to thirty word summary of the issue. The third layer is the second, third and perhaps fourth paragraphs, which provide the full story and the fourth layer provides incidental minor details.
The purpose of the structure is to allow the readers to exit at several points when they have collected sufficient information for their interest level. The approach recognises that none of us has time to read every piece of information presented to us, and when we do we tend to skim the information for issues that are relevant to us. (ie. readers don't read)
We should design our reports so that the reader does not have to read all the way to the end to "get" the issue. We can imagine this pattern as a pyramid, with the highest level summary at the top, and progressively more detail to the bottom.
==Using Conjunctions==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Conjunctions can't commence (a sentence)'''
* '''Rule: Conjunction collections confuse'''
</td></tr>
</table>
<table border=0 align="right" width="400px" style="background-color:#FFFF99;margin-left:0.9em" cellpadding="2" cellspacing="10" >
<tr><td align="left">
===The Importance of Correct Punctuation===
'''''The following two passages were written by Rowland Croucher. They illustrate neatly the importance of punctuation in written expression. Only the punctuation changes between the passages….'''''
<em>Dear Thomas,
I want a man who knows what love is all about. You are generous, kind, and thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy--will you let me be yours?
Maria
----
Dear Thomas,
I want a man who knows what love is. All about you are generous, kind and thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men, I yearn; for you, I have no feelings whatsoever. When we're apart, I can be forever happy. Will you let me be?
Yours,
Maria</em>
</td></tr>
</table>
Conjunctions are important time savers and can help the flow of ideas if used correctly, but they should not be used more than once in a sentence unless splitting the sentence would detract from its meaning.
One example where two conjunctions may appear in a sentence is where the sentence contains both a list and two joined or related ideas:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan and verifying that the borrower's credit history is of sufficient standing."''
In this case the passage would be harder to follow (and perhaps even misleading) if we wrote it as:
''"The credit approval process involves assessing the amount, viability, merit and purpose of the loan. The credit approval process should also verify that the borrower's credit history is of sufficient standing."''
By splitting the sentence we seem to imply that the credit history is of secondary importance to the information collected about the purpose of the loan.
These situations are generally pretty clear when they arise, but they are rare. A sentence with too many conjunctions suffers from the same problems as a paragraph with too many sentences; we have lost the reader before the end.
Some years ago Professor Manning Clark gave a Boyer Lecture concerning the use of English in academic papers. One of his particular annoyances was the use of conjunctions to commence a sentence. His point was simple: a conjunction joins two sentences, and if it starts the sentence it is prima facie not joining two sentences together.
While we all recognise words like "and", "or" and "but" as conjunctions, words such as "however" and "because" are more often missed. Consider the following passage:
''"Because they operate unattended, Roboteller machines are prime targets for fraud. However, if we attach cameras to them they become leading tools in the capture of the perpetrators."''
This can be rewritten to eliminate the problem:
''"Roboteller machines are prime targets for fraud because they operate unattended. If we attach cameras to them, however, the machines become leading tools in the capture of the perpetrators."''
In rewriting the passage we also (once again) moved the subject to the start of the sentences. The "however" is redundant, and the passage can be further simplified by writing it thus:
''"Roboteller machines are prime targets for fraud because they operate unattended. The machines become leading tools in the capture of the defrauders if we attach cameras to them."''
This passage demonstrates the appropriate use of "however":
''"Overall corporate / strategic planning is adequately addressed within Premium and Private, however, management attention is required concerning:…"''
==A Few Points of Style==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Personalise people not things'''
* '''Rule: Don't plan to make a plan.'''
* '''Rule: Negativity negates.'''
* '''Rule: DON'T SHOUT'''
</td></tr>
</table>
The four rules of this subsection cover common, but minor, problems of style.
A common written mistake is for a human trait such as "need" or "requiring" to be attributed to an inanimate "thing" such that it takes on the air of an inviolate law. The practice leads to broad statements without justification and hence incomplete argument of a case. Consider:
''"The credit approvals process needs to be reviewed."''
The credit approval process cannot need anything; only living creatures can experience need. It may be appropriate for the process to be updated, and management or the auditors may need this to occur, but the process cannot spontaneously need such improvement of itself.
Once again we find, as with so many English language errors, that the problem has arisen because of a subject/object mix-up. In the example the credit approval process, which should have been the object, has been transformed into the subject. When we rewrite the sentence the way it should have been, we find that we are missing a significant part of the message that should have been conveyed (it is now inserted in the rewrite):
''"Management needs to review the credit approvals process focusing on the weaknesses identified in the finding."''
The new version identifies both who should perform the action and the guidelines they should follow. It also highlights another important rule (not really one of grammar, but one of service quality): the recommendation as written is essentially a plan to make a plan.
Either management should make the changes identified, or they should not. If we merely request them to review the situation, we deliver no committed improvement to the Board. We should not say "review" when we mean "implement":
''"Management should implement the identified corrections to rectify the weaknesses in the credit approvals process identified in this report."''
Finally, we briefly consider two ad-hoc matters. The first is to do with capitalisation, while the second concerns the use of negatives.
Capitalising Every Word In a Sentence, or even a Random selection Of a few words, does not help our presentation. Excessive capitalisation affronts the reader; in internet terminology it is akin to SHOUTING AT THE READER. Capitals belong at the beginning of a sentence, or when naming a person, place or the title of a "thing". Capitalisation is rarely appropriate in the middle of a sentence.
Secondly, sentences should be expressed in the positive rather than the negative wherever possible. It is a standard sales technique to ask a prospect a question framed in the direction one wishes the answer to go:
"Would you prefer that my quote is open ended?"
As opposed to:
"Would you prefer that my quote is fixed?"
People tend to think immediately in sympathy with the speaker (at least until he or she threatens them with capitals!). If we express our sentences as negatives, not only do we lead the reader to disagree naturally (because they have been "trained" to say no by our text), but we also create a sea of double negatives, which may or may not imply a positive.
==Carrying the Case==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Death is in the details.'''
* '''Rule: Generalities are generally imprecise'''
* '''Rule: Let the facts carry the case.'''
</td></tr>
</table>
Much of what has been written in this guide goes to the issue of precision. In consulting and audit papers, accuracy of detail can determine the credibility attached to the consultant's or auditor's findings, as well as to the advice offered. The best strategy is to let the facts, clearly articulated, carry the argument.
The facts should not be embellished with emotional and vague descriptive words such as "large", "most", "substantially". We should state the quanta instead - "70%", "five out of eight", etc.
Try to avoid non-specific or vague words and expressions. This is especially true of quantities and times.
'''Examples'''
<table align=center >
<tr >
<th>Non-specific or vague</th><th> </th><th>Could mean or become</th>
</tr>
<tr>
<td>increased volumes</td><td></td><td>300 or more</td>
</tr>
<tr>
<td>drop in profit</td><td></td><td>profit was 20% lower</td>
</tr>
<tr>
<td>frequently</td><td></td><td>daily/weekly/monthly</td>
</tr>
<tr>
<td>rarely</td><td></td><td>once a year/decade</td>
</tr>
<tr>
<td>recently</td><td></td><td>yesterday/last week/month</td>
</tr>
<tr>
<td>shortly</td><td></td><td>tomorrow/next week/month</td>
</tr>
</table>
In the absence of statistical support for a finding, generalisation emerges. The discussion of the matter with the client becomes sidetracked over the meaning of words like "large" or "significant", rather than focussing on the issue identified and the solution required by the adviser.
Linked to these ideas is the form of words used to convey your point. Never use a long word where a short word will do. Long words may be interpreted by the reader as a deliberate attempt to mask puerility with false grandeur, because the underlying point is decrepit or flawed. (See what I mean?)
Having said that, do not be frightened of using a long or technically correct word simply because it has more than one syllable. You can always provide a glossary of terms at the start of the document (and frequently that is a good idea, even for some commonly misused terms). If your reader needs to become a little more educated to understand your work, then fine.
Writing is not about stooping to the lowest common denominator, but it is about communicating your point accurately and effectively. That is: you must actually get your point across; not merely make your reader feel inadequate. There is no point in being right, if nobody realises.
The point, then, is to use the shortest possible ''correct'' word - not merely the shortest word.
As a rule of thumb, if your reader has to seek out the meaning of more than two or three words in your report you have probably lost them...and they will probably resent you for it. Know your audience, prepare your audience for your language, and make sure they don't feel stupid by the end of it.
The customer for a consulting or audit report needs to be assured that adopting recommendations based upon the consultant's finding will add value to the business.
Auditors (particularly) need to go well beyond describing what is wrong. They need to explain the meaning of any finding: how it affects the organisation's bottom line, the potential cost of not addressing a problem, and the likelihood of exposure or error.
Likewise, consultants need to go well beyond simply parroting back the latest theory they discovered in the bottom of a glass of scotch or on the back of the cereal packet that morning. Consultants need to do a little more of the 'audit' thing and actually analyse what is really wrong before arguing convincingly for change.
Wherever possible in all such instances, be specific. "Numerous", "several" and "many" are words lacking in specifics. If this flies in the face of other advice to be brief, so be it.
The auditor/consultant should attempt to quantify the financial impact of a finding. While it may not be possible to arrive at a figure with mathematical precision, an informed guess can help management make a decision.
To be specific, the following are some examples of poor and improved content.
'''Poor'''
Differences exist in the cost of processing biscuit requisitions in various regions.
'''Better'''
The cost of processing biscuit requisitions differs from region to region. Vancouver can process a cheque for AUD 8 cents, while the equivalent in Australia is AUD 15 cents. Australia might save up to AUD 15 million by adopting Vancouver’s methods.
'''Poor'''
There is a lack of adequate management information to support activities and to facilitate meaningful comparisons between regional units.
'''Better'''
Management information is inadequate: staff costs are not analysed for benchmarking across various offices; calculation of product profitability does not include processing costs; and there is no allocation of fees and interest income by product type.
Finally, '''summaries''' are meant to be just that: a tight condensation of the main point or points of an issue. Be ruthless in getting rid of perhaps interesting but non-essential pieces of additional information – but retain the specifics.
==Tense, Pronouns and Infinitives==
<table border=1 align="center" style="background-color:#FFFF99" >
<tr><td >
* '''Rule: Don't split the infinitive'''
* '''Rule: Consistency is king'''
* '''Rule: Pronouns need a noun'''
* '''Rule: Unintroduced acronyms are antisocial'''
</td></tr>
</table>
''"To Boldly Go Where No Man Has Gone Before…"'' Perhaps one of the most recognised phrases in the English language, this bite of the Star Trek opening narration is also a prime example of atrocious English! It is a classic example of the split infinitive (not to mention the redundant preposition at the end of the sentence).
The directive should have read:
"Boldly To Go Where No Man Has Gone…" or, less poetically, "To Go Boldly Where No Man Has Gone…"
Perhaps, it would be best as:
"Go boldly, where none have gone.."
The infinitive is the basic form of a verb, invariably commencing with "to". It generally has no subject and, according to luminaries on the subject, should not be split. The reason is more stylistic than grammatical. The problem with split infinitives is more obvious when a few words are inserted between the "to" and its verb:
"The Roboteller machines are expected to really try hard to accurately and silently recognise the customer's identity."
Can be improved by:
"The Roboteller machines are expected to try really hard to recognise the customer's identity accurately and silently."
There are two common ways to avoid the split infinitive; both are presented in the rewrite above. The first is simply to move the offending adverb after the verb, although sometimes this leads to a stilted speech pattern. The second is to move the adverb(s) to the end of the sentence, as above.
Pronouns are words like he, she and it that substitute for a noun like Jim, Phred or bank branch. The noun to which a pronoun relates is established by the context in which the pronoun is placed. Consequently, if too many pronouns are used together, it becomes very difficult to determine for which noun an individual pronoun substitutes. As a general rule the target noun should immediately precede its related pronoun and be refreshed at least every two pronouns.
Similarly, an acronym (an abbreviation substituting for a noun or phrase) should be immediately preceded, the first time it is used, by the originating word or phrase. For example:
"The National Australia Bank (NAB) is a large and wonderful establishment. The NAB has an effective and happy audit team."
A completely unrelated matter (but grouped here for convenience) is that of consistency in the use of plurals and tense. It should be apparent to all authors that the use of the singular in a sentence should be reflected consistently throughout the rest of the sentence. It may be less obvious that the same rule applies to verb tense.
If we express a verb in one tense, such as the present continuous as in "I am having a good day", the balance of the argument should normally be presented in the same tense. This is not a strict rule, because there will be situations in which a finding will relate a historic situation in the lead sentence, while the discussion relates an assessment that is in the present tense.
It is reasonable to say that within a sentence changes in tense will generally create confusion, unless separated by a conjunction. For example:
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures were not obtained at all times."
Not
"In Antarctic Division wire transfer requests were accepted via e-mail and customer signatures are not obtained at all times."
But the following would be acceptable, because the first part states a continuing state while the latter part describes an historic observation relating to the first situation:
"In Antarctic Division wire transfer requests are accepted via e-mail and customer signatures were not obtained at all times."
Agreement of subject and verb: A singular subject demands a singular verb; a plural one demands a plural verb. Many such problems are caused by long sentences overloaded with adjectives and subordinate clauses where the subject is separated from its verbs. This is another reason for keeping sentences short.
Sometimes the rule is not immediately obvious, such as in the case of "none": "none were" should be "none was" (none = not one or no one).
'''Example:''' ''None of us is perfect.''
==Confusing Words==
These words are often confused:
* Affect (to impact upon, to assume) / effect (to bring about a change in)
* Object (the purpose)/ objective (the point of an exercise - usually military)
* Idol (a religious artefact, or object of worship) / Idyll (an imaginary ideal, or pastoral setting) / Idle (lazy, not in motion)
* Whom (the objective form of the relative pronoun) / who (the subjective form of the relative pronoun)
===A note about affect & effect===
A frequent source of error is confusion in the use of the similar-sounding words affect and effect, and of continual and continuous.
A cause for confusion is that affect is always a verb, while effect can be either a noun or a verb. Both continual and continuous are adjectives.
Affect is a verb meaning to influence. Effect as a verb means to bring about; as a noun it is equivalent to the word result.
The following represent correct usage.
Examples:
* Errors in computing affected the accuracy of the result.
* The effect of errors in computing was to produce an inaccurate result.
* Smoking cigarettes may affect your lungs.
* Giving up smoking had no effect on her general health.
* I didn’t finish the report because of continual telephone interruptions.
* Lights are left on in traffic tunnels to provide continuous illumination.
===A note about "due to"===
"Due to" is often used in the sense of through, because of or owing to. Mostly those alternatives are to be preferred. But it is correct to use due to in the sense of being attributable to.
'''Example:''' ''The plane crash was due to bad visibility.''
Don’t rely on your computer’s spellchecker for advice on grammar or correct spelling. Some systems are misleading. For example, you may be advised to change personal to personnel (or the other way round).
===A note about who & whom===
"Captain Kirk is the man whom the federation pays to fly the Enterprise." (Whom is the object of pays - the pronoun affected by the action of payment.)
And
"Captain Kirk is the man who we think flies the Enterprise." (Who is the subject of flies, not the object of think).
==Punctuation==
Punctuation matters.
* "What is this thing called love?" (As in: Let me count the ways...)
* "What! Is this thing called love?" (As in: Let me out of here...)
* "What is this thing called, love?" (As in: OMG! You are not coming near me with that!)
===Comma===
Used when essential for clarity or to indicate a small interruption in continuity of thought. Short sentence construction reduces the need for commas.
===Semicolon===
Using a semicolon indicates a pause greater than a comma but less than a colon or full stop. Often a semicolon helps to alert the reader to an alternative or compensating thought.
'''Example:''' ''The risk of lost muffins was high; however, quick action averted this crisis.''
Semicolons should be used at the end of each line in a series of bullet points, as an alternative to commas (see below).
'''Example:'''
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months;
* Schedule extra training for these and permanent staff;
* Upgrade software in the Biscuit Dispensing Machine;
* Simplify the standard form used for requisitioning for biscuits from the kitchen from ten pages to five; and
* Remove the requirement for VP Supply, VP HR, and CEO counter signing of all biscuit requisitions.
</em>
===Colon===
The colon is used to introduce a quotation, summary, conclusion or list of bullet points (as in the example above); or to introduce a list within a sentence.
'''Example:'''
''The report contains the following sections: employment, training, promotion, legal compliance, relations with other departments.''
===Full stop===
(Period in U.S. usage)
As well as indicating the end of a sentence, full stops are used in some abbreviations. It has become common for full stops to be omitted from word abbreviations. We counsel against that style: given the plethora of acronyms and technical jargon in today's language, using the full stop to signal that a word is an abbreviation of a possibly familiar word, rather than a technical term unknown to the reader, adds to clarity.
Where a bulleted list includes points that have more than one sentence, it is preferable to separate the points with full stops, not semi-colons as set out in the previous example.
Example:
<em>
The following actions are planned to clear the biscuits:
* Engage two temporary staff for six months. Qualifications include large appetites and general slothfulness. It is estimated that salaries will be approximately $13,000 per month each, plus biscuits.
* Schedule extra training for these and permanent staff. It is anticipated the training officer will need to allocate three hours weekly to the task.
* Upgrade software . . . (etc)
</em>
Note that where a full stop is used in a dot-point list, no conjunction is used to join the last two items.
Regardless of which dot-point separator is chosen, it must be used consistently throughout the list and, ideally, the document.
===Hyphen===
General usage previously demanded a hyphen if a prefix ended with the same letter as that beginning the word to which it was attached. So cooperate and coordinate were generally spelt co-operate and co-ordinate; hyphens in these instances are unnecessary. While reinforce and react are other examples where hyphens are not needed, sometimes a hyphen provides a warning that a word should not be read as a single syllable (e.g. re-use). Words formed using the prefix non- should nearly always be hyphenated (e.g. non-compliant, non-aligned), as should some words prefixed by pre- (e.g. pre-existing).
===Apostrophe===
Used to indicate possession or the omission of letters in a contraction.
'''Examples'''
<em>
* Bill’s car was taken to the wreckers.
* Bill hasn’t had time to replace his car yet.
</em>
There is often confusion about its and it’s. The simple test is whether the construction of a sentence means it is (or it has etc). If so, it’s is a contraction and needs an apostrophe; if not, its is a pronoun and needs no apostrophe. (Warning: Don’t get fooled by some computer spellchecking systems which get this wrong.)
A rough rule of thumb: if we are using "it" in the possessive sense (as in "its red tyre"), leave out the "'".
'''Examples'''
<em>
* It’s been a long time between drinks.
* The engine was tuned but its vibration wasn’t greatly reduced.
</em>
===Ellipses===
An ellipsis indicates that words have been omitted from a quotation, and is represented by three full stops separated by spaces.
'''Example'''
''Now is the time . . . to come to the aid of the party.''
===Quote marks===
These should not be used for emphasis. Use bold type or italic instead. Use quotation marks only when you are quoting or, after very long consideration of alternatives, when you are using a word or phrase you consider less than ideal for the situation.
<noinclude>
[[Category:Featured Article]]
[[Category:Management Science]]
[[Category:Internal Audit]]
{{BackLinks}}
</noinclude>
User:Bishopj
2
502
701
2019-09-13T16:27:55Z
Bishopj
1
Created page with "{{:Jonathan Bishop}}"
wikitext
text/x-wiki
{{:Jonathan Bishop}}
Jonathan Bishop
0
347
702
538
2019-09-13T16:30:50Z
Bishopj
1
wikitext
text/x-wiki
Jonathan Bishop
BCOM BSC CIA CISA CA MAICD
MANAGING DIRECTOR & CHAIRMAN - BISHOP PHILLIPS CONSULTING GROUP
<table width="100%">
<tr >
<td width="25%" valign=top >
[[Image:bishopj.png]]
==Education & Qualifications==
* BSc (Computer Science)
* BCom (Accounting/Economics)
* Chartered Accountant
* Certified Information Systems Auditor
* Certified Internal Auditor
==Current==
* Chairman – Bishop Phillips Consulting (Aust) & (Canada)
* Managing Director - Bishop Finance Pty Ltd
</td >
<td valign=top >
==Summary==
Jonathan has more than 25 years' experience in the accounting, governance, IT and strategy consulting industries, and 16 years in CEO, senior executive and board positions with a variety of commercial, educational and semi-government entities. His work history includes leading and authoring a variety of commercial software systems, enterprise scale project management, internal audit leadership and audit committee membership, strategy and performance management leadership, process design, and the authoring of books, methods and papers covering a wide range of management technologies.
He was the first person in Australia to be triply qualified as an ACA, CIA and CISA.
Jonathan is a thought leader in the areas of governance and control theory. With other thinkers in the field of governance systems design, he argues that a concept of holistic organisation design, based around a network of stakeholder communities and assertion-based process control objectives with predictive reporting, can provide economic, realistic, flexible, measurable and sustainable proof of governance, and deliver efficient operations and effective market response. He is a strong proponent of control self-assessment models for internal control and risk management. His theories have been applied in practice in many commercial and government organisations (see client list), and he was a significant advisor in the government reform space during the mid to late 1990s that saw some Australian governments embrace accrual accounting, whole-of-government shared service delivery, public-private partnerships, multi-year budgeting, and formal organisation governance and risk management.
His team was one of the first to define and adopt risk-based project management throughout all project advisory and large-scale systems implementation services. They have a 100% success rate in systems implementation projects across a period of 10 years using the approach, and during that time delivered more than 20 whole-of-organisation financial, HR and business management systems.
He has had the rare privilege of having been engaged by major consulting firms to advise them on internal systems delivery and project governance matters.
An active software developer in Delphi, .Net and a variety of web languages, Jonathan also leads the software engineering team of Bishop Phillips Consulting (publisher of, among other things, BPC RiskManager, BPC SurveyManager and BPC IncidentManager).
He is currently actively exploring the application of massively multiplayer virtual worlds to corporate, education and social networks, and over the last two years his virtual design company has completed a number of large design and construction projects in Second Life, one of the leading online virtual world environments.
</td>
</tr>
<tr >
<td valign=top >
==History==
* Joint Managing Director - Acumen Alliance, Victoria (5yrs)
* Joint Principal-in-Charge – Stanton Consulting Partners (7yrs)
* Vice President – William Angliss Institute of TAFE (7yrs)
* Vice President – Central Health Interpreter Service (1yr)
* Chairman – Angliss Consulting Pty Ltd (8yrs)
* Snr Mngr – Deloitte Touche Tohmatsu (2yrs)
* Chairman – Angliss Performance Review Committee (6yrs)
* Chairman – Angliss Audit Committee (2yrs)
* Other Directorships (Crnt & Frmr)
** Acumen Alliance Australia (4yrs)
** Acumen Alliance (Victoria) (5yrs)
** William Angliss Institute of TAFE (8yrs)
** Radio Beacon (Aust) (2yrs)
** Asia Alliance (Hong Kong) (2yrs)
** Acumen (UK) (2yrs)
* Other:
** Price Waterhouse (Snr)
** Australian National University (Academic)
** RMIT University (Academic)
** Judge - ACT Enterprise Workshop
</td >
<td >
==Capabilities & Skills==
* Project Management
* IT & Corporate Strategy
* Risk Management, Enterprise Governance & EGMS
* Process design & Control Systems Analysis
* Financial Management, Performance Measurement & Activity Based Costing
* Government Accounting
* Business Case Preparation
* Probity, Internal Audit, QA Reviews
* System Evaluations and Selection
* Information Systems – Design Specification
* IT Architecture, Network Design & Implementation
* Software Engineering – Design & Development
* Second Life simulator design, development and construction
==Governance & Consulting Client Engagements include:==
* '''''Managed the Internal Audit Units and Client Service Delivery''''' for over 50 Internal Audit & Governance Clients (over 18 years) including:
** Toyota, General Motors - Holden, United Automobile Manufacturers Association of Australia, City of Melbourne, City of Monash, Victorian Egg Marketing Board, VicRoads, Victorian Electricity Commission, VCGA/VCGR (Victorian Casino & Gaming Authority), Gippsland Water, William Angliss Institute of TAFE, Victorian College of the Arts, DEET/DEETYA, ComCare, ACT Dept. of Health, Woden Hospital, Royal Women’s Hospital, PSE Credit Union, Catholic Bank, NAB, National Mutual Insurance Ltd, Telstra, Dept. of the Treasury, Dept. of Finance, JHD, Dept. of The Senate, Dept. of The House of Representatives, Dept. of Parliamentary Library, ACT Urban Services, ACTEW, AusSpace Ltd, ACT TSA, BTR Nylex, etc
* '''''Process Reengineering and Strategy/Organisational Advice''''' for many clients across corporate, government and not-for-profit sectors including:
** Department for Victorian Communities (Grants Management), Department of the Senate, Central Health Interpreter Service, First Mildura Irrigation Trust, SCOPE Victoria, TabCorp, JB Were, Country Fire Authority, FrontLine Defense Services, ACT Health, Royal Women's Hospital, Australian National University, Sirius Communications, William Angliss, Department of Human Services (Victoria), Building Services Agency, City of Melbourne, VEC, Museum of Victoria, Department of Education (Victoria), Sutherland City Council, AIATSIS, Central Health Interpreter Service, etc.
* '''''Designed the Internal Method''''' and wrote the Internal Audit manuals for DEET/DEETYA, National Mutual, Acumen Alliance, Stanton Consulting and Deloitte Touche Tohmatsu.
* Designed and prototyped the online Board Internal Audit Reporting System for National Australia Bank
* '''''Enterprise Systems Project Management''''' or Advisory on implementations in Oracle Financials, PeopleSoft, SAP R3, Microsoft Networks and other vendors for clients such as TabCorp, FrontLine Defense Services, JB Were & Sons, Melbourne University, RMIT University, Dept of Treasury, Ansett Air Freight, SSOV, Trade Measurement Victoria, Farmer Brothers, National Library of Australia.
==Software Systems Developed Include:==
* '''''Co-Author of BPC RiskManager''''' used by – Australia Post, Telstra, Gippsland Water, United Energy, Deakin University, Monash University, Melbourne University, Victoria University, Swinburne University, Simon Fraser University, University of British Columbia, Nova Scotia University, BCIT, University of Calgary, University of Technology, BHP, AMP, Benfield
* '''''Author of BPC SurveyManager''''' used by, among others, the OTTE - Department of Education (Victorian Government) and ACFE - Department for Victorian Communities (Victorian Government) for very large scale state-wide student surveys, and by various BPC RiskManager clients as the core compliance engine for that system.
* '''''Co-author of BPC IncidentManager''''' (incident support system for BPC RiskManager)
* '''''Authored or co-authored other successful business software applications''''' including BPC TenderEvaluation, Arthur Andersen BAS Reporting, Arthur Andersen Practice Management, VG Dxf, TMV CRIS, Farmer Bros Retail System, FDS EIS & Stock Management System, TabCorp Tender Evaluation System, etc.
==Papers & Management Science Technologies==
Jonathan has a particular strength in policy and governance administration strategy & processes and has authored or co-authored a large volume of papers and manuals on a variety of related topics including:
* Authored various strategy and business modeling concepts including recent systems based on community networks, e-business enablement, commercial competitive tactics and strategies
* Authored the Deloitte Internal Audit Method (over 1000 pages), NAB Internal Audit Reporting manual, DEET Internal Audit Method and National Mutual Internal Audit Method, Stanton Internal Audit Method, Acumen Internal Audit Method.
* Authored the Stanton & Acumen Business Process Reengineering Method, Stanton & Acumen Victoria Management Reporting Design Method, Acumen Victoria Strategic Planning Method, Report Writing Style Guide, etc
* Authored and/or presented at conferences on Neural Networks and Diagnosis Assisting Databases using Neural Networks. Jointly designed and built one of the first practical neural network databases in the 1980s for pattern recognition and predictive reasoning.
</td>
</tr>
</table >
4c77849b8a5c2460b314d8b46eb337738fc9a0f9
703
702
2019-09-13T16:31:52Z
Bishopj
1
wikitext
text/x-wiki
Jonathan Bishop
BCOM BSC CIA CISA CA MAICD
MANAGING DIRECTOR & CHAIRMAN - BISHOP PHILLIPS CONSULTING GROUP
<table width="100%">
<tr >
<td width="25%" valign=top >
[[Image:bishopj.png]]
==Education & Qualifications==
* BSc (Computer Science)
* BCom (Accounting/Economics)
* Chartered Accountant
* Certified Information Systems Auditor
* Certified Internal Auditor
==Current==
* Chairman – Bishop Phillips Consulting (Aust) & (Canada)
* Managing Director - Bishop Finance Pty Ltd
</td >
<td valign=top >
==Summary==
Jonathan has more than 25years experience consulting in the accounting, governance, IT and strategy consulting industries, and 16 years in CEO, senior executive and board positions with a variety of commercial, educational and semi-government entities. His work history includes leading and authoring a variety of commercial software systems, enterprise scale project management, internal audit leadership and audit committee membership, strategy and performance management leadership, process design and the authoring of books, methods and papers covering a wide range of management technologies.
He was the first person in Australia to be tripply qualified as an ACA, CIA and CISA.
Jonathan is a thought leader in the areas of governance and control theory. With other thinkers in the field of governance systems design, he argues that a concept of holistic organisation design based around a network of stakeholder communities and assertion based process control objectives with predictive reporting can provide economic, realistic, flexible, measurable and sustainable proof of governance and deliver efficient operations and effective market response. He is a strong proposer of control self assessment models for internal control and risk management. His theories have been applied in practice on many commercial and government orgainsations (see client list) and he was a significant advisor in the government reform space during the mid to late 1990's that saw some Australian governments embrace accrual accounting, whole-of-government shared service delivery, public-private partnerships, multi-year budgeting, and formal organisation governance and risk management.
His team was one of the first to define and adopt risk based project management throughout all project advisory and large scale systems implementation services. They have of a 100% success rate in systems implementation projects across a period of 10 years using the approach and during that time delivered more than 20 whole of organisation financial, HR and business management systems.
He has had the rare privilege of having been engaged by major consulting firms to advise them on internal systems delivery and project governance matters.
An active software developer in Delphi, .Net and a variety of web languages, Jonathan also leads the software engineering team of Bishop Phillips Consulting (a publisher of among other things BPC RiskManager, BPC SurveyManger and BPC IncidentManager).
He is currently actively exploring the application of massively multiplayer virtual worlds for corporate, education and social networks and over the last two years his virtual design company has completed a number of large design and construction projects in Second Life, one of the leading online virtual world environments.
</td>
</tr>
<tr >
<td valign=top >
==History==
* Joint Managing Director - Acumen Alliance, Victoria (5yrs)
* Joint Principal-in-Charge – Stanton Consulting Partners (7yrs)
* Vice President – William Angliss Institute of TAFE (7yrs)
* Vice President – Central Health Interpreter Service (1yr)
* Chairman – Angliss Consulting Pty Ltd (8yrs)
* Snr Mngr – Deloitte Touche Tohmatsu (2yrs)
* Chairman – Angliss Performance Review Committee (6yrs)
* Chairman – Angliss Audit Committee (2yrs)
* Other Directorships (Crnt & Frmr)
** Acumen Alliance Australia (4yrs)
** Acumen Alliance (Victoria) (5yrs)
** William Angliss Institute of TAFE (8yrs)
** Radio Beacon (Aust) (2yrs)
** Asia Alliance (Hong Kong) (2yrs)
** Acumen (UK) (2yrs)
* Other:
** Price Waterhouse (Snr)
** Australian National University (Academic)
** RMIT University (Academic)
** Judge - ACT Enterprise Workshop
</td >
<td >
==Capabilities & Skills==
* Project Management
* IT & Corporate Strategy
* Risk Management, Enterprise Governance & EGMS
* Process design & Control Systems Analysis
* Financial Management, Performance Measurement & Activity Based Costing
* Government Accounting
* Business Case Preparation
* Probity, Internal Audit, QA Reviews
* System Evaluations and Selection
* Information Systems – Design Specification
* IT Architecture, Network Design & Implementation
* Software Engineering – Design & Development
* Second Life simulator design, development and construction
==Governance & Consulting Client Engagements include:==
* '''''Managed the Internal Audit Units and Client Service Delivery''''' for over 50 Internal Audit & Governance Clients (over 18 years) including:
** Toyota, General Motors - Holden, United Automobile Manufacturers Association of Australia, City of Melbourne, City of Monash, Victorian Egg Marketing Board, VicRoads, Victorian Electricity Commission, VCGA/VCGR (Victorian Casino & Gaming Authority), Gippsland Water, William Angliss Institute of TAFE, Victorian College of the Arts, DEET/DEETYA, ComCare, ACT Dept. of Health, Woden Hospital, Royal Women’s Hospital, PSE Credit Union, Catholic Bank, NAB, National Mutual Insurance Ltd, Telstra, Dept. of the Treasury, Dept. of Finance, JHD, Dept. of the Senate, Dept. of the House of Representatives, Dept. of Parliamentary Library, ACT Urban Services, ACTEW, AusSpace Ltd, ACT TSA, BTR Nylex, etc.
* '''''Process Reengineering and Strategy/Organisational Advice''''' for many clients across corporate, government and not-for-profit sectors including:
** Department for Victorian Communities (Grants Management), Department of the Senate, Central Health Interpreter Service, First Mildura Irrigation Trust, SCOPE Victoria, TabCorp, JB Were, Country Fire Authority, FrontLine Defense Services, ACT Health, Royal Women's Hospital, Australian National University, Sirius Communications, William Angliss, Department of Human Services (Victoria), Building Services Agency, City of Melbourne, VEC, Museum of Victoria, Department of Education (Victoria), Sutherland City Council, AIATSIS, Central Health Interpreter Service, etc.
* '''''Designed the Internal Audit Method''''' and wrote the Internal Audit manuals for DEET/DEETYA, National Mutual, Acumen Alliance, Stanton Consulting and Deloitte Touche Tohmatsu.
* Designed and prototyped the online Board Internal Audit Reporting System for National Australia Bank
* '''''Enterprise Systems Project Management''''' or advisory on implementations of Oracle Financials, PeopleSoft, SAP R3, Microsoft Networks and other vendor platforms, for clients such as TabCorp, FrontLine Defense Services, JB Were & Sons, Melbourne University, RMIT University, Dept of Treasury, Ansett Air Freight, SSOV, Trade Measurement Victoria, Farmer Brothers, National Library of Australia.
==Software Systems Developed Include:==
* '''''Co-Author of BPC RiskManager''''', used by Australia Post, Telstra, Gippsland Water, United Energy, Deakin University, Monash University, Melbourne University, Victoria University, Swinburne University, Simon Fraser University, University of British Columbia, Nova Scotia University, BCIT, University of Calgary, University of Technology, BHP, AMP, Benfield
* '''''Author of BPC SurveyManager''''', used by, among others, the OTTE - Department of Education (Victorian Government) and ACFE - Department for Victorian Communities (Victorian Government) for very large scale state-wide student surveys, and by various BPC RiskManager clients as the core compliance engine for that system.
* '''''Co-author of BPC IncidentManager''''' (incident support system for BPC RiskManager)
* '''''Authored or co-authored other successful business software applications''''' including BPC TenderEvaluation, Arthur Andersen BAS Reporting, Arthur Andersen Practice Management, VG Dxf, TMV CRIS, Farmer Bros Retail System, FDS EIS & Stock Management System, TabCorp Tender Evaluation System, etc.
==Papers & Management Science Technologies==
Jonathan has a particular strength in policy and governance administration strategy & processes and has authored or co-authored a large volume of papers and manuals on a variety of related topics including:
* Authored various strategy and business modeling concepts including recent systems based on community networks, e-business enablement, commercial competitive tactics and strategies
* Authored the Deloitte Internal Audit Method (over 1000 pages), NAB Internal Audit Reporting manual, DEET Internal Audit Method and National Mutual Internal Audit Method, Stanton Internal Audit Method, Acumen Internal Audit Method.
* Authored the Stanton & Acumen Business Process Reengineering Method, Stanton & Acumen Victoria Management Reporting Design Method, Acumen Victoria Strategic Planning Method, Report Writing Style Guide, etc
* Authored and/or presented at conferences on Neural Networks and Diagnosis-Assisting Databases using Neural Networks. Jointly designed and built one of the first practical neural network databases in the 1980s for pattern recognition and predictive reasoning.
</td>
</tr>
</table >
a02aec9d135e7fc9b54fb1ba8bd442ca1d64f612
704
703
2019-09-13T16:32:48Z
Bishopj
1
wikitext
text/x-wiki
Jonathan Bishop
BCOM BSC CIA CISA CA MAICD<br/>
MANAGING DIRECTOR & CHAIRMAN - BISHOP PHILLIPS CONSULTING GROUP
<table width="100%">
<tr >
<td width="25%" valign=top >
[[Image:bishopj.png]]
==Education & Qualifications==
* BSc (Computer Science)
* BCom (Accounting/Economics)
* Chartered Accountant
* Certified Information Systems Auditor
* Certified Internal Auditor
==Current==
* Chairman – Bishop Phillips Consulting (Aust) & (Canada)
* Managing Director - Bishop Finance Pty Ltd
</td >
<td valign=top >
==Summary==
Jonathan has more than 25 years' experience consulting in the accounting, governance, IT and strategy industries, and 16 years in CEO, senior executive and board positions with a variety of commercial, educational and semi-government entities. His work history includes leading and authoring a variety of commercial software systems, enterprise-scale project management, internal audit leadership and audit committee membership, strategy and performance management leadership, process design, and the authoring of books, methods and papers covering a wide range of management technologies.
He was the first person in Australia to be triply qualified as an ACA, CIA and CISA.
Jonathan is a thought leader in the areas of governance and control theory. With other thinkers in the field of governance systems design, he argues that a concept of holistic organisation design, based around a network of stakeholder communities and assertion-based process control objectives with predictive reporting, can provide economic, realistic, flexible, measurable and sustainable proof of governance, and deliver efficient operations and effective market response. He is a strong proponent of control self-assessment models for internal control and risk management. His theories have been applied in practice in many commercial and government organisations (see client list), and he was a significant advisor in the government reform space during the mid to late 1990s that saw some Australian governments embrace accrual accounting, whole-of-government shared service delivery, public-private partnerships, multi-year budgeting, and formal organisation governance and risk management.
His team was one of the first to define and adopt risk-based project management throughout all project advisory and large-scale systems implementation services. They have a 100% success rate in systems implementation projects across a period of 10 years using the approach, and during that time delivered more than 20 whole-of-organisation financial, HR and business management systems.
He has had the rare privilege of having been engaged by major consulting firms to advise them on internal systems delivery and project governance matters.
An active software developer in Delphi, .Net and a variety of web languages, Jonathan also leads the software engineering team of Bishop Phillips Consulting (publisher of, among other things, BPC RiskManager, BPC SurveyManager and BPC IncidentManager).
He is currently exploring the application of massively multiplayer virtual worlds for corporate, education and social networks, and over the last two years his virtual design company has completed a number of large design and construction projects in Second Life, one of the leading online virtual world environments.
</td>
</tr>
<tr >
<td valign=top >
==History==
* Joint Managing Director - Acumen Alliance, Victoria (5yrs)
* Joint Principal-in-Charge – Stanton Consulting Partners (7yrs)
* Vice President – William Angliss Institute of TAFE (7yrs)
* Vice President – Central Health Interpreter Service (1yr)
* Chairman – Angliss Consulting Pty Ltd (8yrs)
* Snr Mngr – Deloitte Touche Tohmatsu (2yrs)
* Chairman – Angliss Performance Review Committee (6yrs)
* Chairman – Angliss Audit Committee (2yrs)
* Other Directorships (Crnt & Frmr)
** Acumen Alliance Australia (4yrs)
** Acumen Alliance (Victoria) (5yrs)
** William Angliss Institute of TAFE (8yrs)
** Radio Beacon (Aust) (2yrs)
** Asia Alliance (Hong Kong) (2yrs)
** Acumen (UK) (2yrs)
* Other:
** Price Waterhouse (Snr)
** Australian National University (Academic)
** RMIT University (Academic)
** Judge - ACT Enterprise Workshop
</td >
<td >
==Capabilities & Skills==
* Project Management
* IT & Corporate Strategy
* Risk Management, Enterprise Governance & EGMS
* Process design & Control Systems Analysis
* Financial Management, Performance Measurement & Activity Based Costing
* Government Accounting
* Business Case Preparation
* Probity, Internal Audit, QA Reviews
* System Evaluations and Selection
* Information Systems – Design Specification
* IT Architecture, Network Design & Implementation
* Software Engineering – Design & Development
* Second Life simulator design, development and construction
==Governance & Consulting Client Engagements include:==
* '''''Managed the Internal Audit Units and Client Service Delivery''''' for over 50 Internal Audit & Governance Clients (over 18 years) including:
** Toyota, General Motors - Holden, United Automobile Manufacturers Association of Australia, City of Melbourne, City of Monash, Victorian Egg Marketing Board, VicRoads, Victorian Electricity Commission, VCGA/VCGR (Victorian Casino & Gaming Authority), Gippsland Water, William Angliss Institute of TAFE, Victorian College of the Arts, DEET/DEETYA, ComCare, ACT Dept. of Health, Woden Hospital, Royal Women’s Hospital, PSE Credit Union, Catholic Bank, NAB, National Mutual Insurance Ltd, Telstra, Dept. of the Treasury, Dept. of Finance, JHD, Dept. of the Senate, Dept. of the House of Representatives, Dept. of Parliamentary Library, ACT Urban Services, ACTEW, AusSpace Ltd, ACT TSA, BTR Nylex, etc.
* '''''Process Reengineering and Strategy/Organisational Advice''''' for many clients across corporate, government and not-for-profit sectors including:
** Department for Victorian Communities (Grants Management), Department of the Senate, Central Health Interpreter Service, First Mildura Irrigation Trust, SCOPE Victoria, TabCorp, JB Were, Country Fire Authority, FrontLine Defense Services, ACT Health, Royal Women's Hospital, Australian National University, Sirius Communications, William Angliss, Department of Human Services (Victoria), Building Services Agency, City of Melbourne, VEC, Museum of Victoria, Department of Education (Victoria), Sutherland City Council, AIATSIS, Central Health Interpreter Service, etc.
* '''''Designed the Internal Audit Method''''' and wrote the Internal Audit manuals for DEET/DEETYA, National Mutual, Acumen Alliance, Stanton Consulting and Deloitte Touche Tohmatsu.
* Designed and prototyped the online Board Internal Audit Reporting System for National Australia Bank
* '''''Enterprise Systems Project Management''''' or advisory on implementations of Oracle Financials, PeopleSoft, SAP R3, Microsoft Networks and other vendor platforms, for clients such as TabCorp, FrontLine Defense Services, JB Were & Sons, Melbourne University, RMIT University, Dept of Treasury, Ansett Air Freight, SSOV, Trade Measurement Victoria, Farmer Brothers, National Library of Australia.
==Software Systems Developed Include:==
* '''''Co-Author of BPC RiskManager''''', used by Australia Post, Telstra, Gippsland Water, United Energy, Deakin University, Monash University, Melbourne University, Victoria University, Swinburne University, Simon Fraser University, University of British Columbia, Nova Scotia University, BCIT, University of Calgary, University of Technology, BHP, AMP, Benfield
* '''''Author of BPC SurveyManager''''', used by, among others, the OTTE - Department of Education (Victorian Government) and ACFE - Department for Victorian Communities (Victorian Government) for very large scale state-wide student surveys, and by various BPC RiskManager clients as the core compliance engine for that system.
* '''''Co-author of BPC IncidentManager''''' (incident support system for BPC RiskManager)
* '''''Authored or co-authored other successful business software applications''''' including BPC TenderEvaluation, Arthur Andersen BAS Reporting, Arthur Andersen Practice Management, VG Dxf, TMV CRIS, Farmer Bros Retail System, FDS EIS & Stock Management System, TabCorp Tender Evaluation System, etc.
==Papers & Management Science Technologies==
Jonathan has a particular strength in policy and governance administration strategy & processes and has authored or co-authored a large volume of papers and manuals on a variety of related topics including:
* Authored various strategy and business modeling concepts including recent systems based on community networks, e-business enablement, commercial competitive tactics and strategies
* Authored the Deloitte Internal Audit Method (over 1000 pages), NAB Internal Audit Reporting manual, DEET Internal Audit Method and National Mutual Internal Audit Method, Stanton Internal Audit Method, Acumen Internal Audit Method.
* Authored the Stanton & Acumen Business Process Reengineering Method, Stanton & Acumen Victoria Management Reporting Design Method, Acumen Victoria Strategic Planning Method, Report Writing Style Guide, etc
* Authored and/or presented at conferences on Neural Networks and Diagnosis-Assisting Databases using Neural Networks. Jointly designed and built one of the first practical neural network databases in the 1980s for pattern recognition and predictive reasoning.
</td>
</tr>
</table >
f909e0a32d7986022954e1bcb2de7aae94b9166a
706
705
2019-09-13T16:34:53Z
Bishopj
1
wikitext
text/x-wiki
Jonathan Bishop
BCOM BSC CIA CISA CA MAICD<br/>
MANAGING DIRECTOR & CHAIRMAN - BISHOP PHILLIPS CONSULTING GROUP
<table width="100%">
<tr >
<td width="25%" valign=top >
[[Image:bishopj.png]]
==Education & Qualifications==
* BSc (Computer Science)
* BCom (Accounting/Economics)
* Chartered Accountant
* Certified Information Systems Auditor
* Certified Internal Auditor
==Current==
* Chairman – Bishop Phillips Consulting (Aust) & (Canada)
* Managing Director - Bishop Finance Pty Ltd
</td >
<td valign=top >
==Summary==
Jonathan has more than 25 years' experience consulting in the accounting, governance, IT and strategy industries, and 16 years in CEO, senior executive and board positions with a variety of commercial, educational and semi-government entities. His work history includes leading and authoring a variety of commercial software systems, enterprise-scale project management, internal audit leadership and audit committee membership, strategy and performance management leadership, process design, and the authoring of books, methods and papers covering a wide range of management technologies.
He was the first person in Australia to be triply qualified as an ACA, CIA and CISA.
Jonathan is a thought leader in the areas of governance and control theory. With other thinkers in the field of governance systems design, he argues that a concept of holistic organisation design, based around a network of stakeholder communities and assertion-based process control objectives with predictive reporting, can provide economic, realistic, flexible, measurable and sustainable proof of governance, and deliver efficient operations and effective market response. He is a strong proponent of control self-assessment models for internal control and risk management. His theories have been applied in practice in many commercial and government organisations (see client list), and he was a significant advisor in the government reform space during the mid to late 1990s that saw some Australian governments embrace accrual accounting, whole-of-government shared service delivery, public-private partnerships, multi-year budgeting, and formal organisation governance and risk management.
His team was one of the first to define and adopt risk-based project management throughout all project advisory and large-scale systems implementation services. They have a 100% success rate in systems implementation projects across a period of 10 years using the approach, and during that time delivered more than 20 whole-of-organisation financial, HR and business management systems.
He has had the rare privilege of having been engaged by major consulting firms to advise them on internal systems delivery and project governance matters.
An active software developer in Delphi, .Net and a variety of web languages, Jonathan also leads the software engineering team of Bishop Phillips Consulting (publisher of, among other things, BPC RiskManager, BPC SurveyManager and BPC IncidentManager).
He is currently exploring the application of massively multiplayer virtual worlds for corporate, education and social networks, and over the last two years his virtual design company has completed a number of large design and construction projects in Second Life, one of the leading online virtual world environments.
</td>
</tr>
<tr >
<td valign=top >
==History==
* Joint Managing Director - Acumen Alliance, Victoria (5yrs)
* Joint Principal-in-Charge – Stanton Consulting Partners (7yrs)
* Vice President – William Angliss Institute of TAFE (7yrs)
* Vice President – Central Health Interpreter Service (1yr)
* Chairman – Angliss Consulting Pty Ltd (8yrs)
* Snr Mngr – Deloitte Touche Tohmatsu (2yrs)
* Chairman – Angliss Performance Review Committee (6yrs)
* Chairman – Angliss Audit Committee (2yrs)
* Other Directorships (Crnt & Frmr)
** Acumen Alliance Australia (4yrs)
** Acumen Alliance (Victoria) (5yrs)
** William Angliss Institute of TAFE (8yrs)
** Radio Beacon (Aust) (2yrs)
** Asia Alliance (Hong Kong) (2yrs)
** Acumen (UK) (2yrs)
* Other:
** Price Waterhouse (Snr)
** Australian National University (Academic)
** RMIT University (Academic)
** Judge - ACT Enterprise Workshop
</td >
<td >
==Capabilities & Skills==
* Project Management
* IT & Corporate Strategy
* Risk Management, Enterprise Governance & EGMS
* Process design & Control Systems Analysis
* Financial Management, Performance Measurement & Activity Based Costing
* Government Accounting
* Business Case Preparation
* Probity, Internal Audit, QA Reviews
* System Evaluations and Selection
* Information Systems – Design Specification
* IT Architecture, Network Design & Implementation
* Software Engineering – Design & Development
* Second Life simulator design, development and construction
==Governance & Consulting Client Engagements include:==
* '''''Managed the Internal Audit Units and Client Service Delivery''''' for over 50 Internal Audit & Governance Clients (over 18 years) including:
** Toyota, General Motors - Holden, United Automobile Manufacturers Association of Australia, City of Melbourne, City of Monash, Victorian Egg Marketing Board, VicRoads, Victorian Electricity Commission, VCGA/VCGR (Victorian Casino & Gaming Authority), Gippsland Water, William Angliss Institute of TAFE, Victorian College of the Arts, DEET/DEETYA, ComCare, ACT Dept. of Health, Woden Hospital, Royal Women’s Hospital, PSE Credit Union, Catholic Bank, NAB, National Mutual Insurance Ltd, Telstra, Dept. of the Treasury, Dept. of Finance, JHD, Dept. of the Senate, Dept. of the House of Representatives, Dept. of Parliamentary Library, ACT Urban Services, ACTEW, AusSpace Ltd, ACT TSA, BTR Nylex, etc.
* '''''Process Reengineering and Strategy/Organisational Advice''''' for many clients across corporate, government and not-for-profit sectors including:
** Department for Victorian Communities (Grants Management), Department of the Senate, Central Health Interpreter Service, First Mildura Irrigation Trust, SCOPE Victoria, TabCorp, JB Were, Country Fire Authority, FrontLine Defense Services, ACT Health, Royal Women's Hospital, Australian National University, Sirius Communications, William Angliss, Department of Human Services (Victoria), Building Services Agency, City of Melbourne, VEC, Museum of Victoria, Department of Education (Victoria), Sutherland City Council, AIATSIS, Central Health Interpreter Service, etc.
* '''''Designed the Internal Audit Method''''' and wrote the Internal Audit manuals for DEET/DEETYA, National Mutual, Acumen Alliance, Stanton Consulting and Deloitte Touche Tohmatsu.
* Designed and prototyped the online Board Internal Audit Reporting System for National Australia Bank
* '''''Enterprise Systems Project Management''''' or advisory on implementations of Oracle Financials, PeopleSoft, SAP R3, Microsoft Networks and other vendor platforms, for clients such as TabCorp, FrontLine Defense Services, JB Were & Sons, Melbourne University, RMIT University, Dept of Treasury, Ansett Air Freight, SSOV, Trade Measurement Victoria, Farmer Brothers, National Library of Australia.
==Software Systems Developed Include:==
* '''''Co-Author of BPC RiskManager''''' used by Australia Post, Telstra, Gippsland Water, United Energy, Deakin University, Monash University, Melbourne University, Victoria University, Swinburne University, Simon Fraser University, University of British Columbia, Nova Scotia University, BCIT, University of Calgary, University of Technology, BHP, AMP and Benfield
* '''''Author of BPC SurveyManager''''' used by, among others, the OTTE - Department of Education (Victorian Government) and ACFE - Department for Victorian Communities (Victorian Government) for very large scale state-wide student surveys, and by various BPC RiskManager clients as the core compliance engine for that system.
* '''''Co-author of BPC IncidentManager''''' (incident support system for BPC RiskManager)
* '''''Authored or co-authored other successful business software applications''''' including BPC TenderEvaluation, Arthur Andersen BAS Reporting, Arthur Andersen Practice Management, VG Dxf, TMV CRIS, Farmer Bros Retail System, FDS EIS & Stock Management System, TabCorp Tender Evaluation System, etc.
==Papers & Management Science Technologies==
Jonathan has a particular strength in policy and governance administration strategy & processes and has authored or co-authored a large volume of papers and manuals on a variety of related topics including:
* Authored various strategy and business modeling concepts including recent systems based on community networks, e-business enablement, commercial competitive tactics and strategies
* Authored the Deloitte Internal Audit Method (over 1000 pages), NAB Internal Audit Reporting manual, DEET Internal Audit Method and National Mutual Internal Audit Method, Stanton Internal Audit Method, Acumen Internal Audit Method.
* Authored the Stanton & Acumen Business Process Reengineering Method, Stanton & Acumen Victoria Management Reporting Design Method, Acumen Victoria Strategic Planning Method, Report Writing Style Guide, etc.
* Authored and/or presented conference papers on Neural Networks and Diagnosis Assisting Databases using Neural Networks. Jointly designed and built one of the first practical neural network databases in the 1980s for pattern recognition and predictive reasoning.
</td>
</tr>
</table >
647beda4d83c4679cd955b3db0e658656f4ccc9f
710
709
2019-09-13T16:44:09Z
Bishopj
1
wikitext
text/x-wiki
Jonathan Bishop
BCOM BSC CIA CISA CA MAICD<br/>
MANAGING DIRECTOR & CHAIRMAN - BISHOP PHILLIPS CONSULTING GROUP
<table width="100%">
<tr >
<td width="25%" valign=top >
[[Image:bishopj.png]]
==Education & Qualifications==
* BSc (Computer Science)
* BCom (Accounting/Economics)
* Chartered Accountant
* Certified Information Systems Auditor
* Certified Internal Auditor
==Current==
* Chairman – Bishop Phillips Consulting (Aust) & (Canada)
* Managing Director - Bishop Finance Pty Ltd
</td >
<td valign=top >
==Summary==
Jonathan has more than 34 years' experience in the accounting, governance, IT and strategy consulting industries, and 25 years in CEO, senior executive and board positions with a variety of commercial, educational and semi-government entities. His work history includes leading and authoring a variety of commercial software systems, enterprise-scale project management, internal audit leadership and audit committee membership, strategy and performance management leadership, process design, and the authoring of books, methods and papers covering a wide range of management technologies.
He was the first person in Australia to be triply qualified as an ACA, CIA and CISA.
Jonathan is a thought leader in the areas of governance and control theory. With other thinkers in the field of governance systems design, he argues that a concept of holistic organisation design, based around a network of stakeholder communities and assertion-based process control objectives with predictive reporting, can provide economic, realistic, flexible, measurable and sustainable proof of governance, and deliver efficient operations and effective market response. He is a strong proponent of control self-assessment models for internal control and risk management. His theories have been applied in practice in many commercial and government organisations (see client list), and he was a significant adviser in the government reform space during the mid-to-late 1990s that saw some Australian governments embrace accrual accounting, whole-of-government shared service delivery, public-private partnerships, multi-year budgeting, and formal organisation governance and risk management.
His team was one of the first to define and adopt risk-based project management throughout all project advisory and large-scale systems implementation services. They achieved a 100% success rate in systems implementation projects over a period of 10 years using the approach, and during that time delivered more than 20 whole-of-organisation financial, HR and business management systems.
He has had the rare privilege of having been engaged by major consulting firms to advise them on internal systems delivery and project governance matters.
An active software developer in Delphi, .Net and a variety of web languages, Jonathan also leads the software engineering team of Bishop Phillips Consulting (a publisher of, among other things, BPC RiskManager, BPC SurveyManager and BPC IncidentManager).
He is currently actively exploring the application of massively multiplayer virtual worlds for corporate, education and social networks and over the last two years his virtual design company has completed a number of large design and construction projects in Second Life, one of the leading online virtual world environments.
</td>
</tr>
<tr >
<td valign=top >
==History==
* Joint Managing Director - Acumen Alliance, Victoria (5yrs)
* Joint Principal-in-Charge – Stanton Consulting Partners (7yrs)
* Vice President – William Angliss Institute of TAFE (7yrs)
* Vice President – Central Health Interpreter Service (1yr)
* Chairman – Angliss Consulting Pty Ltd (8yrs)
* Snr Mngr – Deloitte Touche Tohmatsu (2yrs)
* Chairman – Angliss Performance Review Committee (6yrs)
* Chairman – Angliss Audit Committee (2yrs)
* Other Directorships (Current & Former)
** Acumen Alliance Australia (4yrs)
** Acumen Alliance (Victoria) (5yrs)
** William Angliss Institute of TAFE (8yrs)
** Radio Beacon (Aust) (2yrs)
** Asia Alliance (Hong Kong) (2yrs)
** Acumen (UK) (2yrs)
* Other:
** Price Waterhouse (Snr)
** Australian National University (Academic)
** RMIT University (Academic)
** Judge - ACT Enterprise Workshop
</td >
<td >
==Capabilities & Skills==
* Project Management
* IT & Corporate Strategy
* Risk Management, Enterprise Governance & EGMS
* Process design & Control Systems Analysis
* Financial Management, Performance Measurement & Activity Based Costing
* Government Accounting
* Business Case Preparation
* Probity, Internal Audit, QA Reviews
* System Evaluations and Selection
* Information Systems – Design Specification
* IT Architecture, Network Design & Implementation
* Software Engineering – Design & Development
* Second Life simulator design, development and construction
==Governance & Consulting Client Engagements include:==
* '''''Managed the Internal Audit Units and Client Service Delivery''''' for over 50 Internal Audit & Governance Clients (over 18 years) including:
** Toyota, General Motors - Holden, United Automobile Manufacturers Association of Australia, City of Melbourne, City of Monash, Victorian Egg Marketing Board, VicRoads, Victorian Electricity Commission, VCGA/VCGR (Victorian Casino & Gaming Authority), Gippsland Water, William Angliss Institute of TAFE, Victorian College of the Arts, DEET/DEETYA, ComCare, ACT Dept. of Health, Woden Hospital, Royal Women’s Hospital, PSE Credit Union, Catholic Bank, NAB, National Mutual Insurance Ltd, Telstra, Dept. of the Treasury, Dept. of Finance, JHD, Dept. of The Senate, Dept. of The House of Representatives, Dept. of Parliamentary Library, ACT Urban Services, ACTEW, AusSpace Ltd, ACT TSA, BTR Nylex, etc
* '''''Process Reengineering and Strategy/Organisational Advice''''' for many clients across corporate, government and not-for-profit sectors including:
** Department for Victorian Communities (Grants Management), Department of the Senate, Central Health Interpreter Service, First Mildura Irrigation Trust, SCOPE Victoria, TabCorp, JB Were, Country Fire Authority, FrontLine Defense Services, ACT Health, Royal Women's Hospital, Australian National University, Sirius Communications, William Angliss, Department of Human Services (Victoria), Building Services Agency, City of Melbourne, VEC, Museum of Victoria, Department of Education (Victoria), Sutherland City Council, AIATSIS, etc.
* '''''Designed the Internal Audit Method''''' and wrote the Internal Audit manuals for DEET/DEETYA, National Mutual, Acumen Alliance, Stanton Consulting and Deloitte Touche Tohmatsu.
* Designed and prototyped the online Board Internal Audit Reporting System for National Australia Bank
* '''''Enterprise Systems Project Management''''' or Advisory on implementations in Oracle Financials, PeopleSoft, SAP R3, Microsoft Networks and other vendors for clients such as TabCorp, FrontLine Defense Services, JB Were & Sons, Melbourne University, RMIT University, Dept of Treasury, Ansett Air Freight, SSOV, Trade Measurement Victoria, Farmer Brothers, National Library of Australia.
==Software Systems Developed Include:==
* '''''Co-Author of BPC RiskManager''''' used by – Australia Post, Telstra, Gippsland Water, United Energy, Deakin University, Monash University, Melbourne University, Victoria University, Swinburne University, Simon Fraser University, University of British Columbia, Nova Scotia University, BCIT, University of Calgary, University of Technology, BHP, AMP, Benfield
* '''''Author of BPC SurveyManager''''' used by, among others, the OTTE (Department of Education, Victorian Government) and the ACFE (Department for Victorian Communities, Victorian Government) for very large-scale state-wide student surveys, and by various BPC RiskManager clients as the core compliance engine for that system.
* '''''Co-author of BPC IncidentManager''''' (incident support system for BPC RiskManager)
* '''''Authored or co-authored other successful business software applications''''' including BPC TenderEvaluation, Arthur Andersen BAS Reporting, Arthur Andersen Practice Management, VG Dxf, TMV CRIS, Farmer Bros Retail System, FDS EIS & Stock Management System, TabCorp Tender Evaluation System, etc.
==Papers & Management Science Technologies==
Jonathan has a particular strength in policy and governance administration strategy & processes and has authored or co-authored a large volume of papers and manuals on a variety of related topics including:
* Authored various strategy and business modeling concepts including recent systems based on community networks, e-business enablement, commercial competitive tactics and strategies
* Authored the Deloitte Internal Audit Method (over 1000 pages), NAB Internal Audit Reporting manual, DEET Internal Audit Method and National Mutual Internal Audit Method, Stanton Internal Audit Method, Acumen Internal Audit Method.
* Authored the Stanton & Acumen Business Process Reengineering Method, Stanton & Acumen Victoria Management Reporting Design Method, Acumen Victoria Strategic Planning Method, Report Writing Style Guide, etc
* Authored and/or presented at conferences on Neural Networks, and Diagnosis Assisting Databases using Neural Networks. Jointly designed and built one of the first practical neural network databases in the 1980s for pattern recognition and predictive reasoning.
</td>
</tr>
</table >
94ea2bc42a93b80b86f611b3cd6bb7f5d75dca5f
User talk:Bishopj
3
503
711
2021-03-19T04:56:14Z
Bishopj
1
Created page with "Bishop Talk Page"
wikitext
text/x-wiki
Bishop Talk Page
6854c77e2ae8de21e9d6431a13d32200e5838b6b
MediaWiki:Sidebar
8
504
712
2023-10-19T14:51:35Z
Bishopj
1
Created page with " * navigation ** [https://www.bishopphillips.com BPC%20Home] ** mainpage|mainpage-description ** recentchanges-url|recentchanges ** randompage-url|randompage ** helppage|help..."
wikitext
text/x-wiki
* navigation
** [https://www.bishopphillips.com BPC%20Home]
** mainpage|mainpage-description
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
6992c4cfc570d104ac20ed02e504142f424ce495
713
712
2023-10-19T14:54:07Z
Bishopj
1
wikitext
text/x-wiki
* navigation
** [https://www.bishopphillips.com BPC Home]
** mainpage|mainpage-description
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
f8610de4ba3ddcb7f3df6ced0a2e15d356df4e43
714
713
2023-10-19T15:03:05Z
Bishopj
1
wikitext
text/x-wiki
* navigation
** [https://www.bishopphillips.com BPC Home]|BPCHome
** mainpage|mainpage-description
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
616c2f0434a6a979b02d7132172affb26631069b
715
714
2023-10-19T15:06:10Z
Bishopj
1
wikitext
text/x-wiki
* navigation
** https://www.bishopphillips.com|BPCHome
** mainpage|mainpage-description
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
63e84f8e65988aeeebc9fedcd86ec31f39979ae2
716
715
2023-10-19T15:06:31Z
Bishopj
1
wikitext
text/x-wiki
* navigation
** https://www.bishopphillips.com|BPC Home
** mainpage|mainpage-description
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
9541207f2313145721f82a655646934c575a4071
717
716
2023-10-19T15:07:13Z
Bishopj
1
wikitext
text/x-wiki
* navigation
** https://www.bishopphillips.com/ |BPC Home
** mainpage|mainpage-description
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
5c114d4d324de58bd6de3157b579b6c7cf9bb5cd